repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
vllm-project/vllm | pytorch | 15,131 | [Usage]: relationship between embedding size and vocab_size | ### Your current environment
```text
The output of `python collect_env.py`
```
### How would you like to use vllm
I’ve noticed that the embedding size is always smaller than the vocab_size. Additionally, sometimes the `prompt_token_ids` are larger than the embedding size. Is there a way to map the embedding vector to each of the prompt tokens so that I can retrieve the logit of a prompt token like this:
`embeds[i, labels[i]]`?
```python
outputs = llm.encode(prompts)
print(f'vocab_size: {llm.get_tokenizer().vocab_size}')
for i in range(len(outputs)):
    labels = outputs[i].prompt_token_ids[1:]
    embeds = outputs[i].outputs.data
    print(f'{i}-th prompt_token_ids: {labels}')
    print(f'{i}-th embeddings: {embeds.shape}')
```
```log
Processed prompts: 100%|██████████| 4/4 [00:00<00:00, 55.18it/s, est. speed input: 0.00 toks/s, output: 0.00 toks/s]
vocab_size: 50254
0-th prompt_token_ids: [4007, 273, 253, 1986, 2077, 310]
0-th embeddings: torch.Size([7, 2560])
1-th prompt_token_ids: [13, 619, 1416, 310]
1-th embeddings: torch.Size([5, 2560])
2-th prompt_token_ids: [5347, 273, 6181, 310]
2-th embeddings: torch.Size([5, 2560])
3-th prompt_token_ids: [2852, 273, 14980, 310]
3-th embeddings: torch.Size([5, 2560])
```
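For what it's worth, the mapping the question asks about can be sketched without vLLM at all: the returned hidden states have shape (seq_len, hidden_size), and projecting them through the model's LM-head weight matrix of shape (vocab_size, hidden_size) yields per-position logits that can then be indexed by the label token ids. A stdlib-only sketch with tiny stand-in shapes (all names and values below are illustrative placeholders, not vLLM API):

```python
import random

random.seed(0)
hidden_size, vocab_size = 4, 16   # stand-ins for 2560 and 50254
labels = [7, 2, 9]                # stand-in for prompt_token_ids[1:]
seq_len = len(labels)

# embeds stands in for outputs[i].outputs.data, one row per position
embeds = [[random.random() for _ in range(hidden_size)] for _ in range(seq_len)]
# W stands in for the model's LM-head weight, shape (vocab_size, hidden_size)
W = [[random.random() for _ in range(hidden_size)] for _ in range(vocab_size)]

def logits_at(position):
    # h @ W.T for a single position -> one logit per vocabulary entry
    h = embeds[position]
    return [sum(hi * wi for hi, wi in zip(h, row)) for row in W]

# "embeds[i, labels[i]]" only works if embeds already holds logits;
# otherwise the projection above is needed first:
label_logits = [logits_at(i)[labels[i]] for i in range(seq_len)]
assert len(label_logits) == seq_len
```

Whether the pooled `outputs.data` already contains logits or still needs this projection depends on the model's pooling configuration, so treat the projection step as an assumption to verify.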
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | open | 2025-03-19T14:02:44Z | 2025-03-24T17:09:09Z | https://github.com/vllm-project/vllm/issues/15131 | [
"usage"
] | Happy2Git | 12 |
miLibris/flask-rest-jsonapi | sqlalchemy | 164 | Return different fields based on user permissions | Is it possible to change the fields that can be returned/updated based on the permissions of the logged in user? | open | 2019-06-24T14:45:25Z | 2019-06-24T14:45:25Z | https://github.com/miLibris/flask-rest-jsonapi/issues/164 | [] | cglace | 0 |
pyeve/eve | flask | 1,260 | Eve package should be distributed as a python wheel | it's 2019 already | closed | 2019-04-11T08:19:30Z | 2019-04-11T08:21:00Z | https://github.com/pyeve/eve/issues/1260 | [] | nicolaiarocci | 0 |
randyzwitch/streamlit-folium | streamlit | 78 | Feature Request: Don't generate new data on double click | Is there a way to disable generating a new variable when a user double-clicks to zoom? | closed | 2022-07-13T10:27:38Z | 2024-10-28T17:54:35Z | https://github.com/randyzwitch/streamlit-folium/issues/78 | [
"enhancement"
] | djouallah | 6 |
MemeMeow-Studio/MemeMeow | streamlit | 39 | Open-source API and modularization | The project currently has a closed-source API, while the open-source version uses Streamlit. I am now developing a FastAPI-based [open-source API](https://github.com/fQwQf/VVQuest/tree/dev). The problem is that, on the frontend, this is a completely different technical path from the current approach where Streamlit calls the services directly. It seems necessary to keep the Streamlit version and the API version in two separate repositories, but that also implies the backend should move into its own repository as well. Alternatively, Streamlit and FastAPI could run side by side, but I don't like that.
Therefore, perhaps the different modules should be split into separate repositories, with a GitHub organisation created to host them. | open | 2025-02-23T16:51:20Z | 2025-02-23T17:27:06Z | https://github.com/MemeMeow-Studio/MemeMeow/issues/39 | [
"discussion / 讨论"
] | fQwQf | 1 |
simple-login/app | flask | 1,209 | Add the same security measures to domains. | @nguyenkims is it possible to also add DNSSEC, CAA records, MTA-STS, and TLS-RPT to the following domains?
- simplelogin.com
- ale***.com
- 8sh****.net
- 8al***.com
- dral***.com | open | 2022-08-02T02:47:25Z | 2023-12-26T08:43:00Z | https://github.com/simple-login/app/issues/1209 | [] | c0nfigurati0n | 6 |
graphql-python/graphql-core | graphql | 102 | Let default_field_resolver check "typing.Mapping" instead of the more restrictive "dict" | # Feature requests
The current `default_field_resolver` checks against instances of `dict` specifically to decide whether to use `get` or `getattr` to access the field on the `source`. Checking against [typing.Mapping](https://docs.python.org/3.6/library/typing.html#typing.Mapping) would be more flexible, allowing, for example, a [ChainMap](https://docs.python.org/3/library/collections.html#collections.ChainMap) to be used as the underlying source. | closed | 2020-08-07T04:22:27Z | 2020-08-10T02:41:44Z | https://github.com/graphql-python/graphql-core/issues/102 | [] | jstlaurent | 4 |
huggingface/transformers | tensorflow | 36,187 | Recent Qwen2VL merge request (#35837) break compatibility with DeepSpeed | The recent merge request (#35837) works with accelerate but breaks with DeepSpeed (w/ and w/o deepspeed config)
- distributed_type: MULTI_GPU (work)
- distributed_type: DEEPSPEED (no longer works)
To be more precise the issue lies in this section: https://github.com/huggingface/transformers/blob/main/src/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py#L200
```
if position_embeddings is None:
    ...
    emb = torch.cat((rotary_pos_emb, rotary_pos_emb), dim=-1)
    cos = emb.cos().float()
    sin = emb.sin().float()
else:
    cos, sin = position_embeddings
q, k = apply_rotary_pos_emb_flashatt(q.unsqueeze(0), k.unsqueeze(0), cos, sin)
```
`cos, sin = position_embeddings` — these are not cast to float and are subject to varying dtypes depending on the DeepSpeed and mixed_precision config.
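To make the dtype mismatch concrete, here is a torch-free sketch of the proposed fix, using a stub class in place of real tensors (only the control flow is illustrated; the real change would cast the cached `cos`/`sin` the same way the on-the-fly branch does):

```python
class FakeTensor:
    """Stub standing in for a torch.Tensor; tracks only dtype."""
    def __init__(self, dtype):
        self.dtype = dtype
    def float(self):
        return FakeTensor("float32")

# Under DeepSpeed the cached embeddings may arrive in bf16:
position_embeddings = (FakeTensor("bfloat16"), FakeTensor("bfloat16"))
cos, sin = position_embeddings
# The missing cast, mirroring the `emb.cos().float()` branch quoted above:
cos, sin = cos.float(), sin.float()
assert cos.dtype == sin.dtype == "float32"
```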
This accelerate config works:
```
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: MULTI_GPU
downcast_bf16: 'no'
enable_cpu_affinity: #false
main_training_function: main
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
mixed_precision: bf16
```
This accelerate config no longer works:
```
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: DEEPSPEED
deepspeed_config:
zero_stage: 3
downcast_bf16: 'no'
enable_cpu_affinity: false
main_training_function: main
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
| closed | 2025-02-14T00:25:37Z | 2025-02-18T19:30:12Z | https://github.com/huggingface/transformers/issues/36187 | [] | ArdalanM | 3 |
huggingface/transformers | deep-learning | 35,990 | Transformers PaliGemma evaluate and compute_loss fail with tensors/device errors | ### System Info
My versions are:
```
Python Version: 3.12.7 | packaged by conda-forge | (main, Oct 4 2024, 16:05:46) [GCC 13.3.0]
Torch Version: 2.5.1+cu124
CUDA Available: True
CUDA Device Count: 2
GPU Name: NVIDIA GeForce RTX 3090
Transformers Version: 4.48.1
Tokenizers Version: 0.21.0
Accelerate Version: 1.3.0
```
### Who can help?
@ArthurZucker , @amyeroberts, @qubvel
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I'm loading a PaliGemma2 model `google/paligemma2-3b-pt-224` and trying to fine-tune using Trainer/Seq2SeqTrainer. If I add evaluation, this fails. After doing some digging, I found that this only happens if the model is in evaluate mode.
```
batch = [valid_dataset[i] for i in range(8)]
inputs = collate_fn(batch)
#generate_ids = model.generate(**inputs, max_length=286+30)
trainer.model.train()
trainer.compute_loss(model, inputs, return_outputs=False, num_items_in_batch=416)
print("works")
trainer.model.train(False)
trainer.compute_loss(model, inputs, return_outputs=False, num_items_in_batch=416)
print("fails.")
```
I've worked around it by mokey-patching compute_loss_context_manager as follows:
```
orig_context_manager = trainer.compute_loss_context_manager
class TempTrainContext(object):
    def __init__(self, trainer):
        self.trainer = trainer
        self.orig_context_manager = trainer.compute_loss_context_manager
    def __enter__(self):
        self.orig_context_inst = self.orig_context_manager()
        self.orig_context_inst.__enter__()
        self.training_enter = self.trainer.model.training
        self.trainer.model.train()
    def __exit__(self, type, value, traceback):
        self.trainer.model.train(self.training_enter)
        self.orig_context_inst.__exit__(type, value, traceback)
    def __call__(self):
        return self
trainer.compute_loss_context_manager = TempTrainContext(trainer)
```
(Bonus question: Is this safe to do, or will I train on the test set?)
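As an aside, the monkey-patch can be written more compactly as a `contextlib` context manager; a generic sketch of the same save-flip-restore pattern, with a dummy stand-in for the model (not Trainer-specific code):

```python
import contextlib

class DummyModel:
    """Stand-in with the same train()/training interface as nn.Module."""
    training = False
    def train(self, mode=True):
        self.training = mode

@contextlib.contextmanager
def force_train_mode(model):
    prev = model.training
    model.train(True)
    try:
        yield model
    finally:
        model.train(prev)  # restore whatever mode we entered with

m = DummyModel()
with force_train_mode(m):
    assert m.training is True
assert m.training is False
```

Regarding the bonus question: a forward pass alone performs no gradient update, so this does not train on the test set by itself, but train mode changes the behaviour of modules such as dropout (and batch norm updates its running statistics in train mode), so evaluation results can differ; that caveat is worth checking.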
Error:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[13], line 8
6 print("works")
7 trainer.model.train(False)
----> 8 trainer.compute_loss(model, inputs, return_outputs=False, num_items_in_batch=416)
9 print("fails.")
12 orig_context_manager = trainer.compute_loss_context_manager
File ~/local/miniconda3/envs/paligemma/lib/python3.12/site-packages/transformers/trainer.py:3731, in Trainer.compute_loss(self, model, inputs, return_outputs, num_items_in_batch)
3729 loss_kwargs["num_items_in_batch"] = num_items_in_batch
3730 inputs = {**inputs, **loss_kwargs}
-> 3731 outputs = model(**inputs)
3732 # Save past state if it exists
3733 # TODO: this needs to be fixed and made cleaner later.
3734 if self.args.past_index >= 0:
File ~/local/miniconda3/envs/paligemma/lib/python3.12/site-packages/torch/nn/modules/module.py:1736, in Module._wrapped_call_impl(self, *args, **kwargs)
1734 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1735 else:
-> 1736 return self._call_impl(*args, **kwargs)
File ~/local/miniconda3/envs/paligemma/lib/python3.12/site-packages/torch/nn/modules/module.py:1747, in Module._call_impl(self, *args, **kwargs)
1742 # If we don't have any hooks, we want to skip the rest of the logic in
1743 # this function, and just call forward.
1744 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1745 or _global_backward_pre_hooks or _global_backward_hooks
1746 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1747 return forward_call(*args, **kwargs)
1749 result = None
1750 called_always_called_hooks = set()
File ~/local/miniconda3/envs/paligemma/lib/python3.12/site-packages/accelerate/hooks.py:170, in add_hook_to_module.<locals>.new_forward(module, *args, **kwargs)
168 output = module._old_forward(*args, **kwargs)
169 else:
--> 170 output = module._old_forward(*args, **kwargs)
171 return module._hf_hook.post_forward(module, output)
File ~/local/miniconda3/envs/paligemma/lib/python3.12/site-packages/transformers/models/paligemma/modeling_paligemma.py:530, in PaliGemmaForConditionalGeneration.forward(self, input_ids, pixel_values, attention_mask, position_ids, past_key_values, token_type_ids, cache_position, inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict, num_logits_to_keep)
525 labels = torch.where(input_ids == self.pad_token_id, self.config.ignore_index, labels)
527 causal_mask = self._update_causal_mask(
528 attention_mask, token_type_ids, past_key_values, cache_position, input_ids, inputs_embeds, is_training
529 )
--> 530 outputs = self.language_model(
531 attention_mask=causal_mask,
532 position_ids=position_ids,
533 past_key_values=past_key_values,
534 inputs_embeds=inputs_embeds,
535 use_cache=use_cache,
536 output_attentions=output_attentions,
537 output_hidden_states=output_hidden_states,
538 return_dict=return_dict,
539 cache_position=cache_position,
540 num_logits_to_keep=num_logits_to_keep,
541 )
543 logits = outputs.logits
544 loss = None
File ~/local/miniconda3/envs/paligemma/lib/python3.12/site-packages/torch/nn/modules/module.py:1736, in Module._wrapped_call_impl(self, *args, **kwargs)
1734 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1735 else:
-> 1736 return self._call_impl(*args, **kwargs)
File ~/local/miniconda3/envs/paligemma/lib/python3.12/site-packages/torch/nn/modules/module.py:1747, in Module._call_impl(self, *args, **kwargs)
1742 # If we don't have any hooks, we want to skip the rest of the logic in
1743 # this function, and just call forward.
1744 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1745 or _global_backward_pre_hooks or _global_backward_hooks
1746 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1747 return forward_call(*args, **kwargs)
1749 result = None
1750 called_always_called_hooks = set()
File ~/local/miniconda3/envs/paligemma/lib/python3.12/site-packages/transformers/models/gemma2/modeling_gemma2.py:842, in Gemma2ForCausalLM.forward(self, input_ids, attention_mask, position_ids, past_key_values, inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict, cache_position, num_logits_to_keep, **loss_kwargs)
840 return_dict = return_dict if return_dict is not None else self.config.use_return_dict
841 # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
--> 842 outputs = self.model(
843 input_ids=input_ids,
844 attention_mask=attention_mask,
845 position_ids=position_ids,
846 past_key_values=past_key_values,
847 inputs_embeds=inputs_embeds,
848 use_cache=use_cache,
849 output_attentions=output_attentions,
850 output_hidden_states=output_hidden_states,
851 return_dict=return_dict,
852 cache_position=cache_position,
853 )
855 hidden_states = outputs[0]
856 # Only compute necessary logits, and do not upcast them to float if we are not computing the loss
File ~/local/miniconda3/envs/paligemma/lib/python3.12/site-packages/torch/nn/modules/module.py:1736, in Module._wrapped_call_impl(self, *args, **kwargs)
1734 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1735 else:
-> 1736 return self._call_impl(*args, **kwargs)
File ~/local/miniconda3/envs/paligemma/lib/python3.12/site-packages/torch/nn/modules/module.py:1747, in Module._call_impl(self, *args, **kwargs)
1742 # If we don't have any hooks, we want to skip the rest of the logic in
1743 # this function, and just call forward.
1744 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1745 or _global_backward_pre_hooks or _global_backward_hooks
1746 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1747 return forward_call(*args, **kwargs)
1749 result = None
1750 called_always_called_hooks = set()
File ~/local/miniconda3/envs/paligemma/lib/python3.12/site-packages/transformers/models/gemma2/modeling_gemma2.py:629, in Gemma2Model.forward(self, input_ids, attention_mask, position_ids, past_key_values, inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict, cache_position, **flash_attn_kwargs)
617 layer_outputs = self._gradient_checkpointing_func(
618 decoder_layer.__call__,
619 hidden_states,
(...)
626 cache_position,
627 )
628 else:
--> 629 layer_outputs = decoder_layer(
630 hidden_states,
631 position_embeddings=position_embeddings,
632 attention_mask=causal_mask,
633 position_ids=position_ids,
634 past_key_value=past_key_values,
635 output_attentions=output_attentions,
636 use_cache=use_cache,
637 cache_position=cache_position,
638 **flash_attn_kwargs,
639 )
641 hidden_states = layer_outputs[0]
643 if output_attentions:
File ~/local/miniconda3/envs/paligemma/lib/python3.12/site-packages/torch/nn/modules/module.py:1736, in Module._wrapped_call_impl(self, *args, **kwargs)
1734 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1735 else:
-> 1736 return self._call_impl(*args, **kwargs)
File ~/local/miniconda3/envs/paligemma/lib/python3.12/site-packages/torch/nn/modules/module.py:1747, in Module._call_impl(self, *args, **kwargs)
1742 # If we don't have any hooks, we want to skip the rest of the logic in
1743 # this function, and just call forward.
1744 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1745 or _global_backward_pre_hooks or _global_backward_hooks
1746 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1747 return forward_call(*args, **kwargs)
1749 result = None
1750 called_always_called_hooks = set()
File ~/local/miniconda3/envs/paligemma/lib/python3.12/site-packages/accelerate/hooks.py:170, in add_hook_to_module.<locals>.new_forward(module, *args, **kwargs)
168 output = module._old_forward(*args, **kwargs)
169 else:
--> 170 output = module._old_forward(*args, **kwargs)
171 return module._hf_hook.post_forward(module, output)
File ~/local/miniconda3/envs/paligemma/lib/python3.12/site-packages/transformers/models/gemma2/modeling_gemma2.py:299, in Gemma2DecoderLayer.forward(self, hidden_states, position_embeddings, attention_mask, position_ids, past_key_value, output_attentions, use_cache, cache_position)
296 hidden_states = self.input_layernorm(hidden_states)
298 # Self Attention
--> 299 hidden_states, self_attn_weights = self.self_attn(
300 hidden_states=hidden_states,
301 position_embeddings=position_embeddings,
302 attention_mask=attention_mask,
303 position_ids=position_ids,
304 past_key_value=past_key_value,
305 output_attentions=output_attentions,
306 use_cache=use_cache,
307 cache_position=cache_position,
308 )
309 hidden_states = self.post_attention_layernorm(hidden_states)
310 hidden_states = residual + hidden_states
File ~/local/miniconda3/envs/paligemma/lib/python3.12/site-packages/torch/nn/modules/module.py:1736, in Module._wrapped_call_impl(self, *args, **kwargs)
1734 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1735 else:
-> 1736 return self._call_impl(*args, **kwargs)
File ~/local/miniconda3/envs/paligemma/lib/python3.12/site-packages/torch/nn/modules/module.py:1747, in Module._call_impl(self, *args, **kwargs)
1742 # If we don't have any hooks, we want to skip the rest of the logic in
1743 # this function, and just call forward.
1744 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1745 or _global_backward_pre_hooks or _global_backward_hooks
1746 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1747 return forward_call(*args, **kwargs)
1749 result = None
1750 called_always_called_hooks = set()
File ~/local/miniconda3/envs/paligemma/lib/python3.12/site-packages/accelerate/hooks.py:170, in add_hook_to_module.<locals>.new_forward(module, *args, **kwargs)
168 output = module._old_forward(*args, **kwargs)
169 else:
--> 170 output = module._old_forward(*args, **kwargs)
171 return module._hf_hook.post_forward(module, output)
File ~/local/miniconda3/envs/paligemma/lib/python3.12/site-packages/transformers/models/gemma2/modeling_gemma2.py:224, in Gemma2Attention.forward(self, hidden_states, position_embeddings, attention_mask, past_key_value, cache_position, **kwargs)
221 if past_key_value is not None:
222 # sin and cos are specific to RoPE models; cache_position needed for the static cache
223 cache_kwargs = {"sin": sin, "cos": cos, "cache_position": cache_position}
--> 224 key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx, cache_kwargs)
226 attention_interface: Callable = eager_attention_forward
227 if self.config._attn_implementation != "eager":
File ~/local/miniconda3/envs/paligemma/lib/python3.12/site-packages/transformers/cache_utils.py:1717, in HybridCache.update(self, key_states, value_states, layer_idx, cache_kwargs)
1714 else:
1715 update_fn = self._static_update
-> 1717 return update_fn(
1718 cache_position,
1719 layer_idx,
1720 key_states,
1721 value_states,
1722 k_out,
1723 v_out,
1724 k_out.shape[2],
1725 )
File ~/local/miniconda3/envs/paligemma/lib/python3.12/site-packages/transformers/cache_utils.py:1694, in HybridCache._static_update(self, cache_position, layer_idx, key_states, value_states, k_out, v_out, max_cache_len)
1693 def _static_update(self, cache_position, layer_idx, key_states, value_states, k_out, v_out, max_cache_len):
-> 1694 k_out[:, :, cache_position] = key_states
1695 v_out[:, :, cache_position] = value_states
1697 self.key_cache[layer_idx] = k_out
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1!
```
Error of Evaluator (bottom half of file): https://gist.github.com/BlGene/607c7bee450e03835aa2bf0d2fd2959a
### Expected behavior
Training runs with evaluation enabled. | closed | 2025-01-31T12:48:43Z | 2025-02-13T15:24:29Z | https://github.com/huggingface/transformers/issues/35990 | [
"bug",
"Cache"
] | BlGene | 13 |
huggingface/datasets | tensorflow | 7,102 | Slow iteration speeds when using IterableDataset.shuffle with load_dataset(data_files=..., streaming=True) | ### Describe the bug
When I load a dataset from a number of arrow files, as in:
```
random_dataset = load_dataset(
"arrow",
data_files={split: shard_filepaths},
streaming=True,
split=split,
)
```
I'm able to get fast iteration speeds when iterating over the dataset without shuffling.
When I shuffle the dataset, the iteration speed is reduced by ~1000x.
It's very possible the way I'm loading dataset shards is not appropriate; if so please advise!
Thanks for the help
### Steps to reproduce the bug
Here's full code to reproduce the issue:
- Generate a random dataset
- Create shards of data independently using Dataset.save_to_disk()
- The below will generate 16 shards (arrow files), of 512 examples each
```
import time
from pathlib import Path
from multiprocessing import Pool, cpu_count

import torch
from datasets import Dataset, load_dataset

split = "train"
split_save_dir = "/tmp/random_split"

def generate_random_example():
    return {
        'inputs': torch.randn(128).tolist(),
        'indices': torch.randint(0, 10000, (2, 20000)).tolist(),
        'values': torch.randn(20000).tolist(),
    }

def generate_shard_dataset(examples_per_shard: int = 512):
    dataset_dict = {
        'inputs': [],
        'indices': [],
        'values': []
    }
    for _ in range(examples_per_shard):
        example = generate_random_example()
        dataset_dict['inputs'].append(example['inputs'])
        dataset_dict['indices'].append(example['indices'])
        dataset_dict['values'].append(example['values'])
    return Dataset.from_dict(dataset_dict)

def save_shard(shard_idx, save_dir, examples_per_shard):
    shard_dataset = generate_shard_dataset(examples_per_shard)
    shard_write_path = Path(save_dir) / f"shard_{shard_idx}"
    shard_dataset.save_to_disk(shard_write_path)
    return str(Path(shard_write_path) / "data-00000-of-00001.arrow")

def generate_split_shards(save_dir, num_shards: int = 16, examples_per_shard: int = 512):
    with Pool(cpu_count()) as pool:
        args = [(m, save_dir, examples_per_shard) for m in range(num_shards)]
        shard_filepaths = pool.starmap(save_shard, args)
    return shard_filepaths

shard_filepaths = generate_split_shards(split_save_dir)
```
Load the dataset as IterableDataset:
```
random_dataset = load_dataset(
"arrow",
data_files={split: shard_filepaths},
streaming=True,
split=split,
)
random_dataset = random_dataset.with_format("numpy")
```
Observe the iterations/second when iterating over the dataset directly, and applying shuffling before iterating:
Without shuffling, this gives ~1500 iterations/second
```
start_time = time.time()
for count, item in enumerate(random_dataset):
    if count > 0 and count % 100 == 0:
        elapsed_time = time.time() - start_time
        iterations_per_second = count / elapsed_time
        print(f"Processed {count} items at an average of {iterations_per_second:.2f} iterations/second")
```
```
Processed 100 items at an average of 705.74 iterations/second
Processed 200 items at an average of 1169.68 iterations/second
Processed 300 items at an average of 1497.97 iterations/second
Processed 400 items at an average of 1739.62 iterations/second
Processed 500 items at an average of 1931.11 iterations/second
```
When shuffling, this gives ~3 iterations/second:
```
random_dataset = random_dataset.shuffle(buffer_size=100,seed=42)
start_time = time.time()
for count, item in enumerate(random_dataset):
    if count > 0 and count % 100 == 0:
        elapsed_time = time.time() - start_time
        iterations_per_second = count / elapsed_time
        print(f"Processed {count} items at an average of {iterations_per_second:.2f} iterations/second")
```
```
Processed 100 items at an average of 3.75 iterations/second
Processed 200 items at an average of 3.93 iterations/second
```
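For context, the buffer shuffling that `IterableDataset.shuffle` performs can be approximated by this stand-alone sketch (a simplified model of the documented buffer behaviour, not the library's actual code); note that the buffer itself adds very little per-item work:

```python
import random

def buffered_shuffle(stream, buffer_size, seed=42):
    """Yield items from `stream` in buffer-shuffled order."""
    rng = random.Random(seed)
    it = iter(stream)
    buf = []
    for item in it:
        buf.append(item)
        if len(buf) == buffer_size:
            break
    for item in it:
        idx = rng.randrange(len(buf))
        yield buf[idx]
        buf[idx] = item       # replace the yielded slot with a fresh item
    rng.shuffle(buf)          # drain whatever remains in the buffer
    yield from buf

out = list(buffered_shuffle(range(1000), buffer_size=100))
assert sorted(out) == list(range(1000))
assert out != list(range(1000))   # actually shuffled
```

Since a buffer of size 100 is cheap on its own, the slowdown is more likely to come from how the underlying shards are read after shuffling, which is worth profiling separately.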
### Expected behavior
Iterations per second should be barely affected by shuffling, especially with a small buffer size
### Environment info
Datasets version: 2.21.0
Python 3.10
Ubuntu 22.04 | open | 2024-08-14T21:44:44Z | 2024-08-15T16:17:31Z | https://github.com/huggingface/datasets/issues/7102 | [] | lajd | 2 |
voila-dashboards/voila | jupyter | 1,201 | Excel file downloadable in Jupyter notebook but not in voila | Hi, I created an ipywidget button that downloads an Excel file with no problem in Jupyter Notebook. However, when I run it in Voilà, it shows a "Failed - Forbidden" error. Please advise.
The code below can be run as it is and should be able to reproduce error I am facing.
```python
import ipywidgets
import numpy as np
import pandas as pd
from IPython.display import HTML, display
from ipywidgets import widgets
from typing import Callable
import pandas.io.formats.style
class DownloadButtonExcel(ipywidgets.Button):
"""
Download button with dynamic content
The content is generated using a callback when the button is clicked.
The code is based on ollik1's answer at https://stackoverflow.com/questions/61708701/how-to-download-a-file-using-ipywidget-button/68683463#68683463
"""
def __init__(self, filename: str, contents: Callable[[], pandas.io.formats.style.Styler], **kwargs):
super(DownloadButtonExcel, self).__init__(**kwargs)
self.filename = filename
self.contents = contents
self.on_click(self.__on_click)
self.output = widgets.Output()
display(self.output)
def __on_click(self, b):
df = self.contents()
df.to_excel(self.filename, engine='openpyxl')
digest = pd.util.hash_pandas_object(df.data).sum() # bypass browser cache
id = f"dl_{digest}"
with self.output:
display(
HTML(
f"""
<html>
<body>
<a id="{id}" download ="{self.filename}" href="{self.filename}" download>
</a>
<script>
(function download() {{
document.getElementById('{id}').click();
}})()
</script>
</body>
</html>
"""
)
)
import pandas as pd
import numpy as np
import matplotlib as mpl
df = pd.DataFrame([[38.0, 2.0, 18.0, 22.0, 21, np.nan], [19, 439, 6, 452, 226, 232]],
                  index=pd.Index(['Tumour (Positive)', 'Non-Tumour (Negative)'], name='Actual Label:'),
                  columns=pd.MultiIndex.from_product([['Decision Tree', 'Regression', 'Random'], ['Tumour', 'Non-Tumour']], names=['Model:', 'Predicted:']))
df_style = df.style.format(precision=2).background_gradient().hide(axis='index')
download_button_excel = DownloadButtonExcel(
    filename="Test.xlsx",
    contents=lambda: df_style,
    description="Download",
    style={"button_color": "transparent"},
)
```
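One workaround worth trying (an assumption on my part, not verified against this exact setup): Voilà typically refuses to serve arbitrary files from the notebook directory, which matches the Forbidden error, whereas a base64 `data:` URI embeds the payload in the link itself so no file has to be served at all. A sketch of building such an href:

```python
import base64
import io

# Stand-in payload; in the widget above this would be the bytes of the
# Excel file (e.g. written to an io.BytesIO buffer instead of disk).
buffer = io.BytesIO()
buffer.write(b"fake xlsx bytes")
b64 = base64.b64encode(buffer.getvalue()).decode()

mime = "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"
href = f"data:{mime};base64,{b64}"
# The <a> tag in the HTML snippet would then use this href instead of a
# file path: <a download="Test.xlsx" href="{href}">
assert base64.b64decode(b64) == b"fake xlsx bytes"
```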
<img width="334" alt="image" src="https://user-images.githubusercontent.com/8492535/190747838-90c0b973-b415-448e-aff5-4e609ae9c9b1.png">
| closed | 2022-09-16T21:11:40Z | 2022-09-19T19:10:47Z | https://github.com/voila-dashboards/voila/issues/1201 | [
"bug"
] | curieshicy | 1 |
feder-cr/Jobs_Applier_AI_Agent_AIHawk | automation | 393 | Other language option - Question before start | Hi there
Before I get into setting this up, I have a question. The README says:
_LinkedIn language To ensure the bot works, your LinkedIn language must be set to English._
Does that mean this only works for generating applications in English, or is this just about the language setting in LinkedIn? Is it possible to generate everything in other languages? German, in my case.
Thanks a lot.
| closed | 2024-09-16T14:05:19Z | 2024-09-25T14:12:57Z | https://github.com/feder-cr/Jobs_Applier_AI_Agent_AIHawk/issues/393 | [] | supaeasy | 2 |
PrefectHQ/prefect | data-science | 17,039 | gui shows task run time as 4m 60s instead of 5m | ### Bug summary
gui shows task run time as 4m 60s instead of 5m
### Version info
```Text
Version: 3.1.15
API version: 0.8.4
Python version: 3.12.3
Git commit: 3ac3d548
Built: Thu, Jan 30, 2025 11:31 AM
OS/Arch: linux/x86_64
Profile: local
Server type: server
Pydantic version: 2.9.2
```
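The report doesn't include the UI code, but "4m 60s" is the classic symptom of rounding the seconds remainder after splitting out minutes; a minimal sketch of that bug class and its fix (hypothetical code, not Prefect's actual formatter):

```python
def format_naive(total_seconds):
    # Split first, round the remainder: 299.7s -> 4m + round(59.7) = "4m 60s"
    m = int(total_seconds // 60)
    s = round(total_seconds - m * 60)
    return f"{m}m {s}s"

def format_fixed(total_seconds):
    # Round first, then split, so the remainder can never reach 60
    total = round(total_seconds)
    return f"{total // 60}m {total % 60}s"

assert format_naive(299.7) == "4m 60s"   # the glitch from the report
assert format_fixed(299.7) == "5m 0s"
```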
### Additional context
_No response_ | open | 2025-02-07T08:50:22Z | 2025-02-11T20:38:09Z | https://github.com/PrefectHQ/prefect/issues/17039 | [
"bug",
"ui"
] | ramelito | 2 |
ultralytics/ultralytics | deep-learning | 19,697 | YOLOv8n validation on DOTA v2.0 dataset | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hello! When I use the YOLOv8n model to save TXT results (predictions_merged_txt) on the DOTA v2.0 validation and test sets, I frequently encounter out-of-memory (OOM) errors. How can I resolve this issue?
### Additional
_No response_ | open | 2025-03-14T09:51:52Z | 2025-03-14T20:11:45Z | https://github.com/ultralytics/ultralytics/issues/19697 | [
"question",
"detect"
] | HornGate | 2 |
proplot-dev/proplot | matplotlib | 13 | Metric string for subplot width doesn't work | With the most recent commit, trying to set a string width in metric (e.g., "10cm") does not work.
<img width="1227" alt="Screen Shot 2019-03-25 at 11 14 06 PM" src="https://user-images.githubusercontent.com/8881170/54973030-d2430c00-4f53-11e9-854d-c7c5d5e63ac6.png"> | closed | 2019-03-26T05:16:05Z | 2019-09-14T21:22:54Z | https://github.com/proplot-dev/proplot/issues/13 | [
"bug"
] | bradyrx | 2 |
public-apis/public-apis | api | 4,172 | https://github.com/username | Javascript
| open | 2025-03-11T02:00:30Z | 2025-03-17T23:33:10Z | https://github.com/public-apis/public-apis/issues/4172 | [] | yezus122 | 1 |
plotly/plotly.py | plotly | 4,159 | Secondary y issues when dumping and loading a Figure with subplots | In my web app I'm trying to pass a chart to a fetch JS function that allows you to download a report with that same chart.
The problem is that when, in the other route, I load the JSON object and try to reassemble the chart, the secondary axes stop working correctly: I no longer see the axis title and ticks.
Briefly, what I do is the following:
```
chart.show()
# Dump and load:
plotly.io.from_json(plotly.io.to_json(chart)).show()
```
Before Load
https://i.imgur.com/XyA20H3.png
After Load
https://i.imgur.com/PP7IK1C.png
| closed | 2023-04-12T15:46:29Z | 2024-07-11T14:17:09Z | https://github.com/plotly/plotly.py/issues/4159 | [] | barrospt | 1 |
agronholm/anyio | asyncio | 625 | `anyio.open_process` should accept `subprocess.Popen` keywords | ### Things to check first
- [X] I have searched the existing issues and didn't find my feature already requested there
### Feature description
Both trio and asyncio pass any arguments to (their equivalent of) `open_process` that they don't understand on to `subprocess.Popen`.
anyio should do that too.
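For reference, these are the kinds of `subprocess.Popen` keywords the request is about; a small stdlib-only demonstration (`executable=` and `pass_fds=` are real Popen parameters, and the anyio forwarding of such keywords is the part being requested):

```python
import subprocess
import sys

# Popen accepts keywords such as `executable=` (run a different binary
# than argv[0] suggests) and `pass_fds=` (inherit extra file descriptors).
# The feature request is for anyio.open_process to forward such keywords.
proc = subprocess.Popen(
    [sys.executable, "-c", "print('ok')"],
    stdout=subprocess.PIPE,
)
out, _ = proc.communicate()
assert out.strip() == b"ok"
```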
### Use case
Sometimes you need to start a process with a "special" `argv[0]`. Or pass additional file descriptors to the child process. | closed | 2023-10-20T02:37:42Z | 2023-10-20T20:50:11Z | https://github.com/agronholm/anyio/issues/625 | [
"enhancement"
] | smurfix | 2 |
opengeos/leafmap | plotly | 695 | ModuleNotFoundError | <!-- Please search existing issues to avoid creating duplicates. -->
### Description
When I use colab for NASA data visualisation I find that the gdf.explore() function doesn't work!
https://leafmap.org/notebooks/88_nasa_earth_data/?h=nasa
### What I Did
```
error
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/geopandas/explore.py](https://localhost:8080/#) in _explore(df, column, cmap, color, m, tiles, attr, tooltip, popup, highlight, categorical, legend, scheme, k, vmin, vmax, width, height, categories, classification_kwds, control_scale, marker_type, marker_kwds, style_kwds, highlight_kwds, missing_kwds, tooltip_kwds, popup_kwds, legend_kwds, map_kwds, **kwargs)
286 import matplotlib.pyplot as plt
--> 287 from mapclassify import classify
288
ModuleNotFoundError: No module named 'mapclassify'
During handling of the above exception, another exception occurred:
ImportError Traceback (most recent call last)
2 frames
[/usr/local/lib/python3.10/dist-packages/geopandas/explore.py](https://localhost:8080/#) in _explore(df, column, cmap, color, m, tiles, attr, tooltip, popup, highlight, categorical, legend, scheme, k, vmin, vmax, width, height, categories, classification_kwds, control_scale, marker_type, marker_kwds, style_kwds, highlight_kwds, missing_kwds, tooltip_kwds, popup_kwds, legend_kwds, map_kwds, **kwargs)
295
296 except (ImportError, ModuleNotFoundError):
--> 297 raise ImportError(
298 "The 'folium', 'matplotlib' and 'mapclassify' packages are required for "
299 "'explore()'. You can install them using "
ImportError: The 'folium', 'matplotlib' and 'mapclassify' packages are required for 'explore()'. You can install them using 'conda install -c conda-forge folium matplotlib mapclassify' or 'pip install folium matplotlib mapclassify'.

```
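The traceback itself names the fix: install `mapclassify` (e.g. `!pip install mapclassify` in the Colab notebook). For illustration, here is a stdlib-only sketch of the kind of optional-dependency check geopandas' `explore()` performs for folium/matplotlib/mapclassify — `check_optional_deps` and the package names below are hypothetical, not geopandas' real helper:

```python
import importlib.util

def check_optional_deps(packages):
    """Raise a helpful ImportError listing whichever optional packages
    cannot be imported, with the pip command to install them."""
    missing = [pkg for pkg in packages if importlib.util.find_spec(pkg) is None]
    if missing:
        raise ImportError(
            "Missing optional packages: " + ", ".join(missing)
            + ". Install them with: pip install " + " ".join(missing)
        )

# Demonstration with one present and one deliberately absent package:
try:
    check_optional_deps(["json", "definitely_not_installed_pkg"])
except ImportError as exc:
    print(exc)
```

Running such a check at the top of a notebook surfaces the missing package before the long profiling/plotting step fails.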
| closed | 2024-02-28T03:57:50Z | 2024-03-01T13:20:49Z | https://github.com/opengeos/leafmap/issues/695 | [
"bug"
] | xingguangYan | 2 |
ydataai/ydata-profiling | data-science | 1,197 | Does pandas-profiling work in Jupyter Notebooks on AWS? | Does pandas-profiling work in Jupyter Notebooks on AWS? I understand there are a lot of configuration differences that can lead to issues but whenever I try to produce a profiling report, I get the following errors when I run:
```
profile = ProfileReport(df, 'myreport')
profile.to_file('s3://myfolder/myreport.html')
```
```
Summarize dataset: 97%|█████████▋| 427/438 [01:14<00:01, 8.03it/s, Calculate auto correlation] /home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/multimethod/__init__.py:315: FutureWarning: In a future version, `df.iloc[:, i] = newvals` will attempt to set the values inplace instead of always setting a new array. To retain the old behavior, use either `df[df.columns[i]] = newvals` or, if columns are non-unique, `df.isetitem(i, newvals)`
return func(*args, **kwargs)
/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/scipy/stats/_stats_py.py:112: RuntimeWarning: The input array could not be properly checked for nan values. nan values will be ignored.
warnings.warn("The input array could not be properly "
[... the same RuntimeWarning from scipy/stats/_stats_py.py:112 ("The input array could not be properly checked for nan values. nan values will be ignored.") repeated ~31 more times ...]
/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/scipy/stats/_stats_py.py:4881: ConstantInputWarning: An input array is constant; the correlation coefficient is not defined.
warnings.warn(stats.ConstantInputWarning(warn_msg))
/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/pandas_profiling/model/correlations.py:67: UserWarning: There was an attempt to calculate the auto correlation, but this failed.
To hide this warning, disable the calculation
(using `df.profile_report(correlations={"auto": {"calculate": False}})`
If this is problematic for your use case, please report this as an issue:
https://github.com/ydataai/pandas-profiling/issues
(include the error message: 'No data; `observed` has size 0.')
warnings.warn(
Summarize dataset: 98%|█████████▊| 428/438 [28:20<32:48, 196.80s/it, Calculate spearman correlation]/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/multimethod/__init__.py:315: FutureWarning: The default value of numeric_only in DataFrame.corr is deprecated. In a future version, it will default to False. Select only valid columns or specify the value of numeric_only to silence this warning.
return func(*args, **kwargs)
Summarize dataset: 98%|█████████▊| 430/438 [30:55<21:07, 158.47s/it, Calculate kendall correlation] /home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/scipy/stats/_stats_py.py:5218: RuntimeWarning: overflow encountered in long_scalars
(2 * xtie * ytie) / m + x0 * y0 / (9 * m * (size - 2)))
/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/scipy/stats/_stats_py.py:5219: RuntimeWarning: invalid value encountered in sqrt
z = con_minus_dis / np.sqrt(var)
Summarize dataset: 99%|█████████▊| 432/438 [45:40<00:38, 6.34s/it, Calculate phi_k correlation]
---------------------------------------------------------------------------
_RemoteTraceback Traceback (most recent call last)
_RemoteTraceback:
"""
Traceback (most recent call last):
File "/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/joblib/externals/loky/backend/queues.py", line 125, in _feed
obj_ = dumps(obj, reducers=reducers)
File "/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/joblib/externals/loky/backend/reduction.py", line 211, in dumps
dump(obj, buf, reducers=reducers, protocol=protocol)
File "/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/joblib/externals/loky/backend/reduction.py", line 204, in dump
_LokyPickler(file, reducers=reducers, protocol=protocol).dump(obj)
File "/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/joblib/externals/cloudpickle/cloudpickle_fast.py", line 632, in dump
return Pickler.dump(self, obj)
File "/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/joblib/_memmapping_reducer.py", line 446, in __call__
for dumped_filename in dump(a, filename):
File "/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/joblib/numpy_pickle.py", line 553, in dump
NumpyPickler(f, protocol=protocol).dump(value)
File "/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/pickle.py", line 487, in dump
self.save(obj)
File "/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/joblib/numpy_pickle.py", line 352, in save
wrapper.write_array(obj, self)
File "/home/ec2-user/SageMaker/.envs/mykernel/lib/python3.9/site-packages/joblib/numpy_pickle.py", line 134, in write_array
pickler.file_handle.write(chunk.tobytes('C'))
OSError: [Errno 28] No space left on device
"""
The above exception was the direct cause of the following exception:
PicklingError Traceback (most recent call last)
<ipython-input-9-34649000e9e9> in <module>
1 profile = ProfileReport(df_perf_18, title="MyReport")
----> 2 profile.to_file(f"s3://sf-puas-prod-use1-pc/fire/research/home_telematics/adt/analysis/MyReport.html")
~/SageMaker/.envs/mykernel/lib/python3.9/site-packages/typeguard/__init__.py in wrapper(*args, **kwargs)
1031 memo = _CallMemo(python_func, _localns, args=args, kwargs=kwargs)
1032 check_argument_types(memo)
-> 1033 retval = func(*args, **kwargs)
1034 try:
1035 check_return_type(retval, memo)
~/SageMaker/.envs/mykernel/lib/python3.9/site-packages/pandas_profiling/profile_report.py in to_file(self, output_file, silent)
307 create_html_assets(self.config, output_file)
308
--> 309 data = self.to_html()
310
311 if output_file.suffix != ".html":
~/SageMaker/.envs/mykernel/lib/python3.9/site-packages/typeguard/__init__.py in wrapper(*args, **kwargs)
1031 memo = _CallMemo(python_func, _localns, args=args, kwargs=kwargs)
1032 check_argument_types(memo)
-> 1033 retval = func(*args, **kwargs)
1034 try:
1035 check_return_type(retval, memo)
~/SageMaker/.envs/mykernel/lib/python3.9/site-packages/pandas_profiling/profile_report.py in to_html(self)
418
419 """
--> 420 return self.html
421
422 def to_json(self) -> str:
~/SageMaker/.envs/mykernel/lib/python3.9/site-packages/typeguard/__init__.py in wrapper(*args, **kwargs)
1031 memo = _CallMemo(python_func, _localns, args=args, kwargs=kwargs)
1032 check_argument_types(memo)
-> 1033 retval = func(*args, **kwargs)
1034 try:
1035 check_return_type(retval, memo)
~/SageMaker/.envs/mykernel/lib/python3.9/site-packages/pandas_profiling/profile_report.py in html(self)
229 def html(self) -> str:
230 if self._html is None:
--> 231 self._html = self._render_html()
232 return self._html
233
~/SageMaker/.envs/mykernel/lib/python3.9/site-packages/typeguard/__init__.py in wrapper(*args, **kwargs)
1031 memo = _CallMemo(python_func, _localns, args=args, kwargs=kwargs)
1032 check_argument_types(memo)
-> 1033 retval = func(*args, **kwargs)
1034 try:
1035 check_return_type(retval, memo)
~/SageMaker/.envs/mykernel/lib/python3.9/site-packages/pandas_profiling/profile_report.py in _render_html(self)
337 from pandas_profiling.report.presentation.flavours import HTMLReport
338
--> 339 report = self.report
340
341 with tqdm(
~/SageMaker/.envs/mykernel/lib/python3.9/site-packages/typeguard/__init__.py in wrapper(*args, **kwargs)
1031 memo = _CallMemo(python_func, _localns, args=args, kwargs=kwargs)
1032 check_argument_types(memo)
-> 1033 retval = func(*args, **kwargs)
1034 try:
1035 check_return_type(retval, memo)
~/SageMaker/.envs/mykernel/lib/python3.9/site-packages/pandas_profiling/profile_report.py in report(self)
223 def report(self) -> Root:
224 if self._report is None:
--> 225 self._report = get_report_structure(self.config, self.description_set)
226 return self._report
227
~/SageMaker/.envs/mykernel/lib/python3.9/site-packages/typeguard/__init__.py in wrapper(*args, **kwargs)
1031 memo = _CallMemo(python_func, _localns, args=args, kwargs=kwargs)
1032 check_argument_types(memo)
-> 1033 retval = func(*args, **kwargs)
1034 try:
1035 check_return_type(retval, memo)
~/SageMaker/.envs/mykernel/lib/python3.9/site-packages/pandas_profiling/profile_report.py in description_set(self)
205 def description_set(self) -> Dict[str, Any]:
206 if self._description_set is None:
--> 207 self._description_set = describe_df(
208 self.config,
209 self.df,
~/SageMaker/.envs/mykernel/lib/python3.9/site-packages/pandas_profiling/model/describe.py in describe(config, df, summarizer, typeset, sample)
93 pbar.total += len(correlation_names)
94
---> 95 correlations = {
96 correlation_name: progress(
97 calculate_correlation, pbar, f"Calculate {correlation_name} correlation"
~/SageMaker/.envs/mykernel/lib/python3.9/site-packages/pandas_profiling/model/describe.py in <dictcomp>(.0)
94
95 correlations = {
---> 96 correlation_name: progress(
97 calculate_correlation, pbar, f"Calculate {correlation_name} correlation"
98 )(config, df, correlation_name, series_description)
~/SageMaker/.envs/mykernel/lib/python3.9/site-packages/pandas_profiling/utils/progress_bar.py in inner(*args, **kwargs)
9 def inner(*args, **kwargs) -> Any:
10 bar.set_postfix_str(message)
---> 11 ret = fn(*args, **kwargs)
12 bar.update()
13 return ret
~/SageMaker/.envs/mykernel/lib/python3.9/site-packages/pandas_profiling/model/correlations.py in calculate_correlation(config, df, correlation_name, summary)
105 correlation = None
106 try:
--> 107 correlation = correlation_measures[correlation_name].compute(
108 config, df, summary
109 )
~/SageMaker/.envs/mykernel/lib/python3.9/site-packages/multimethod/__init__.py in __call__(self, *args, **kwargs)
313 func = self[tuple(func(arg) for func, arg in zip(self.type_checkers, args))]
314 try:
--> 315 return func(*args, **kwargs)
316 except TypeError as ex:
317 raise DispatchError(f"Function {func.__code__}") from ex
~/SageMaker/.envs/mykernel/lib/python3.9/site-packages/pandas_profiling/model/pandas/correlations_pandas.py in pandas_phik_compute(config, df, summary)
152 from phik import phik_matrix
153
--> 154 correlation = phik_matrix(df[selected_cols], interval_cols=list(intcols))
155
156 return correlation
~/SageMaker/.envs/mykernel/lib/python3.9/site-packages/phik/phik.py in phik_matrix(df, interval_cols, bins, quantile, noise_correction, dropna, drop_underflow, drop_overflow, verbose, njobs)
254 verbose=verbose,
255 )
--> 256 return phik_from_rebinned_df(
257 data_binned,
258 noise_correction,
~/SageMaker/.envs/mykernel/lib/python3.9/site-packages/phik/phik.py in phik_from_rebinned_df(data_binned, noise_correction, dropna, drop_underflow, drop_overflow, njobs)
164 ]
165 else:
--> 166 phik_list = Parallel(n_jobs=njobs)(
167 delayed(_calc_phik)(co, data_binned[list(co)], noise_correction)
168 for co in itertools.combinations_with_replacement(
~/SageMaker/.envs/mykernel/lib/python3.9/site-packages/joblib/parallel.py in __call__(self, iterable)
1096
1097 with self._backend.retrieval_context():
-> 1098 self.retrieve()
1099 # Make sure that we get a last message telling us we are done
1100 elapsed_time = time.time() - self._start_time
~/SageMaker/.envs/mykernel/lib/python3.9/site-packages/joblib/parallel.py in retrieve(self)
973 try:
974 if getattr(self._backend, 'supports_timeout', False):
--> 975 self._output.extend(job.get(timeout=self.timeout))
976 else:
977 self._output.extend(job.get())
~/SageMaker/.envs/mykernel/lib/python3.9/site-packages/joblib/_parallel_backends.py in wrap_future_result(future, timeout)
565 AsyncResults.get from multiprocessing."""
566 try:
--> 567 return future.result(timeout=timeout)
568 except CfTimeoutError as e:
569 raise TimeoutError from e
~/SageMaker/.envs/mykernel/lib/python3.9/concurrent/futures/_base.py in result(self, timeout)
436 raise CancelledError()
437 elif self._state == FINISHED:
--> 438 return self.__get_result()
439
440 self._condition.wait(timeout)
~/SageMaker/.envs/mykernel/lib/python3.9/concurrent/futures/_base.py in __get_result(self)
388 if self._exception:
389 try:
--> 390 raise self._exception
391 finally:
392 # Break a reference cycle with the exception in self._exception
PicklingError: Could not pickle the task to send it to the workers.
```
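The decisive line is near the bottom of the traceback: `OSError: [Errno 28] No space left on device`, raised while joblib memory-maps data to the temp folder during the phi_k correlation — which suggests a full disk/temp volume on the SageMaker instance rather than an AWS incompatibility. Possible mitigations (hedged — exact config keys depend on the pandas-profiling version) are disabling the expensive correlations, e.g. `ProfileReport(df, correlations={"phi_k": {"calculate": False}})`, or pointing joblib at a larger volume via the `JOBLIB_TEMP_FOLDER` environment variable. A stdlib-only preflight check (illustrative sketch; the 2 GiB threshold is an arbitrary assumption):

```python
import os
import shutil
import tempfile

def preflight_tmp_space(min_free_gb: float = 2.0) -> float:
    """Return free space (GiB) in the directory joblib spills memmaps into,
    honoring JOBLIB_TEMP_FOLDER when set, and warn below a threshold."""
    tmp = os.environ.get("JOBLIB_TEMP_FOLDER", tempfile.gettempdir())
    free_gb = shutil.disk_usage(tmp).free / 1024**3
    if free_gb < min_free_gb:
        print(f"Warning: only {free_gb:.1f} GiB free in {tmp}; "
              "large profiling runs may fail with [Errno 28].")
    return free_gb

free = preflight_tmp_space()
print(f"{free:.1f} GiB free")
```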
I'm on the latest version of pandas-profiling (just installed it today). | open | 2022-12-05T18:01:37Z | 2022-12-20T12:12:24Z | https://github.com/ydataai/ydata-profiling/issues/1197 | [
"question/discussion ❓",
"information requested ❔"
] | JohnTravolski | 3 |
developmentseed/lonboard | jupyter | 482 | Chunking issues with Arrow input. | We have some chunking issues with Arrow input because the input _can already have_ chunking structure, and thus `to_batches()` won't work.
## Steps to reproduce the bug
Run the [overture notebook example](https://github.com/developmentseed/lonboard/blob/main/examples/overture-maps.ipynb)
without
```py
table = table.combine_chunks()
```
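A pure-Python illustration of the underlying issue (hypothetical — no pyarrow involved, and the chunk lengths and values below are made up): per-chunk rendering pairs the table's batches with auxiliary arrays chunk by chunk, so auxiliary data must be rechunked to the table's chunk lengths. `combine_chunks()` is just the degenerate single-chunk case of this.

```python
def rechunk(values, chunk_lengths):
    """Split a flat sequence into chunks matching `chunk_lengths`,
    the way auxiliary columns must match the table's chunk layout."""
    assert sum(chunk_lengths) == len(values), "total length mismatch"
    chunks, start = [], 0
    for n in chunk_lengths:
        chunks.append(values[start:start + n])
        start += n
    return chunks

table_chunk_lengths = [3, 2]      # e.g. the table's record batches
heights = [10, 11, 12, 13, 14]    # flat per-feature array

aligned = rechunk(heights, table_chunk_lengths)
print(aligned)  # -> [[10, 11, 12], [13, 14]]
```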
The table itself and the numpy `heights` and `color` arrays will have different chunking structure and the rendering will fail with a deck.gl-layers assertion failure. | closed | 2024-04-23T13:46:02Z | 2024-09-24T19:44:25Z | https://github.com/developmentseed/lonboard/issues/482 | [
"bug"
] | kylebarron | 1 |
amidaware/tacticalrmm | django | 1,188 | Windows Update Rework: Todo list consolidating tickets | - Have a TacticalRMM Global Option like: [] Have Tactical RMM manage all Windows Update functions
- Block specific patches agent/site/client
- Agent/Site/Client Button: "Approve patches based on policy now"
- Agent/Site/Client Button: "Approve and install Now"
- Run Updates on offline agents if missed
- Have Update install Window (time window, with working hours like windows has)
- different times for installation and reboot
- different schedules for different patch levels. eg Critical: Daily Other: monthly
- only reboot when accepted by user
- postpone updates for x days so it can be tested first (you can do this by just setting later than patch Tuesday date/time)
- Summary screen on patch status for machines
- Manual Mass approve Updates (add automation policy selection to Bulk Patch Management dialog)
- Add more time options to scheduled patching e.g. "first, second, Last <weekday> of Month
- Run script before or after patching
- Use new scheduling system from tasks for patching
- Include Feature and Driver Updates in Windows Updating
- add an ability to schedule patch installation/approval based on severity
- Have patch policy apply immediate upon coming online if agent was offline at scheduled time
- If user is logged in and active during windows update installation, popup notification to reboot in x mins/hrs and force reboot after that time (like windows)
- Add more items in TRMM's debug system for troubleshooting patching steps
- Allow enabling/disabling maintenance mode as part of patching
- Attempt WoL before running updates
- Allow enabling "Optional quality updates" to show up in the list of updates.
- Add a filter and option to automatically ignore all patches with "Preview" in the descriptions #1835
request from jd on discord:
```
I think there's a GitHub issue open to add a scheduling option to the Bulk Patch Management (and other bulk actions). Would also be really useful if there was a checkbox for whether you want to trigger a reboot (as opposed to following the TRMM patch policy).
I think it would be helpful to have something similar for the Install Patches button - maybe a pop asking if you want to reboot.
For now, I think we can work around with scheduled script-reboots under Automation Manager (as opposed to having the box checked).
However, for the purposes of patching zero days, and for making it just that much easier on admins, I think it would be nice if we could use the Install Patches button (and also the Bulk Install Patches) without having to worry about triggering an accidental reboot, and still have auto reboot ticked in the TRMM Patch Policy.
```
Replication of all the features in PSWindowsUpdate will probably make everyone happy: <https://adamtheautomator.com/pswindowsupdate/>
This is what TRMM uses https://learn.microsoft.com/en-us/windows/win32/api/wuapi/
For additional requests, please specify which part of the API you're talking about.
"enhancement"
] | silversword411 | 10 |
shibing624/text2vec | nlp | 3 | Collecting text2vec | Collecting text2vec
ERROR: Could not find a version that satisfies the requirement text2vec (from versions: none)
ERROR: No matching distribution found for text2vec | closed | 2019-12-04T13:49:24Z | 2019-12-08T05:42:36Z | https://github.com/shibing624/text2vec/issues/3 | [
"bug"
] | huaji1992 | 1 |
django-cms/django-cms | django | 7,146 | [DOCS] 3.9.0 docs do not explain how to install the tutorial site |
The "Templates and Placeholders" section refers to the tutorial site. Is this forked from github, may be?
## Steps to reproduce
## Expected behaviour
## Actual behaviour
## Screenshots
## Additional information (CMS/Python/Django versions)
## Do you want to help fix this issue?
* [ ] Yes, I want to help fix this issue and I will join #workgroup-pr-review on [Slack](https://www.django-cms.org/slack) to confirm with the community that a PR is welcome.
* [ ] No, I only want to report the issue.
| closed | 2021-10-28T10:16:00Z | 2021-11-04T18:26:44Z | https://github.com/django-cms/django-cms/issues/7146 | [
"component: documentation"
] | sureshvv | 4 |
ets-labs/python-dependency-injector | asyncio | 257 | Fix TravisCI warnings | TravisCI has a couple of warnings:
<img width="1352" alt="Screenshot 2020-06-24 at 21 29 01" src="https://user-images.githubusercontent.com/1742049/85643359-ddf31600-b661-11ea-8a8a-93f8ba5bb87a.png">
Need to fix that. | closed | 2020-06-25T01:30:50Z | 2020-06-30T21:32:45Z | https://github.com/ets-labs/python-dependency-injector/issues/257 | [
"enhancement"
] | rmk135 | 0 |
Asabeneh/30-Days-Of-Python | matplotlib | 307 | No code of conduct and contributing files in the root repo | Both the `code_of_conduct.md and contributing.md file are a most of a project.
The help contributors how the owner/org. want commits to be done and rules to be followed when wanting a pull request.
I can work on them, if assigned to me. | open | 2022-10-01T23:51:48Z | 2022-10-02T12:05:35Z | https://github.com/Asabeneh/30-Days-Of-Python/issues/307 | [] | chemben17 | 1 |
Evil0ctal/Douyin_TikTok_Download_API | api | 323 | [BUG] Could not get douyin data | ***Platform where the error occurred?***
Douyin
***The endpoint where the error occurred?***
API
```
正在解析**douyin**视频链接...
该链接为原始链接,无需转换,原始链接为: https://www.douyin.com/video/6914948781100338440
获取到的**douyin**视频ID是6914948781100338440
正在请求抖音视频API: https://www.douyin.com/aweme/v1/web/aweme/detail/?device_platform=webapp&aid=6383&channel=channel_pc_web&aweme_id=6914948781100338440&pc_client_type=1&version_code=190500&version_name=19.5.0&cookie_enabled=true&screen_width=1344&screen_height=756&browser_language=zh-CN&browser_platform=Win32&browser_name=Firefox&browser_version=118.0&browser_online=true&engine_name=Gecko&engine_version=109.0&os_name=Windows&os_version=10&cpu_core_num=16&device_memory=&platform=PC&webid=7284189800734082615&msToken=B1N9FM825TkvFbayDsDvZxM8r5suLrsfQbC93TciS0O9Iii8iJpAPd__FM2rpLUJi5xtMencSXLeNn8xmOS9q7bP0CUsrt9oVTL08YXLPRzZm0dHKLc9PGRlyEk=&X-Bogus=DFSzswSL/rtANnEftqyisU9WcBnL
正在请求抖音视频API: https://www.douyin.com/aweme/v1/web/aweme/detail/?device_platform=webapp&aid=6383&channel=channel_pc_web&aweme_id=6914948781100338440&pc_client_type=1&version_code=190500&version_name=19.5.0&cookie_enabled=true&screen_width=1344&screen_height=756&browser_language=zh-CN&browser_platform=Win32&browser_name=Firefox&browser_version=118.0&browser_online=true&engine_name=Gecko&engine_version=109.0&os_name=Windows&os_version=10&cpu_core_num=16&device_memory=&platform=PC&webid=7284189800734082615&msToken=B1N9FM825TkvFbayDsDvZxM8r5suLrsfQbC93TciS0O9Iii8iJpAPd__FM2rpLUJi5xtMencSXLeNn8xmOS9q7bP0CUsrt9oVTL08YXLPRzZm0dHKLc9PGRlyEk=&X-Bogus=DFSzswSL/rtANnEftqyiMU9WcBnS
正在请求抖音视频API: https://www.douyin.com/aweme/v1/web/aweme/detail/?device_platform=webapp&aid=6383&channel=channel_pc_web&aweme_id=6914948781100338440&pc_client_type=1&version_code=190500&version_name=19.5.0&cookie_enabled=true&screen_width=1344&screen_height=756&browser_language=zh-CN&browser_platform=Win32&browser_name=Firefox&browser_version=118.0&browser_online=true&engine_name=Gecko&engine_version=109.0&os_name=Windows&os_version=10&cpu_core_num=16&device_memory=&platform=PC&webid=7284189800734082615&msToken=B1N9FM825TkvFbayDsDvZxM8r5suLrsfQbC93TciS0O9Iii8iJpAPd__FM2rpLUJi5xtMencSXLeNn8xmOS9q7bP0CUsrt9oVTL08YXLPRzZm0dHKLc9PGRlyEk=&X-Bogus=DFSzswSL/rtANnEftqyi0z9WcBnH
正在请求抖音视频API: https://www.douyin.com/aweme/v1/web/aweme/detail/?device_platform=webapp&aid=6383&channel=channel_pc_web&aweme_id=6914948781100338440&pc_client_type=1&version_code=190500&version_name=19.5.0&cookie_enabled=true&screen_width=1344&screen_height=756&browser_language=zh-CN&browser_platform=Win32&browser_name=Firefox&browser_version=118.0&browser_online=true&engine_name=Gecko&engine_version=109.0&os_name=Windows&os_version=10&cpu_core_num=16&device_memory=&platform=PC&webid=7284189800734082615&msToken=B1N9FM825TkvFbayDsDvZxM8r5suLrsfQbC93TciS0O9Iii8iJpAPd__FM2rpLUJi5xtMencSXLeNn8xmOS9q7bP0CUsrt9oVTL08YXLPRzZm0dHKLc9PGRlyEk=&X-Bogus=DFSzswSL/rtANnEftqyipt9WcBn/
exception calling callback for <Future at 0x7f8e406fe450 state=finished raised RetryError>
Traceback (most recent call last):
File "/www/wwwroot/Douyin_TikTok_Download_API/scraper.py", line 393, in get_douyin_video_data
response = await response.json()
^^^^^^^^^^^^^^^^^^^^^
File "/www/wwwroot/Douyin_TikTok_Download_API/.venv/lib/python3.11/site-packages/aiohttp/client_reqrep.py", line 1104, in json
raise ContentTypeError(
aiohttp.client_exceptions.ContentTypeError: 0, message='Attempt to decode JSON with unexpected mimetype: text/plain; charset=utf-8', url=URL('https://www.douyin.com/aweme/v1/web/aweme/detail/?device_platform=webapp&aid=6383&channel=channel_pc_web&aweme_id=6914948781100338440&pc_client_type=1&version_code=190500&version_name=19.5.0&cookie_enabled=true&screen_width=1344&screen_height=756&browser_language=zh-CN&browser_platform=Win32&browser_name=Firefox&browser_version=118.0&browser_online=true&engine_name=Gecko&engine_version=109.0&os_name=Windows&os_version=10&cpu_core_num=16&device_memory=&platform=PC&webid=7284189800734082615&msToken=B1N9FM825TkvFbayDsDvZxM8r5suLrsfQbC93TciS0O9Iii8iJpAPd__FM2rpLUJi5xtMencSXLeNn8xmOS9q7bP0CUsrt9oVTL08YXLPRzZm0dHKLc9PGRlyEk=&X-Bogus=DFSzswSL/rtANnEftqyipt9WcBn/')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/www/wwwroot/Douyin_TikTok_Download_API/.venv/lib/python3.11/site-packages/tenacity/_asyncio.py", line 50, in __call__
result = await fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/www/wwwroot/Douyin_TikTok_Download_API/scraper.py", line 400, in get_douyin_video_data
raise ValueError(f"获取抖音视频数据出错了: {e}")
ValueError: 获取抖音视频数据出错了: 0, message='Attempt to decode JSON with unexpected mimetype: text/plain; charset=utf-8', url=URL('https://www.douyin.com/aweme/v1/web/aweme/detail/?device_platform=webapp&aid=6383&channel=channel_pc_web&aweme_id=6914948781100338440&pc_client_type=1&version_code=190500&version_name=19.5.0&cookie_enabled=true&screen_width=1344&screen_height=756&browser_language=zh-CN&browser_platform=Win32&browser_name=Firefox&browser_version=118.0&browser_online=true&engine_name=Gecko&engine_version=109.0&os_name=Windows&os_version=10&cpu_core_num=16&device_memory=&platform=PC&webid=7284189800734082615&msToken=B1N9FM825TkvFbayDsDvZxM8r5suLrsfQbC93TciS0O9Iii8iJpAPd__FM2rpLUJi5xtMencSXLeNn8xmOS9q7bP0CUsrt9oVTL08YXLPRzZm0dHKLc9PGRlyEk=&X-Bogus=DFSzswSL/rtANnEftqyipt9WcBn/')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/lib/python3.11/concurrent/futures/_base.py", line 340, in _invoke_callbacks
callback(self)
File "/www/wwwroot/Douyin_TikTok_Download_API/.venv/lib/python3.11/site-packages/pywebio/session/coroutinebased.py", line 347, in _wakeup
self.step(future.result())
^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/www/wwwroot/Douyin_TikTok_Download_API/scraper.py", line 705, in hybrid_parsing
data = await self.get_douyin_video_data(video_id) if url_platform == 'douyin' \
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/www/wwwroot/Douyin_TikTok_Download_API/.venv/lib/python3.11/site-packages/tenacity/_asyncio.py", line 88, in async_wrapped
return await fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/www/wwwroot/Douyin_TikTok_Download_API/.venv/lib/python3.11/site-packages/tenacity/_asyncio.py", line 47, in __call__
do = self.iter(retry_state=retry_state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/www/wwwroot/Douyin_TikTok_Download_API/.venv/lib/python3.11/site-packages/tenacity/__init__.py", line 326, in iter
raise retry_exc from fut.exception()
tenacity.RetryError: RetryError[<Future at 0x7f8e40595f90 state=finished raised ValueError>]
```
***Have you tried again?***
Yes, the error still exists after retrying 3 times.
***Have you checked the readme or interface documentation for this project?***
Yes, and I am fairly sure the problem is caused by the program.
| closed | 2024-02-12T09:40:03Z | 2024-03-26T09:10:26Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/323 | [
"BUG",
"enhancement"
] | sondh0127 | 3 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 447 | Pytorch synthesizer | Splitting this off from #370, which will remain for tensorflow2 conversion. I would prefer this route if we can get it to work. Asking for help from the community on this one.
One example of a pytorch-based tacotron is: https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/SpeechSynthesis/Tacotron2
Another option is to manually convert the code and pretrained models which would be extremely time-consuming, but also an awesome learning experience. | closed | 2020-07-24T06:40:58Z | 2021-12-01T09:31:35Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/447 | [
"dependencies"
] | ghost | 74 |
tatsu-lab/stanford_alpaca | deep-learning | 262 | Question on License : commercial use permitted, but at the same time, research-purpose only ? | The [LICENSE](https://github.com/tatsu-lab/stanford_alpaca/blob/main/LICENSE) of this repo is Apache 2.0 and it is written that commercial use is permitted.
On the other hand, the README.md says that "Alpaca is intended and licensed for research use only."

These two statements seem contradictory. Could anyone tell me what I am misunderstanding? Thank you.
mage-ai/mage-ai | data-science | 4,971 | Multi-value filters for Status value on pipeline run pages | **Is your feature request related to a problem? Please describe.**
This is a UI/UX improvement for managing pipeline runs. Flipping between single statuses is tedious and it would be a great improvement to be able to select multiple values. I would love to be able to select combinations of values, such as {"Ready", "Running"} or {"Cancelled", "Failed"} to be able to monitor alive or dead runs.
**Describe the solution you'd like**
Provide the ability to select multiple `run status` values from the dropdown on the pipeline runs page. The list of pipeline runs would then be filtered to the selected status values.
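The requested behavior amounts to a set-membership filter over run records. A minimal sketch of the semantics (a hypothetical helper — Mage's actual run model and API will differ):

```python
# Hypothetical multi-status filter -- Mage's actual data model/API may differ.
def filter_runs(runs, statuses):
    wanted = {s.lower() for s in statuses}
    return [run for run in runs if run["status"].lower() in wanted]

runs = [
    {"id": 1, "status": "running"},
    {"id": 2, "status": "failed"},
    {"id": 3, "status": "ready"},
    {"id": 4, "status": "cancelled"},
]

# "Alive" runs: Ready or Running
print([r["id"] for r in filter_runs(runs, {"Ready", "Running"})])     # [1, 3]
# "Dead" runs: Cancelled or Failed
print([r["id"] for r in filter_runs(runs, {"Cancelled", "Failed"})])  # [2, 4]
```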
**Describe alternatives you've considered**
Not sure if there are any alternatives here.
**Additional context**
<img width="653" alt="Screenshot 2024-04-22 at 2 31 20 PM" src="https://github.com/mage-ai/mage-ai/assets/1462478/8c17d0c4-f040-47ab-9b55-65049d2c7024">
| open | 2024-04-22T18:54:09Z | 2024-04-22T22:16:45Z | https://github.com/mage-ai/mage-ai/issues/4971 | [
"enhancement"
] | jdvermeire | 1 |
piskvorky/gensim | machine-learning | 3,166 | LSI gets stuck and connection to Jupyter is lost | <!--
**IMPORTANT**:
- Use the [Gensim mailing list](https://groups.google.com/forum/#!forum/gensim) to ask general or usage questions. Github issues are only for bug reports.
- Check [Recipes&FAQ](https://github.com/RaRe-Technologies/gensim/wiki/Recipes-&-FAQ) first for common answers.
Github bug reports that do not include relevant information and context will be closed without an answer. Thanks!
-->
#### Problem description
I want to train an LSI model, but it gets stuck partway through a chunk, no matter how small the chunk size is.
#### Steps/code/corpus to reproduce
```python
lsi_model = models.LsiModel(corpus_tfidf, id2word=dictionary, num_topics=300, chunksize=500)
```
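For scale: the log below shows LSI orthonormalizing a `(1542840, 400)` dense action matrix, and the power iterations keep several such matrices alive at once. At float64 a single one is already about 4.6 GiB, so a likely explanation for the lost Jupyter connection is the kernel being killed for running out of memory (this is an assumption based on the matrix size in the log, not something the log states directly). A quick back-of-the-envelope check:

```python
# Term count taken from the DEBUG log line:
#   "orthonormalizing (1542840, 400) action matrix"
n_terms, n_factors = 1_542_840, 400
bytes_per_value = 8  # numpy float64

gib = n_terms * n_factors * bytes_per_value / 1024**3
print(f"one dense action matrix: {gib:.1f} GiB")  # ~4.6 GiB
```

If memory is indeed the culprit, a common mitigation is to shrink the vocabulary before building the corpus, e.g. `dictionary.filter_extremes(no_below=5, no_above=0.5)`, which typically cuts a 1.5M-term dictionary down dramatically.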
Here is the log from loglevel=DEBUG:
```
2021-06-06 19:35:18,168 : INFO : using serial LSI version on this node
2021-06-06 19:35:18,169 : INFO : updating model with new documents
Processing 299/3332249 (0.01%)
2021-06-06 19:35:18,522 : INFO : preparing a new chunk of documents
2021-06-06 19:35:18,523 : DEBUG : converting corpus to csc format
2021-06-06 19:35:18,531 : INFO : using 100 extra samples and 2 power iterations
2021-06-06 19:35:18,532 : INFO : 1st phase: constructing (1542840, 400) action matrix
Processing 500/3332249 (0.02%)
2021-06-06 19:35:18,563 : INFO : orthonormalizing (1542840, 400) action matrix
2021-06-06 19:35:21,646 : DEBUG : computing QR of (1542840, 400) dense matrix
2021-06-06 19:35:44,717 : DEBUG : running 2 power iterations
2021-06-06 19:35:52,571 : DEBUG : computing QR of (1542840, 400) dense matrix
2021-06-06 19:36:21,354 : DEBUG : computing QR of (1542840, 400) dense matrix
2021-06-06 19:36:48,119 : INFO : 2nd phase: running dense svd on (400, 500) matrix
2021-06-06 19:36:49,419 : INFO : computing the final decomposition
2021-06-06 19:36:49,420 : INFO : keeping 300 factors (discarding 9.516% of energy spectrum)
```
Then it gets stuck and after some time Jupyter shows me the following error message:
```
Server Connection Error
A connection to the Jupyter server could not be established. JupyterLab will continue trying to reconnect. Check your network connection or Jupyter server configuration.
```
#### Versions
```
Python 3.9.5 (default, May 14 2021, 00:00:00)
[GCC 11.1.1 20210428 (Red Hat 11.1.1-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import platform; print(platform.platform())
Linux-5.12.8-300.fc34.x86_64-x86_64-with-glibc2.33
>>> import sys; print("Python", sys.version)
Python 3.9.5 (default, May 14 2021, 00:00:00)
[GCC 11.1.1 20210428 (Red Hat 11.1.1-1)]
>>> import struct; print("Bits", 8 * struct.calcsize("P"))
Bits 64
>>> import numpy; print("NumPy", numpy.__version__)
NumPy 1.19.5
>>> import scipy; print("SciPy", scipy.__version__)
SciPy 1.6.3
>>> import gensim; print("gensim", gensim.__version__)
gensim 4.0.1
>>> from gensim.models import word2vec;print("FAST_VERSION", word2vec.FAST_VERSION)
FAST_VERSION 1
```
### Additional info
```
>>> print(numpy.show_config())
blas_mkl_info:
NOT AVAILABLE
blis_info:
NOT AVAILABLE
openblas_info:
libraries = ['openblas', 'openblas']
library_dirs = ['/usr/local/lib']
language = c
define_macros = [('HAVE_CBLAS', None)]
blas_opt_info:
libraries = ['openblas', 'openblas']
library_dirs = ['/usr/local/lib']
language = c
define_macros = [('HAVE_CBLAS', None)]
lapack_mkl_info:
NOT AVAILABLE
openblas_lapack_info:
libraries = ['openblas', 'openblas']
library_dirs = ['/usr/local/lib']
language = c
define_macros = [('HAVE_CBLAS', None)]
lapack_opt_info:
libraries = ['openblas', 'openblas']
library_dirs = ['/usr/local/lib']
language = c
define_macros = [('HAVE_CBLAS', None)]
None
>>> print(scipy.show_config())
lapack_mkl_info:
NOT AVAILABLE
openblas_lapack_info:
libraries = ['openblas', 'openblas']
library_dirs = ['/usr/local/lib']
language = c
define_macros = [('HAVE_CBLAS', None)]
lapack_opt_info:
libraries = ['openblas', 'openblas']
library_dirs = ['/usr/local/lib']
language = c
define_macros = [('HAVE_CBLAS', None)]
blas_mkl_info:
NOT AVAILABLE
blis_info:
NOT AVAILABLE
openblas_info:
libraries = ['openblas', 'openblas']
library_dirs = ['/usr/local/lib']
language = c
define_macros = [('HAVE_CBLAS', None)]
blas_opt_info:
libraries = ['openblas', 'openblas']
library_dirs = ['/usr/local/lib']
language = c
define_macros = [('HAVE_CBLAS', None)]
None
``` | closed | 2021-06-06T17:42:02Z | 2021-06-06T20:20:59Z | https://github.com/piskvorky/gensim/issues/3166 | [] | raffaem | 3 |
custom-components/pyscript | jupyter | 635 | Repairs showing in homeassistant for each pyscript service | I declare many pyscript scripts as services, and they all work great, however each time HA boots it reports them as an issue that has to be repaired. When mentioning this in the HA discord, the suggestion was to raise a ticket here to see if it can be resolved. See a screenshot of what I see each time my HA boots.
https://ibb.co/kGbGvYP | open | 2024-09-14T00:27:44Z | 2024-12-01T00:32:21Z | https://github.com/custom-components/pyscript/issues/635 | [] | mark007 | 2 |
apache/airflow | python | 47,654 | DAG Versioning || versions not getting generated | ### Apache Airflow version
3.0.0
### If "Other Airflow 2 version" selected, which one?
_No response_
### What happened?
I noticed that when I change DAG parameters — for example the `tags` — new versions are not getting logged in the DAG version table.
### What you think should happen instead?
Versions should get created. I tried this in alpha4 and it is working fine.
### How to reproduce
1. Update a tag in an existing DAG
2. Check the DAG version table for a new version entry
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else?
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| closed | 2025-03-12T05:34:25Z | 2025-03-12T13:41:31Z | https://github.com/apache/airflow/issues/47654 | [
"kind:bug",
"priority:critical",
"area:core",
"affected_version:3.0.0beta"
] | vatsrahul1001 | 6 |
pallets-eco/flask-sqlalchemy | flask | 1,066 | `_sa_skip_events` error when invoking statement and calling `SignallingSession` | Hello, I've got instructed to post here from the very same issue I've created on this place: https://github.com/sqlalchemy/sqlalchemy/issues/8260
Here's the example code from here which I am using: https://github.com/sqlalchemy/sqlalchemy/blob/rel_1_4_39/test/ext/test_baked.py#L1023
When I try to execute a query with this event interceptor being in effect, I get the error about unexpected keyword argument `_sa_skip_events`, which I see is being appended to `_bind_arguments` in `invoke_statement` method, before calling `execute`: https://github.com/sqlalchemy/sqlalchemy/blob/rel_1_4_39/lib/sqlalchemy/orm/session.py#L221
Example code to reproduce:
```python
from sqlalchemy import event
from sqlalchemy.testing.fixtures import fixture_session
from sqlalchemy.orm.query import Query


class CachingQuery(Query):
    cache = {}

    def set_cache_key(self, key):
        return self.execution_options(_cache_key=key)

    def set_cache_key_for_path(self, path, key):
        return self.execution_options(**{"_cache_key_%s" % path: key})


def get_value(cache_key, cache, createfunc):
    if cache_key in cache:
        return cache[cache_key]()
    else:
        cache[cache_key] = retval = createfunc().freeze()
        return retval()


s1 = fixture_session(query_cls=CachingQuery)


@event.listens_for(s1, "do_orm_execute", retval=True)
def do_orm_execute(orm_context):
    ckey = None
    for opt in orm_context.user_defined_options:
        ckey = opt.get_cache_key(orm_context)
        if ckey:
            break
    else:
        if "_cache_key" in orm_context.execution_options:
            ckey = orm_context.execution_options["_cache_key"]
    if ckey is not None:
        return get_value(
            ckey,
            CachingQuery.cache,
            orm_context.invoke_statement,
        )


# `User` is a mapped class from the test fixtures
s1.query(User).filter(User.id == 7).set_cache_key("user7")
```
The error with the stack trace I'm getting:
```
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/query.py", line 2896, in __iter__
return self._iter().__iter__()
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/query.py", line 2903, in _iter
result = self.session.execute(
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 1693, in execute
result = fn(orm_exec_state)
File "/opt/project/app/project/__init__.py", line 63, in _do_orm_execute
retval = orm_context.invoke_statement().freeze()
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 233, in invoke_statement
return self.session.execute(
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 1700, in execute
bind = self.get_bind(**bind_arguments)
TypeError: SignallingSession.get_bind() got an unexpected keyword argument '_sa_skip_events'
```
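Until Flask-SQLAlchemy's `SignallingSession.get_bind()` accepts the extra keyword arguments that SQLAlchemy 1.4 now passes along (such as `_sa_skip_events`), one possible workaround is to override it so unknown keyword arguments are forwarded to the parent. The sketch below demonstrates the pattern on plain stand-in classes; in a real application the override would subclass `flask_sqlalchemy.SignallingSession` and be wired in via session options — treat those wiring details as assumptions about your setup:

```python
# Minimal stand-ins to demonstrate the **kwargs-forwarding fix; real code
# would subclass flask_sqlalchemy.SignallingSession instead of Base.
class Base:
    def get_bind(self, mapper=None, clause=None, **kwargs):
        return "engine"

class PatchedSignallingSession(Base):
    # The broken signature was: def get_bind(self, mapper=None, clause=None)
    # Accepting **kwargs lets SQLAlchemy 1.4 pass _sa_skip_events through.
    def get_bind(self, mapper=None, clause=None, **kwargs):
        return super().get_bind(mapper=mapper, clause=clause, **kwargs)

session = PatchedSignallingSession()
print(session.get_bind(_sa_skip_events=True))  # "engine" -- no TypeError
```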
Environment:
- Python version: 3.9
- Flask-SQLAlchemy version: 2.5.1
- SQLAlchemy version: 1.4.39
| closed | 2022-07-16T13:28:11Z | 2022-10-03T00:21:45Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/1066 | [] | softzer0 | 2 |
modelscope/modelscope | nlp | 483 | offline deployment of a container failed(利用容器离线部署失败) | Thanks for your error report and we appreciate it a lot.
**Checklist**
* I have searched the tutorial on modelscope [doc-site](https://modelscope.cn/docs)
* I have searched related issues but cannot get the expected help.
* The bug has not been fixed in the latest version.
**Describe the bug**
I downloaded the model "speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch" inside a container on one server, placed it under a local path, and committed the container as an image. After loading that image on an offline server, I start a container and mount the model at the correct location. The code explicitly references the local model path, but the model cannot be loaded: the program keeps trying to reach "www.modelscope.cn", and since the server is offline it fails after the maximum number of retries. I also tried downloading the model with the officially documented git lfs method, and it still cannot be loaded.

The commands involved:

```python
model = "models/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch"
decoder = pipeline("auto-speech-recognition", model)
```

The base image is the official one: registry.cn-hangzhou.aliyuncs.com/modelscope-repo/modelscope:ubuntu20.04-py38-torch2.0.1-tf1.15.5-1.8.0
modelscope version: 1.8.0 (1.8.3 was also tried; it did not help).
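Based on the traceback further down, it is not the main model that triggers the network access — `AutomaticSpeechRecognitionPipeline` also resolves its auxiliary VAD and punctuation models (the failing call is `snapshot_download` for `speech_fsmn_vad_zh-cn-16k-common-pytorch`). Since the constructor signature in the traceback exposes `vad_model` and `punc_model` parameters, one hedged workaround is to pre-download those models too and point the pipeline at local directories. The exact paths and the kwargs-forwarding behavior of `pipeline()` are assumptions:

```python
# Local directories the auxiliary models were copied to -- adjust to your mounts.
# The vad_model/punc_model paths below are assumptions for illustration.
local_models = {
    "model": "models/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch",
    "vad_model": "models/speech_fsmn_vad_zh-cn-16k-common-pytorch",
    "punc_model": "models/punc_ct-transformer_zh-cn-common-vocab272727-pytorch",
}

# Requires modelscope; shown commented out so the sketch stays self-contained:
# from modelscope.pipelines import pipeline
# decoder = pipeline("auto-speech-recognition", **local_models)
print(sorted(local_models))  # ['model', 'punc_model', 'vad_model']
```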
**To Reproduce**
* What command or script did you run?
* Did you make any modifications on the code or config? Did you understand what you have modified?
* What dataset did you use?
**Your Environments (__required__)**
* OS: `uname -a`
* CPU: `lscpu`
* Commit id (e.g. `a3ffc7d8`)
* You may add addition that may be helpful for locating the problem, such as
* How you installed PyTorch [e.g., pip, conda, source]
* Other environment variables that may be related (such as $PATH, $LD_LIBRARY_PATH, $PYTHONPATH, etc.)
> 2023-08-18 19:21:32,454 - modelscope - INFO - initiate model from models/shit
2023-08-18 19:21:32,455 - modelscope - INFO - initiate model from location models/shit.
2023-08-18 19:21:32,457 - modelscope - INFO - initialize model from models/shit
2023-08-18 19:21:32,460 - modelscope - WARNING - No preprocessor field found in cfg.
2023-08-18 19:21:32,461 - modelscope - WARNING - No val key and type key found in preprocessor domain of configuration.json file.
2023-08-18 19:21:32,462 - modelscope - WARNING - Cannot find available config to build preprocessor at mode inference, current config: {'model_dir': 'models/shit'}. trying to build by task and model information.
2023-08-18 19:21:32,463 - modelscope - WARNING - No preprocessor key ('generic-asr', 'auto-speech-recognition') found in PREPROCESSOR_MAP, skip building preprocessor.
2023-08-18 19:21:32,465 - modelscope - INFO - cuda is not available, using cpu instead.
2023-08-18 19:21:33,186 - modelscope - WARNING - [Errno 17] File exists: '/work/models/shit' -> '/mnt/workspace/.cache/modelscope/.cache/damo/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch'
---------------------------------------------------------------------------
gaierror Traceback (most recent call last)
File /opt/conda/lib/python3.8/site-packages/urllib3/connection.py:174, in HTTPConnection._new_conn(self)
173 try:
--> 174 conn = connection.create_connection(
175 (self._dns_host, self.port), self.timeout, **extra_kw
176 )
178 except SocketTimeout:
File /opt/conda/lib/python3.8/site-packages/urllib3/util/connection.py:72, in create_connection(address, timeout, source_address, socket_options)
68 return six.raise_from(
69 LocationParseError(u"'%s', label empty or too long" % host), None
70 )
---> 72 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
73 af, socktype, proto, canonname, sa = res
File /opt/conda/lib/python3.8/socket.py:918, in getaddrinfo(host, port, family, type, proto, flags)
917 addrlist = []
--> 918 for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
919 af, socktype, proto, canonname, sa = res
gaierror: [Errno -3] Temporary failure in name resolution
During handling of the above exception, another exception occurred:
NewConnectionError Traceback (most recent call last)
File /opt/conda/lib/python3.8/site-packages/urllib3/connectionpool.py:714, in HTTPConnectionPool.urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
713 # Make the request on the httplib connection object.
--> 714 httplib_response = self._make_request(
715 conn,
716 method,
717 url,
718 timeout=timeout_obj,
719 body=body,
720 headers=headers,
721 chunked=chunked,
722 )
724 # If we're going to release the connection in ``finally:``, then
725 # the response doesn't need to know about the connection. Otherwise
726 # it will also try to release it and we'll have a double-release
727 # mess.
File /opt/conda/lib/python3.8/site-packages/urllib3/connectionpool.py:415, in HTTPConnectionPool._make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw)
414 else:
--> 415 conn.request(method, url, **httplib_request_kw)
417 # We are swallowing BrokenPipeError (errno.EPIPE) since the server is
418 # legitimately able to close the connection after sending a valid response.
419 # With this behaviour, the received response is still readable.
File /opt/conda/lib/python3.8/site-packages/urllib3/connection.py:244, in HTTPConnection.request(self, method, url, body, headers)
243 headers["User-Agent"] = _get_default_user_agent()
--> 244 super(HTTPConnection, self).request(method, url, body=body, headers=headers)
File /opt/conda/lib/python3.8/http/client.py:1256, in HTTPConnection.request(self, method, url, body, headers, encode_chunked)
1255 """Send a complete request to the server."""
-> 1256 self._send_request(method, url, body, headers, encode_chunked)
File /opt/conda/lib/python3.8/http/client.py:1302, in HTTPConnection._send_request(self, method, url, body, headers, encode_chunked)
1301 body = _encode(body, 'body')
-> 1302 self.endheaders(body, encode_chunked=encode_chunked)
File /opt/conda/lib/python3.8/http/client.py:1251, in HTTPConnection.endheaders(self, message_body, encode_chunked)
1250 raise CannotSendHeader()
-> 1251 self._send_output(message_body, encode_chunked=encode_chunked)
File /opt/conda/lib/python3.8/http/client.py:1011, in HTTPConnection._send_output(self, message_body, encode_chunked)
1010 del self._buffer[:]
-> 1011 self.send(msg)
1013 if message_body is not None:
1014
1015 # create a consistent interface to message_body
File /opt/conda/lib/python3.8/http/client.py:951, in HTTPConnection.send(self, data)
950 if self.auto_open:
--> 951 self.connect()
952 else:
File /opt/conda/lib/python3.8/site-packages/urllib3/connection.py:205, in HTTPConnection.connect(self)
204 def connect(self):
--> 205 conn = self._new_conn()
206 self._prepare_conn(conn)
File /opt/conda/lib/python3.8/site-packages/urllib3/connection.py:186, in HTTPConnection._new_conn(self)
185 except SocketError as e:
--> 186 raise NewConnectionError(
187 self, "Failed to establish a new connection: %s" % e
188 )
190 return conn
NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f4f51ddf940>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution
During handling of the above exception, another exception occurred:
MaxRetryError Traceback (most recent call last)
File /opt/conda/lib/python3.8/site-packages/requests/adapters.py:487, in HTTPAdapter.send(self, request, stream, timeout, verify, cert, proxies)
486 try:
--> 487 resp = conn.urlopen(
488 method=request.method,
489 url=url,
490 body=request.body,
491 headers=request.headers,
492 redirect=False,
493 assert_same_host=False,
494 preload_content=False,
495 decode_content=False,
496 retries=self.max_retries,
497 timeout=timeout,
498 chunked=chunked,
499 )
501 except (ProtocolError, OSError) as err:
File /opt/conda/lib/python3.8/site-packages/urllib3/connectionpool.py:826, in HTTPConnectionPool.urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
823 log.warning(
824 "Retrying (%r) after connection broken by '%r': %s", retries, err, url
825 )
--> 826 return self.urlopen(
827 method,
828 url,
829 body,
830 headers,
831 retries,
832 redirect,
833 assert_same_host,
834 timeout=timeout,
835 pool_timeout=pool_timeout,
836 release_conn=release_conn,
837 chunked=chunked,
838 body_pos=body_pos,
839 **response_kw
840 )
842 # Handle redirect?
File /opt/conda/lib/python3.8/site-packages/urllib3/connectionpool.py:826, in HTTPConnectionPool.urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
823 log.warning(
824 "Retrying (%r) after connection broken by '%r': %s", retries, err, url
825 )
--> 826 return self.urlopen(
827 method,
828 url,
829 body,
830 headers,
831 retries,
832 redirect,
833 assert_same_host,
834 timeout=timeout,
835 pool_timeout=pool_timeout,
836 release_conn=release_conn,
837 chunked=chunked,
838 body_pos=body_pos,
839 **response_kw
840 )
842 # Handle redirect?
File /opt/conda/lib/python3.8/site-packages/urllib3/connectionpool.py:798, in HTTPConnectionPool.urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
796 e = ProtocolError("Connection aborted.", e)
--> 798 retries = retries.increment(
799 method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
800 )
801 retries.sleep()
File /opt/conda/lib/python3.8/site-packages/urllib3/util/retry.py:592, in Retry.increment(self, method, url, response, error, _pool, _stacktrace)
591 if new_retry.is_exhausted():
--> 592 raise MaxRetryError(_pool, url, error or ResponseError(cause))
594 log.debug("Incremented Retry for (url='%s'): %r", url, new_retry)
MaxRetryError: HTTPConnectionPool(host='[www.modelscope.cn](https://www.modelscope.cn/)', port=80): Max retries exceeded with url: /api/v1/models/damo/speech_fsmn_vad_zh-cn-16k-common-pytorch/revisions?EndTime=1692201600 (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f4f51ddf940>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution'))
During handling of the above exception, another exception occurred:
ConnectionError Traceback (most recent call last)
File /opt/conda/lib/python3.8/site-packages/modelscope/utils/registry.py:212, in build_from_cfg(cfg, registry, group_key, default_args)
211 else:
--> 212 return obj_cls(**args)
213 except Exception as e:
214 # Normal TypeError does not print class name.
File /opt/conda/lib/python3.8/site-packages/modelscope/pipelines/audio/asr_inference_pipeline.py:122, in AutomaticSpeechRecognitionPipeline.__init__(self, model, preprocessor, vad_model, vad_model_revision, punc_model, punc_model_revision, lm_model, lm_model_revision, timestamp_model, timestamp_model_revision, ngpu, **kwargs)
120 self.model_cfg = self.model.forward()
--> 122 self.cmd = self.get_cmd(kwargs, model)
123 from funasr.bin import asr_inference_launch
File /opt/conda/lib/python3.8/site-packages/modelscope/pipelines/audio/asr_inference_pipeline.py:371, in AutomaticSpeechRecognitionPipeline.get_cmd(self, extra_args, model_path)
370 update_local_model(model_config, model_path, extra_args)
--> 371 self.load_vad_model(cmd)
372 self.load_punc_model(cmd)
File /opt/conda/lib/python3.8/site-packages/modelscope/pipelines/audio/asr_inference_pipeline.py:407, in AutomaticSpeechRecognitionPipeline.load_vad_model(self, cmd)
406 else:
--> 407 vad_model = snapshot_download(
408 self.vad_model, revision=self.vad_model_revision)
409 logger.info('loading vad model from {0} ...'.format(vad_model))
File /opt/conda/lib/python3.8/site-packages/modelscope/hub/snapshot_download.py:96, in snapshot_download(model_id, revision, cache_dir, user_agent, local_files_only, cookies, ignore_file_pattern)
95 cookies = ModelScopeConfig.get_cookies()
---> 96 revision = _api.get_valid_revision(
97 model_id, revision=revision, cookies=cookies)
99 snapshot_header = headers if 'CI_TEST' in os.environ else {
100 **headers,
101 **{
102 'Snapshot': 'True'
103 }
104 }
File /opt/conda/lib/python3.8/site-packages/modelscope/hub/api.py:464, in HubApi.get_valid_revision(self, model_id, revision, cookies)
463 if revision is None: # user not specified revision, use latest revision before release time
--> 464 revisions = self.list_model_revisions(
465 model_id,
466 cutoff_timestamp=release_timestamp,
467 use_cookies=False if cookies is None else cookies)
468 if len(revisions) == 0:
File /opt/conda/lib/python3.8/site-packages/modelscope/hub/api.py:432, in HubApi.list_model_revisions(self, model_id, cutoff_timestamp, use_cookies)
431 path = f'{self.endpoint}/api/v1/models/{model_id}/revisions?EndTime=%s' % cutoff_timestamp
--> 432 r = self.session.get(path, cookies=cookies, headers=self.headers)
433 handle_http_response(r, logger, cookies, model_id)
File /opt/conda/lib/python3.8/site-packages/requests/sessions.py:600, in Session.get(self, url, **kwargs)
599 kwargs.setdefault("allow_redirects", True)
--> 600 return self.request("GET", url, **kwargs)
File /opt/conda/lib/python3.8/site-packages/requests/sessions.py:587, in Session.request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json)
586 send_kwargs.update(settings)
--> 587 resp = self.send(prep, **send_kwargs)
589 return resp
File /opt/conda/lib/python3.8/site-packages/requests/sessions.py:701, in Session.send(self, request, **kwargs)
700 # Send the request
--> 701 r = adapter.send(request, **kwargs)
703 # Total elapsed time of the request (approximately)
File /opt/conda/lib/python3.8/site-packages/requests/adapters.py:520, in HTTPAdapter.send(self, request, stream, timeout, verify, cert, proxies)
518 raise SSLError(e, request=request)
--> 520 raise ConnectionError(e, request=request)
522 except ClosedPoolError as e:
ConnectionError: HTTPConnectionPool(host='[www.modelscope.cn](https://www.modelscope.cn/)', port=80): Max retries exceeded with url: /api/v1/models/damo/speech_fsmn_vad_zh-cn-16k-common-pytorch/revisions?EndTime=1692201600 (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f4f51ddf940>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution'))
During handling of the above exception, another exception occurred:
ConnectionError Traceback (most recent call last)
Cell In[5], line 3
1 #model = "models/damo/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch"
2 #model = "speech_UniASR_asr_2pass-zh-cn-8k-common-vocab8358-tensorflow1-offline"
----> 3 decoder = pipeline("auto-speech-recognition",model)
File /opt/conda/lib/python3.8/site-packages/modelscope/pipelines/builder.py:147, in pipeline(task, model, preprocessor, config_file, pipeline_name, framework, device, model_revision, **kwargs)
144 if preprocessor is not None:
145 cfg.preprocessor = preprocessor
--> 147 return build_pipeline(cfg, task_name=task)
File /opt/conda/lib/python3.8/site-packages/modelscope/pipelines/builder.py:59, in build_pipeline(cfg, task_name, default_args)
48 def build_pipeline(cfg: ConfigDict,
49 task_name: str = None,
50 default_args: dict = None):
51 """ build pipeline given model config dict.
52
53 Args:
(...)
57 default_args (dict, optional): Default initialization arguments.
58 """
---> 59 return build_from_cfg(
60 cfg, PIPELINES, group_key=task_name, default_args=default_args)
File /opt/conda/lib/python3.8/site-packages/modelscope/utils/registry.py:215, in build_from_cfg(cfg, registry, group_key, default_args)
212 return obj_cls(**args)
213 except Exception as e:
214 # Normal TypeError does not print class name.
--> 215 raise type(e)(f'{obj_cls.__name__}: {e}')
ConnectionError: AutomaticSpeechRecognitionPipeline: HTTPConnectionPool(host='[www.modelscope.cn](https://www.modelscope.cn/)', port=80): Max retries exceeded with url: /api/v1/models/damo/speech_fsmn_vad_zh-cn-16k-common-pytorch/revisions?EndTime=1692201600 (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f4f51ddf940>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution')) | closed | 2023-08-18T11:35:28Z | 2024-07-04T02:47:31Z | https://github.com/modelscope/modelscope/issues/483 | [] | XufengXufengXufeng | 10 |
serengil/deepface | deep-learning | 1,215 | [BUG]: The verify function in gunicorn is unavailable | ### Before You Report a Bug, Please Confirm You Have Done The Following...
- [X] I have updated to the latest version of the packages.
- [X] I have searched for both [existing issues](https://github.com/serengil/deepface/issues) and [closed issues](https://github.com/serengil/deepface/issues?q=is%3Aissue+is%3Aclosed) and found none that matched my issue.
### DeepFace's version
0.0.90
### Python version
Python 3.9.19
### Operating System
Centos7
### Dependencies
tensorflow==2.16.1
keras==3.1.1
torch==2.2.1
torchvision==0.17.1
Flask==3.0.2
### Reproducible example
```Python
from deepface import DeepFace
import cv2
from flask import Flask,request,jsonify
class FaceRecognitionServer:
def __init__(self):
# Initialize Flask app
self.app = Flask(__name__)
self.define_routes()
def verify_faces(self,video_url, img_path2):
cap = cv2.VideoCapture(video_url)
while cap.isOpened():
ret, frame = cap.read()
            # Verify the two images with DeepFace
result = DeepFace.verify(frame, img_path2, model_name="Facenet",enforce_detection=False, detector_backend="yolov8")
            # Print the result and the time taken
print("Verification result:", result['verified'])
return result
        cap.release()  # Release the video file
def define_routes(self):
@self.app.route('/app', methods=['GET','POST'])
def face_recognition():
video_url = str(request.json.get('video_url'))
            # Path of the person's reference image stored in the database
Database_owner_url = str(request.json.get("database_owner_url"))
result = self.verify_faces(video_url,Database_owner_url)
return jsonify(result), 200
def run(self):
self.app.run(host="0.0.0.0",port=5000,debug=True)
if __name__ == '__main__':
# Instantiate the FaceRecognitionServer class and run the server
app = FaceRecognitionServer()
app.run()
starting mode:gunicorn -w 1 app:app --preload
```
### Relevant Log Output
starting mode:gunicorn -w 1 app:app --preload
### Expected Result
{
"detector_backend": "yolov8",
"distance": 0.6654026268386033,
"facial_areas": {
"img1": {
"h": 497,
"left_eye": [
606,
406
],
"right_eye": [
782,
400
],
"w": 369,
"x": 491,
"y": 217
},
"img2": {
"h": 203,
"left_eye": [
109,
115
],
"right_eye": [
178,
116
],
"w": 161,
"x": 61,
"y": 42
}
},
"model": "Facenet",
"similarity_metric": "cosine",
"threshold": 0.4,
"time": 18.42,
"verified": false
}
### What happened instead?
empty
### Additional Info
If I didn't use the --preload parameter, the program returned the correct response. If I used the --preload parameter, verify kept running indefinitely with no errors and no results. Even while the program was stuck, the rest of deepface's functions worked just fine with the --preload parameter set. Only the verify function has this problem. | closed | 2024-04-26T02:48:27Z | 2024-04-26T04:43:41Z | https://github.com/serengil/deepface/issues/1215 | [
"bug"
] | xioahai778 | 0 |
Kludex/mangum | asyncio | 318 | `Request.url` does not include the api_gateway_base_path | I am using Mangum in a Lambda handler behind an API Gateway with a custom domain name. When I instantiate Mangum I provide my base path mapping with the `api_gateway_base_path` parameter. Inside my handler I reference the `Request.url` property, but it does not provide the actual URL used for the request because it does not include the API Gateway base path value in the path.
Reading the FastAPI docs it seems that this is what the `root_path` scope property is supposed to be used for, but in the `APIGateway` handler class `root_path` gets set to `""`.
https://github.com/jordaneremieff/mangum/blob/main/mangum/handlers/api_gateway.py#L101
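For illustration, here is a minimal sketch of what deriving the scope paths from the configured base path could look like. This is a hypothetical helper, not Mangum's actual code; the function name and exact behavior are my assumptions:

```python
def build_scope_paths(raw_path, api_gateway_base_path=None):
    """Return (root_path, path) for an ASGI scope.

    Hypothetical sketch: instead of discarding the configured base path,
    expose it as root_path so frameworks can rebuild the external URL.
    """
    root_path = ""
    path = raw_path
    if api_gateway_base_path:
        prefix = "/" + api_gateway_base_path.strip("/")
        root_path = prefix
        # Strip the prefix from the routing path, as the handler already does.
        if path.startswith(prefix):
            path = path[len(prefix):] or "/"
    return root_path, path

print(build_scope_paths("/v1/items", "v1"))  # ('/v1', '/items')
print(build_scope_paths("/items", None))     # ('', '/items')
```

With `root_path` populated this way, Starlette/FastAPI could include the base path when reconstructing `Request.url`.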
Would it make sense for the `root_path` to be set to the `api_gateway_base_path` value if it is provided? | open | 2024-02-20T18:49:16Z | 2024-02-20T18:49:16Z | https://github.com/Kludex/mangum/issues/318 | [] | beck3905 | 0 |
LibreTranslate/LibreTranslate | api | 466 | Provide stable public instances list | Please provide the most stable public instances in the README.
I'm the maintainer of the NPM package https://github.com/translate-tools/core, which unifies translation APIs and provides translation primitives and translator implementations for the most popular translation services.
Your service is used by this package and sometimes fails tests. We have an issue about this problem: https://github.com/translate-tools/core/issues/75
For now we just skip the tests for LibreTranslate, because the public instances are not stable.
https://github.com/translate-tools/core/blob/fc0f000ef557c63c8616a77501345b25bdcf7c45/src/translators/__tests__/translators.test.ts#L38-L46
We need stable public instances with high availability, so that we do not have to mark the translator that uses your API as unstable.
Unstable translators are not visible to most users (though still available to use) and may be removed at any time in future releases without notice.
I like LibreTranslate and I use it myself, but as a maintainer I can't provide a translator implementation that does not have tests that run a few times a week to catch problems as soon as possible. To implement these tests, we need a publicly available API. Could you please suggest where I can find stable public instances of LibreTranslate? | open | 2023-07-20T13:22:09Z | 2025-02-27T17:31:08Z | https://github.com/LibreTranslate/LibreTranslate/issues/466 | [
"enhancement"
] | vitonsky | 2 |
MaxHalford/prince | scikit-learn | 76 | Incorrect Normalization | ```
if self.normalize:
# Scale continuous variables to unit variance
num = X.select_dtypes(np.number).columns
normalize = lambda x: x / np.sqrt((x ** 2).sum() or 1)
X.loc[:, num] = (X.loc[:, num] - X.loc[:, num].mean()).apply(normalize, axis='rows')
```
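As a quick numeric check of the problem (a sketch assuming NumPy is installed):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
centered = x - x.mean()

# Dividing by sqrt(sum of squares) gives std = 1/sqrt(n), not 1.
v1 = centered / np.sqrt((centered ** 2).sum())
# Dividing by sqrt(sum of squares / n), i.e. the population std, gives std = 1.
v2 = centered / np.sqrt((centered ** 2).sum() / len(centered))

print(v1.std())  # ~0.5 (= 1/sqrt(4)), not unit variance
print(v2.std())  # ~1.0
```

Dividing by the root of the sum of squares shrinks the data by an extra factor of sqrt(n); dividing by the standard deviation is what actually yields unit variance.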
This will result in zero mean but **not** unit variance; for unit variance one needs to divide by the standard deviation. A correct version would be something like:
```
if self.normalize:
# Scale continuous variables to unit variance
num = X.select_dtypes(np.number).columns
n_samp = X.shape[0]
normalize = lambda x: x / (np.sqrt((x ** 2).sum()/n_samp) or 1)
X.loc[:, num] = (X.loc[:, num] - X.loc[:, num].mean()).apply(normalize, axis='rows')
``` | closed | 2019-08-23T13:53:27Z | 2019-08-25T21:38:17Z | https://github.com/MaxHalford/prince/issues/76 | [
"invalid"
] | anantmalhotra | 2 |
quokkaproject/quokka | flask | 650 | Quokka installation more instructions. | I had problems installing this project even when following the instructions.
I needed to install two operating-system packages, python3.6-dev and pandoc, via apt-get. I use Ubuntu.
I think this information could be added to the README or the guidelines.
| open | 2018-02-27T10:12:44Z | 2018-09-26T12:45:02Z | https://github.com/quokkaproject/quokka/issues/650 | [
"TODOC"
] | Bernardoow | 5 |
babysor/MockingBird | deep-learning | 285 | RuntimeError: Error(s) in loading state_dict for Tacotron: size mismatch for gst.stl.attention.W_query.weight: copying a param with shape torch.Size([512, 256]) from checkpoint, the shape in current model is torch.Size([512, 512]). | closed | 2021-12-20T15:17:34Z | 2024-03-12T08:56:21Z | https://github.com/babysor/MockingBird/issues/285 | [] | DanMerry | 5 | |
numpy/numpy | numpy | 27,701 | ENH: StringDType equivalent for bytes | ### Proposed new feature or change:
numpy 2.0 added a type for variable-length `str`: `np.dtypes.StringDType()`. However, it is strange that there is no equivalent for `bytes` (a variable-length byte array).
Variable-length byte arrays are often encountered. For example, in machine learning, many examples are stored as arrays of encoded images (`bytes`).
There's a `np.dtypes.BytesDType`, but its behavior seems to match a fixed-length array:
```python
x = np.asarray([b'aaa', b'bbbff'], dtype=np.dtypes.BytesDType)
x.dtype  # dtype is '|S5', so space is wasted
```
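To make the space cost concrete, here is a small check of the fixed-width behavior (assumes NumPy is installed; the padding happens with or without an explicit dtype):

```python
import numpy as np

# Without a variable-length bytes dtype, numpy pads every element
# to the width of the longest one.
x = np.asarray([b'aaa', b'bbbff'])
print(x.dtype)   # |S5: every element occupies 5 bytes
print(x.nbytes)  # 10 bytes for 2 elements, even though only 8 bytes of data exist
```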
https://numpy.org/devdocs/user/basics.strings.html#casting-to-and-from-fixed-width-strings | open | 2024-11-04T13:23:03Z | 2024-11-07T16:42:08Z | https://github.com/numpy/numpy/issues/27701 | [] | Conchylicultor | 4 |
coqui-ai/TTS | deep-learning | 2,354 | [Bug] Cannot fine tune YourTTS with reinit_text_encoder = True due to Runtime Error | ### Describe the bug
Hi,
I am trying the YourTTS recipe with a French dataset and ResNet = 1. It trains great regarding the voice similarity and audio quality BUT there are still some mispronunciations even after 305k steps and it does not improve (the mispronunciations were there from step 60k onwards).
So after watching this [video](https://www.youtube.com/watch?v=1yt2W-uK8mk) I understood that the text encoder may be overfitting, so I decided to reset the text encoder and train it for some thousands of steps until the pronunciation is OK. My goal is to try and "save" my model, which trained for a week.
So in `model_args = VitsArgs(` I added `reinit_text_encoder = True` to the list of arguments and use as restore path the path to my 305k step model.
But after around 1h30 minutes I start to get some `tensorboardX.x2num:NaN or Inf found in input tensor` warning and then an increasing number of losses are becoming NaN and finally I get :
```
if torch.min(inputs) < left or torch.max(inputs) > right:
RuntimeError: min(): Expected reduction dim to be specified for input.numel() == 0. Specify the reduction dim with the 'dim' argument.
```
I tried to also add `reinit_DP=True` but the same error appeared.
I tried to also add `detach_dp_input = False` as explained in the video without success.
I tried to also `use_phonemes = True` because my previous VITS models with phonemes did not have such mispronunciations but the same error still appeared.
I searched the web and found that @erogol [suggested a bug in torch](https://github.com/coqui-ai/TTS/discussions/1949), but I did not change anything in my environment, nor did I reboot my computer. Consequently, I doubt it applies to my case, since I have been able to train VITS and YourTTS without errors for months.
Please note: if I continue the training with the original recipe (i.e. without reinit_text_encoder), it trains normally.
What can I do to only retrain the text encoder so that the mispronunciations disappear? Or is it even possible to correct the mispronunciations (I'd answer positively, since it is shown in the video)?
### To Reproduce
- Train a model for some steps (I only tried with my last checkpoint which has reached 305 k steps).
- Stop the training and add `reinit_text_encoder = True` for `model_args` in the YourTTS recipe.
- Set the `RESTORE_PATH` to the checkpoint you want to train from.
- Launch this recipe
- Wait a little bit and the Runtime Error should occur.
### Expected behavior
YourTTS fine tuning with `reinit_text_encoder = True` should work.
### Logs
_No response_
### Environment
```shell
- TTS version : 0.10.0
- Pytorch version : 1.13.1+cu117
- Python : 3.10.6
- OS : Ubuntu 22.04
```
### Additional context
_No response_ | closed | 2023-02-18T04:14:18Z | 2023-02-21T05:59:09Z | https://github.com/coqui-ai/TTS/issues/2354 | [
"bug"
] | Ca-ressemble-a-du-fake | 1 |
FactoryBoy/factory_boy | django | 209 | Lazy attributes with fake factories some times fails. | Hi.
Sometimes my builds fail with an integrity error like this: https://travis-ci.org/proofit404/codeflame/builds/67279773
Here is the model definition that causes this problem: https://github.com/alex/django-taggit/blob/develop/taggit/models.py#L87-L90
And here is my factory for it: https://github.com/alex/django-taggit/blob/develop/taggit/models.py#L87-L90
I tried to write it both with `LazyAttribute` and with `Sequence`. The result is pretty much the same: the build randomly fails with an integrity error.
Since there is `Faker` integration in the upcoming `factory_boy` release I think this issue will apply to it too.
Do you know any workaround to this problem?
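As a workaround sketch (my own assumption, not an official recommendation): collisions from random fake values can be avoided by making the generated value deterministic and unique, e.g. by appending a counter. In plain Python this looks like:

```python
import itertools

_counter = itertools.count()

def unique_tag_name(base="tag"):
    # Appending a monotonically increasing counter guarantees uniqueness
    # within a process, so unique-constraint IntegrityErrors cannot occur.
    return f"{base}-{next(_counter)}"

names = [unique_tag_name() for _ in range(3)]
print(names)  # ['tag-0', 'tag-1', 'tag-2']
```

In factory_boy this is essentially what `Sequence` is meant for, provided the sequence value itself ends up inside the generated string; collisions can still happen if the test database already contains rows with the same names.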
| closed | 2015-06-18T07:45:03Z | 2016-02-29T19:23:06Z | https://github.com/FactoryBoy/factory_boy/issues/209 | [
"NeedInfo",
"Django"
] | proofit404 | 9 |
xinntao/Real-ESRGAN | pytorch | 2 | 'VCOMP140D.DLL' is required | A straightforward execution of the 'Windows executable files' fails because 'VCOMP140D.DLL' is required.
For me it was necessary to install 'Visual Studio 2019' and load the 'MSVC v142' package to solve this problem.
Putting the 'VCOMP140D.DLL' into the 'Windows executable files' would help other users.
The results look good, thanks for releasing. | closed | 2021-07-24T23:06:51Z | 2021-07-31T18:39:35Z | https://github.com/xinntao/Real-ESRGAN/issues/2 | [] | ghost | 7
PokeAPI/pokeapi | graphql | 844 | Add Evolution chain details |
Currently, evolution details don't include regional forms in the data; those should be added for clearer information. Also, some evolution details are missing, such as the one for Rockruff, where it shows details for all evolution forms and conditions but states only Lycanroc as the evolved species, when it should state the Dusk, Midnight, etc. forms. So I think a round of data validation is needed. | closed | 2023-02-11T03:19:40Z | 2025-02-28T04:45:54Z | https://github.com/PokeAPI/pokeapi/issues/844 | [] | FallenDeity | 9
airtai/faststream | asyncio | 1,134 | tests: cover confluent docs_src | We should keep test coverage at 95%+, so to make a stable release we should cover as many confluent files as possible (at least all the tests) | closed | 2024-01-13T08:42:59Z | 2024-01-22T12:50:41Z | https://github.com/airtai/faststream/issues/1134 | [
"wontfix"
] | Lancetnik | 0 |
alecxe/scrapy-fake-useragent | web-scraping | 27 | [CRITICAL] useragentstring.com not working anymore | ```bash
2020-07-17 16:21:31 [fake_useragent] DEBUG: Error occurred during fetching http://useragentstring.com/pages/useragentstring.php?name=Chrome
```
It is failing to fetch it because the site seems to be down or not working properly. Consider removing it from the list or replacing it with another list.
This problem renders the tool useless. | closed | 2020-07-17T19:29:15Z | 2020-07-28T11:55:21Z | https://github.com/alecxe/scrapy-fake-useragent/issues/27 | [] | 0xfede7c8 | 12
bigscience-workshop/petals | nlp | 79 | Error: BFloat16 Unsupported scalar when trying to execute across multiple GPUs with BFloat16 & 8-Bits | I tried to run BLOOM distributed across multiple A100 GPUs with 8-Bit and using BFloat16 but ran into this error while trying to execute a slightly adjusted version of the example script:
```
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
For effortless bug reporting copy-paste your error into this form: https://docs.google.com/forms/d/e/1FAIpQLScPB8emS3Thkp66nvqwmjTEgxp8Y9ufuWTzFyr9kJ5AoI47dQ/viewform?usp=sf_link
================================================================================
CUDA SETUP: CUDA runtime path found: /datadrive/miniconda3/envs/petals/lib/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 8.0
CUDA SETUP: Detected CUDA version 113
CUDA SETUP: Loading binary /datadrive/miniconda3/envs/petals/lib/python3.9/site-packages/bitsandbytes/libbitsandbytes_cuda113.so...
Oct 18 09:52:07.795 [WARN] [/datadrive/repos/petals/src/client/remote_sequential.py.__init__:34] RemoteSequential is in active development; expect adventures
Some weights of DistributedBloomForCausalLM were not initialized from the model checkpoint at bloom-testing/test-bloomd-560m-main and are newly initialized: ['lm_head.word_embeddings.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Traceback (most recent call last):
File "/datadrive/repos/petals/simple_test_script.py", line 17, in <module>
remote_outputs = model.generate(inputs, max_length=100)
File "/datadrive/miniconda3/envs/petals/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/datadrive/repos/petals/src/client/remote_generation.py", line 113, in generate
hidden_state = sess.step(embs, prompts=intermediate_prompts, hypo_ids=hypo_ids)[:, -1]
File "/datadrive/repos/petals/src/client/inference_session.py", line 200, in step
outputs = session.step(inputs, prompts[self.chosen_spans[0].start : self.chosen_spans[0].end], **kwargs)
File "/datadrive/repos/petals/src/client/inference_session.py", line 109, in step
tensors=[
File "/datadrive/repos/petals/src/client/inference_session.py", line 110, in <listcomp>
serialize_torch_tensor(tensor.to(proto.dtype), proto.compression)
File "/datadrive/miniconda3/envs/petals/lib/python3.9/site-packages/hivemind/compression/serialization.py", line 41, in serialize_torch_tensor
return compression.compress(tensor, info, allow_inplace)
File "/datadrive/miniconda3/envs/petals/lib/python3.9/site-packages/hivemind/compression/base.py", line 83, in compress
array = tensor.detach().numpy()
TypeError: Got unsupported ScalarType BFloat16
```
**The code of simple_example_script:**
```
import torch
import torch.nn.functional as F
import transformers
from src import DistributedBloomForCausalLM
MODEL_NAME = "bloom-testing/test-bloomd-560m-main" #"bigscience/bloom-petals"
import os
initial_peer = os.getenv("initial_peer")
initial_peers = [initial_peer] # e.g. ["/ip4/127.0.0.1/tcp/more/stuff/here"]
tokenizer = transformers.BloomTokenizerFast.from_pretrained(MODEL_NAME)
model = DistributedBloomForCausalLM.from_pretrained(
MODEL_NAME, initial_peers=initial_peers, low_cpu_mem_usage=True, torch_dtype=torch.float32
) # this model has only embeddings / logits, all transformer blocks rely on remote servers
# model = model.to('cuda')
inputs = tokenizer("a cat sat", return_tensors="pt")["input_ids"]
remote_outputs = model.generate(inputs, max_length=100)
print(tokenizer.decode(remote_outputs[0])) # "a cat sat in the back of the car,"
# "train" input embeddings by backprop through distributed transformer blocks
model.transformer.word_embeddings.weight.requires_grad = True
outputs = model.forward(input_ids=inputs)
loss = F.cross_entropy(outputs.logits.flatten(0, 1), inputs.flatten())
loss.backward()
print("Gradients (norm):", model.transformer.word_embeddings.weight.grad.norm())
```
**Server launched via commands:**
```
python -m cli.run_server bloom-testing/test-bloomd-560m-main --num_blocks 12 --torch_dtype bfloat16 --host_maddrs /ip4/0.0.0.0/tcp/31337 --load_in_8bit
python -m cli.run_server bloom-testing/test-bloomd-560m-main --torch_dtype bfloat16 --host_maddrs /ip4/127.0.0.1/tcp/0 --load_in_8bit --initial_peers /ip4/127.0.0.1/tcp/31337/p2p/QmTHnjwKQFzvxrPesrSjtaL5eKUVdHfLsxV87vx8RFH21U --block_indices 12:24 --device cuda:1
```
**Packages in the environment, have been installed via requirements.txt:**
```
# packages in environment at /datadrive/miniconda3/envs/petals:
#
# Name Version Build Channel
_libgcc_mutex 0.1 main
_openmp_mutex 5.1 1_gnu
accelerate 0.10.0 pypi_0 pypi
aiohttp 3.8.3 pypi_0 pypi
aiosignal 1.2.0 pypi_0 pypi
asttokens 2.0.5 pyhd3eb1b0_0
async-timeout 4.0.2 pypi_0 pypi
attrs 22.1.0 pypi_0 pypi
backcall 0.2.0 pyhd3eb1b0_0
base58 2.1.1 pypi_0 pypi
bitsandbytes 0.34.0 pypi_0 pypi
blas 1.0 mkl
brotlipy 0.7.0 py39h27cfd23_1003
bzip2 1.0.8 h7b6447c_0
ca-certificates 2022.07.19 h06a4308_0
certifi 2022.9.24 py39h06a4308_0
cffi 1.15.1 py39h74dc2b5_0
charset-normalizer 2.0.4 pyhd3eb1b0_0
click 8.1.3 pypi_0 pypi
configargparse 1.5.3 pypi_0 pypi
cryptography 37.0.1 py39h9ce1e76_0
cudatoolkit 11.3.1 h2bc3f7f_2
datasets 2.5.2 pypi_0 pypi
debugpy 1.5.1 py39h295c915_0
decorator 5.1.1 pyhd3eb1b0_0
dill 0.3.5.1 pypi_0 pypi
docker-pycreds 0.4.0 pypi_0 pypi
entrypoints 0.4 py39h06a4308_0
executing 0.8.3 pyhd3eb1b0_0
ffmpeg 4.3 hf484d3e_0 pytorch
filelock 3.8.0 pypi_0 pypi
freetype 2.11.0 h70c0345_0
frozenlist 1.3.1 pypi_0 pypi
fsspec 2022.8.2 pypi_0 pypi
giflib 5.2.1 h7b6447c_0
gitdb 4.0.9 pypi_0 pypi
gitpython 3.1.29 pypi_0 pypi
gmp 6.2.1 h295c915_3
gnutls 3.6.15 he1e5248_0
grpcio 1.49.1 pypi_0 pypi
grpcio-tools 1.48.2 pypi_0 pypi
hivemind 1.1.1 pypi_0 pypi
huggingface-hub 0.7.0 pypi_0 pypi
humanfriendly 10.0 pypi_0 pypi
idna 3.3 pyhd3eb1b0_0
intel-openmp 2021.4.0 h06a4308_3561
ipykernel 6.15.2 py39h06a4308_0
ipython 8.4.0 py39h06a4308_0
jedi 0.18.1 py39h06a4308_1
jpeg 9e h7f8727e_0
jupyter_client 7.3.5 py39h06a4308_0
jupyter_core 4.11.1 py39h06a4308_0
lame 3.100 h7b6447c_0
lcms2 2.12 h3be6417_0
ld_impl_linux-64 2.38 h1181459_1
lerc 3.0 h295c915_0
libdeflate 1.8 h7f8727e_5
libffi 3.3 he6710b0_2
libgcc-ng 11.2.0 h1234567_1
libgomp 11.2.0 h1234567_1
libiconv 1.16 h7f8727e_2
libidn2 2.3.2 h7f8727e_0
libpng 1.6.37 hbc83047_0
libsodium 1.0.18 h7b6447c_0
libstdcxx-ng 11.2.0 h1234567_1
libtasn1 4.16.0 h27cfd23_0
libtiff 4.4.0 hecacb30_0
libunistring 0.9.10 h27cfd23_0
libwebp 1.2.4 h11a3e52_0
libwebp-base 1.2.4 h5eee18b_0
lz4-c 1.9.3 h295c915_1
matplotlib-inline 0.1.6 py39h06a4308_0
mkl 2021.4.0 h06a4308_640
mkl-service 2.4.0 py39h7f8727e_0
mkl_fft 1.3.1 py39hd3c417c_0
mkl_random 1.2.2 py39h51133e4_0
msgpack 1.0.4 pypi_0 pypi
multiaddr 0.0.9 pypi_0 pypi
multidict 6.0.2 pypi_0 pypi
multiprocess 0.70.13 pypi_0 pypi
ncurses 6.3 h5eee18b_3
nest-asyncio 1.5.5 py39h06a4308_0
netaddr 0.8.0 pypi_0 pypi
nettle 3.7.3 hbbd107a_1
numpy 1.23.1 py39h6c91a56_0
numpy-base 1.23.1 py39ha15fc14_0
openh264 2.1.1 h4ff587b_0
openssl 1.1.1q h7f8727e_0
packaging 21.3 pyhd3eb1b0_0
pandas 1.5.0 pypi_0 pypi
parso 0.8.3 pyhd3eb1b0_0
pathtools 0.1.2 pypi_0 pypi
pexpect 4.8.0 pyhd3eb1b0_3
pickleshare 0.7.5 pyhd3eb1b0_1003
pillow 9.2.0 py39hace64e9_1
pip 22.2.2 py39h06a4308_0
prefetch-generator 1.0.1 pypi_0 pypi
promise 2.3 pypi_0 pypi
prompt-toolkit 3.0.20 pyhd3eb1b0_0
protobuf 3.20.3 pypi_0 pypi
psutil 5.9.2 pypi_0 pypi
ptyprocess 0.7.0 pyhd3eb1b0_2
pure_eval 0.2.2 pyhd3eb1b0_0
pyarrow 9.0.0 pypi_0 pypi
pycparser 2.21 pyhd3eb1b0_0
pydantic 1.10.2 pypi_0 pypi
pygments 2.11.2 pyhd3eb1b0_0
pymultihash 0.8.2 pypi_0 pypi
pyopenssl 22.0.0 pyhd3eb1b0_0
pyparsing 3.0.9 py39h06a4308_0
pysocks 1.7.1 py39h06a4308_0
python 3.9.13 haa1d7c7_1
python-dateutil 2.8.2 pyhd3eb1b0_0
pytorch 1.12.1 py3.9_cuda11.3_cudnn8.3.2_0 pytorch
pytorch-mutex 1.0 cuda pytorch
pytz 2022.4 pypi_0 pypi
pyyaml 6.0 pypi_0 pypi
pyzmq 23.2.0 py39h6a678d5_0
readline 8.1.2 h7f8727e_1
regex 2022.9.13 pypi_0 pypi
requests 2.28.1 py39h06a4308_0
responses 0.18.0 pypi_0 pypi
scipy 1.9.2 pypi_0 pypi
sentry-sdk 1.9.10 pypi_0 pypi
setproctitle 1.3.2 pypi_0 pypi
setuptools 63.4.1 py39h06a4308_0
shortuuid 1.0.9 pypi_0 pypi
six 1.16.0 pyhd3eb1b0_1
smmap 5.0.0 pypi_0 pypi
sortedcontainers 2.4.0 pypi_0 pypi
sqlite 3.39.3 h5082296_0
stack_data 0.2.0 pyhd3eb1b0_0
tk 8.6.12 h1ccaba5_0
tokenizers 0.12.1 pypi_0 pypi
torchaudio 0.12.1 py39_cu113 pytorch
torchvision 0.13.1 py39_cu113 pytorch
tornado 6.2 py39h5eee18b_0
tqdm 4.64.1 pypi_0 pypi
traitlets 5.1.1 pyhd3eb1b0_0
transformers 4.21.3 pypi_0 pypi
typing_extensions 4.3.0 py39h06a4308_0
tzdata 2022c h04d1e81_0
urllib3 1.26.11 py39h06a4308_0
uvloop 0.17.0 pypi_0 pypi
varint 1.0.2 pypi_0 pypi
wandb 0.13.4 pypi_0 pypi
wcwidth 0.2.5 pyhd3eb1b0_0
wheel 0.37.1 pyhd3eb1b0_0
xxhash 3.0.0 pypi_0 pypi
xz 5.2.6 h5eee18b_0
yarl 1.8.1 pypi_0 pypi
zeromq 4.3.4 h2531618_0
zlib 1.2.12 h5eee18b_3
zstd 1.5.2 ha4553b6_0
```
I just used the small version for debugging purposes; I need to distribute it across multiple GPUs since I intend to run the 176bn BLOOM version. I tried to naively convert the tensor at that line to a supported dtype, but then another error occurred somewhere else down the line.
Since I want to do prompt tuning on 8x 40GB A100s, I think I have to use BFloat16 and 8-bit; or is there another solution/workaround with good performance?
| closed | 2022-10-18T14:30:29Z | 2022-11-29T05:54:15Z | https://github.com/bigscience-workshop/petals/issues/79 | [] | FTuma | 2 |
scikit-hep/awkward | numpy | 3,313 | Setting integer type within ArrayBuilder | ### Description of new feature
It seems that the ArrayBuilder always assumes int64 when setting up the data. It would be nice to set the integer type, like uint8 or uint64. This could help reduce the memory load when building huge datasets that only contain smaller numbers. The same goes for floats, for example. | closed | 2024-11-21T22:29:56Z | 2024-11-25T14:52:48Z | https://github.com/scikit-hep/awkward/issues/3313 | [
"feature"
] | TiniTinyTerminator | 1 |
wkentaro/labelme | computer-vision | 507 | Unexpected order of bounding box points | When annotating the image using `create rectangle`, the order of the points is `[[xmin, ymin], [xmax, ymax]]` if I draw the box from the top-left corner to the bottom-right corner. But this is problematic if I draw the box from the bottom-right to the top-left corner. | closed | 2019-11-05T03:21:45Z | 2019-12-05T12:57:25Z | https://github.com/wkentaro/labelme/issues/507 | [] | ArtificialNotImbecile | 1
ets-labs/python-dependency-injector | flask | 777 | Wire a single function manually | Is there a way to wire a function instead of wiring an entire module? | open | 2024-01-16T21:06:02Z | 2024-11-13T19:59:49Z | https://github.com/ets-labs/python-dependency-injector/issues/777 | [] | colonelpanic8 | 1 |
sinaptik-ai/pandas-ai | pandas | 1,168 | Support Chinese characters in prompt generation stage | ### System Info
pandasai == 2.0.43
python == 3.11
### 🐛 Describe the bug
I was trying to use the *Field Descriptions* feature to improve the LLM's understanding of my dataset. The way I am doing it is to write a data-description function that creates a dictionary of info about the dataset, then pass it to pandasai through *Field Descriptions* like this:
```
data = preview_data(df)
# define a connector
connector = PandasConnector({"original_df": df}, name='My Connector', field_descriptions=data)
```
My part of `data` looks like this:
```
{'时间': 'The 时间 column contains string values. The unique values are: 2023-6-14, 2022-4-22, 2022-11-5.'}
```
As you can see, there are some Chinese characters, but in the prompt-generation stage the Chinese characters were left as escape sequences rather than decoded, so the schema looks like this:
```
dfs[0]:
name: My Connector
description: null
type: pd.DataFrame
rows: 28
columns: 18
schema:
fields:
- name: "\u65F6\u95F4"
type: object
samples:
- 2022-4-22
- 2022-11-5
- 2023-6-14
```
This makes the LLM much more confused, since it sees "\u65F6\u95F4" instead of the actual column name.
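For reference, those escape sequences are just the ASCII-encoded form of the same characters; decoding them recovers the original name (plain Python, no extra dependencies):

```python
# "\u65F6\u95F4" is the ASCII-escaped form of the column name 时间.
escaped = "\\u65F6\\u95F4"
decoded = escaped.encode("ascii").decode("unicode_escape")
print(decoded)  # 时间
```

So one possible fix (an assumption about pandasai's internals, not verified) would be to serialize the schema with a flag that keeps non-ASCII characters readable, e.g. PyYAML's `yaml.dump(..., allow_unicode=True)`.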
Is there any way to solve this problem? Any suggestion would be appreciated! | closed | 2024-05-20T02:58:26Z | 2024-10-16T08:32:27Z | https://github.com/sinaptik-ai/pandas-ai/issues/1168 | [
"bug"
] | Tu-Zhenzhao | 1 |
pydata/xarray | numpy | 10,166 | Updating zarr causes errors when saving to zarr. | ### What is your issue?
xarray version 2024.9.0
zarr version 3.0.5
When attempting to save to zarr, the error below results. I can save the same file to zarr happily using zarr version 2.18.4. I've checked, and the same thing happens for a wide range of files. Resetting the encoding has no effect.
[tas_AUST-04_ERA5_historical_hres_BOM_BARRA-C2_v1_mon_197901-197901(1).zip](https://github.com/user-attachments/files/19414274/tas_AUST-04_ERA5_historical_hres_BOM_BARRA-C2_v1_mon_197901-197901.1.zip)
```
ds = xr.open_dataset('/scratch/nhat_drf_data/zarr_sandbox/tas_AUST-04_ERA5_historical_hres_BOM_BARRA-C2_v1_mon_197901-197901(1).nc')
ds.to_zarr('/scratch/nhat_drf_data/zarr_sandbox/test.zarr')
```
gives the error
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
File /opt/anaconda3/envs/nhat_eval/lib/python3.12/site-packages/zarr/core/common.py:139, in parse_shapelike(data)
138 try:
--> 139 data_tuple = tuple(data)
140 except TypeError as e:
TypeError: 'NoneType' object is not iterable
The above exception was the direct cause of the following exception:
TypeError Traceback (most recent call last)
Cell In[3], line 1
----> 1 ds.to_zarr('/scratch/nhat_drf_data/zarr_sandbox/test.zarr')
File /opt/anaconda3/envs/nhat_eval/lib/python3.12/site-packages/xarray/core/dataset.py:2562, in Dataset.to_zarr(self, store, chunk_store, mode, synchronizer, group, encoding, compute, consolidated, append_dim, region, safe_chunks, storage_options, zarr_version, write_empty_chunks, chunkmanager_store_kwargs)
2415 """Write dataset contents to a zarr group.
2416
2417 Zarr chunks are determined in the following way:
(...) 2558 The I/O user guide, with more details and examples.
2559 """
2560 from xarray.backends.api import to_zarr
-> 2562 return to_zarr( # type: ignore[call-overload,misc]
2563 self,
2564 store=store,
2565 chunk_store=chunk_store,
2566 storage_options=storage_options,
2567 mode=mode,
2568 synchronizer=synchronizer,
2569 group=group,
2570 encoding=encoding,
2571 compute=compute,
2572 consolidated=consolidated,
2573 append_dim=append_dim,
2574 region=region,
2575 safe_chunks=safe_chunks,
2576 zarr_version=zarr_version,
2577 write_empty_chunks=write_empty_chunks,
2578 chunkmanager_store_kwargs=chunkmanager_store_kwargs,
2579 )
File /opt/anaconda3/envs/nhat_eval/lib/python3.12/site-packages/xarray/backends/api.py:1784, in to_zarr(dataset, store, chunk_store, mode, synchronizer, group, encoding, compute, consolidated, append_dim, region, safe_chunks, storage_options, zarr_version, write_empty_chunks, chunkmanager_store_kwargs)
1782 writer = ArrayWriter()
1783 # TODO: figure out how to properly handle unlimited_dims
-> 1784 dump_to_store(dataset, zstore, writer, encoding=encoding)
1785 writes = writer.sync(
1786 compute=compute, chunkmanager_store_kwargs=chunkmanager_store_kwargs
1787 )
1789 if compute:
File /opt/anaconda3/envs/nhat_eval/lib/python3.12/site-packages/xarray/backends/api.py:1467, in dump_to_store(dataset, store, writer, encoder, encoding, unlimited_dims)
1464 if encoder:
1465 variables, attrs = encoder(variables, attrs)
-> 1467 store.store(variables, attrs, check_encoding, writer, unlimited_dims=unlimited_dims)
File /opt/anaconda3/envs/nhat_eval/lib/python3.12/site-packages/xarray/backends/zarr.py:720, in ZarrStore.store(self, variables, attributes, check_encoding_set, writer, unlimited_dims)
717 else:
718 variables_to_set = variables_encoded
--> 720 self.set_variables(
721 variables_to_set, check_encoding_set, writer, unlimited_dims=unlimited_dims
722 )
723 if self._consolidate_on_close:
724 zarr.consolidate_metadata(self.zarr_group.store)
File /opt/anaconda3/envs/nhat_eval/lib/python3.12/site-packages/xarray/backends/zarr.py:824, in ZarrStore.set_variables(self, variables, check_encoding_set, writer, unlimited_dims)
821 else:
822 encoding["write_empty_chunks"] = self._write_empty
--> 824 zarr_array = self.zarr_group.create(
825 name,
826 shape=shape,
827 dtype=dtype,
828 fill_value=fill_value,
829 **encoding,
830 )
831 zarr_array = _put_attrs(zarr_array, encoded_attrs)
833 write_region = self._write_region if self._write_region is not None else {}
File /opt/anaconda3/envs/nhat_eval/lib/python3.12/site-packages/zarr/core/group.py:2354, in Group.create(self, *args, **kwargs)
2352 def create(self, *args: Any, **kwargs: Any) -> Array:
2353 # Backwards compatibility for 2.x
-> 2354 return self.create_array(*args, **kwargs)
File /opt/anaconda3/envs/nhat_eval/lib/python3.12/site-packages/zarr/_compat.py:43, in _deprecate_positional_args.<locals>._inner_deprecate_positional_args.<locals>.inner_f(*args, **kwargs)
41 extra_args = len(args) - len(all_args)
42 if extra_args <= 0:
---> 43 return f(*args, **kwargs)
45 # extra_args > 0
46 args_msg = [
47 f"{name}={arg}"
48 for name, arg in zip(kwonly_args[:extra_args], args[-extra_args:], strict=False)
49 ]
File /opt/anaconda3/envs/nhat_eval/lib/python3.12/site-packages/zarr/core/group.py:2473, in Group.create_array(self, name, shape, dtype, chunks, shards, filters, compressors, compressor, serializer, fill_value, order, attributes, chunk_key_encoding, dimension_names, storage_options, overwrite, config)
2378 """Create an array within this group.
2379
2380 This method lightly wraps :func:`zarr.core.array.create_array`.
(...) 2467 AsyncArray
2468 """
2469 compressors = _parse_deprecated_compressor(
2470 compressor, compressors, zarr_format=self.metadata.zarr_format
2471 )
2472 return Array(
-> 2473 self._sync(
2474 self._async_group.create_array(
2475 name=name,
2476 shape=shape,
2477 dtype=dtype,
2478 chunks=chunks,
2479 shards=shards,
2480 fill_value=fill_value,
2481 attributes=attributes,
2482 chunk_key_encoding=chunk_key_encoding,
2483 compressors=compressors,
2484 serializer=serializer,
2485 dimension_names=dimension_names,
2486 order=order,
2487 filters=filters,
2488 overwrite=overwrite,
2489 storage_options=storage_options,
2490 config=config,
2491 )
2492 )
2493 )
File /opt/anaconda3/envs/nhat_eval/lib/python3.12/site-packages/zarr/core/sync.py:208, in SyncMixin._sync(self, coroutine)
205 def _sync(self, coroutine: Coroutine[Any, Any, T]) -> T:
206 # TODO: refactor this to to take *args and **kwargs and pass those to the method
207 # this should allow us to better type the sync wrapper
--> 208 return sync(
209 coroutine,
210 timeout=config.get("async.timeout"),
211 )
File /opt/anaconda3/envs/nhat_eval/lib/python3.12/site-packages/zarr/core/sync.py:163, in sync(coro, loop, timeout)
160 return_result = next(iter(finished)).result()
162 if isinstance(return_result, BaseException):
--> 163 raise return_result
164 else:
165 return return_result
File /opt/anaconda3/envs/nhat_eval/lib/python3.12/site-packages/zarr/core/sync.py:119, in _runner(coro)
114 """
115 Await a coroutine and return the result of running it. If awaiting the coroutine raises an
116 exception, the exception will be returned.
117 """
118 try:
--> 119 return await coro
120 except Exception as ex:
121 return ex
File /opt/anaconda3/envs/nhat_eval/lib/python3.12/site-packages/zarr/core/group.py:1102, in AsyncGroup.create_array(self, name, shape, dtype, chunks, shards, filters, compressors, compressor, serializer, fill_value, order, attributes, chunk_key_encoding, dimension_names, storage_options, overwrite, config)
1007 """Create an array within this group.
1008
1009 This method lightly wraps :func:`zarr.core.array.create_array`.
(...) 1097
1098 """
1099 compressors = _parse_deprecated_compressor(
1100 compressor, compressors, zarr_format=self.metadata.zarr_format
1101 )
-> 1102 return await create_array(
1103 store=self.store_path,
1104 name=name,
1105 shape=shape,
1106 dtype=dtype,
1107 chunks=chunks,
1108 shards=shards,
1109 filters=filters,
1110 compressors=compressors,
1111 serializer=serializer,
1112 fill_value=fill_value,
1113 order=order,
1114 zarr_format=self.metadata.zarr_format,
1115 attributes=attributes,
1116 chunk_key_encoding=chunk_key_encoding,
1117 dimension_names=dimension_names,
1118 storage_options=storage_options,
1119 overwrite=overwrite,
1120 config=config,
1121 )
File /opt/anaconda3/envs/nhat_eval/lib/python3.12/site-packages/zarr/core/array.py:4146, in create_array(store, name, shape, dtype, data, chunks, shards, filters, compressors, serializer, fill_value, order, zarr_format, attributes, chunk_key_encoding, dimension_names, storage_options, overwrite, config, write_data)
4141 store_path = await make_store_path(store, path=name, mode=mode, storage_options=storage_options)
4143 data_parsed, shape_parsed, dtype_parsed = _parse_data_params(
4144 data=data, shape=shape, dtype=dtype
4145 )
-> 4146 result = await init_array(
4147 store_path=store_path,
4148 shape=shape_parsed,
4149 dtype=dtype_parsed,
4150 chunks=chunks,
4151 shards=shards,
4152 filters=filters,
4153 compressors=compressors,
4154 serializer=serializer,
4155 fill_value=fill_value,
4156 order=order,
4157 zarr_format=zarr_format,
4158 attributes=attributes,
4159 chunk_key_encoding=chunk_key_encoding,
4160 dimension_names=dimension_names,
4161 overwrite=overwrite,
4162 config=config,
4163 )
4165 if write_data is True and data_parsed is not None:
4166 await result._set_selection(
4167 BasicIndexer(..., shape=result.shape, chunk_grid=result.metadata.chunk_grid),
4168 data_parsed,
4169 prototype=default_buffer_prototype(),
4170 )
File /opt/anaconda3/envs/nhat_eval/lib/python3.12/site-packages/zarr/core/array.py:3989, in init_array(store_path, shape, dtype, chunks, shards, filters, compressors, serializer, fill_value, order, zarr_format, attributes, chunk_key_encoding, dimension_names, overwrite, config)
3986 chunks_out = chunk_shape_parsed
3987 codecs_out = sub_codecs
-> 3989 meta = AsyncArray._create_metadata_v3(
3990 shape=shape_parsed,
3991 dtype=dtype_parsed,
3992 fill_value=fill_value,
3993 chunk_shape=chunks_out,
3994 chunk_key_encoding=chunk_key_encoding_parsed,
3995 codecs=codecs_out,
3996 dimension_names=dimension_names,
3997 attributes=attributes,
3998 )
4000 arr = AsyncArray(metadata=meta, store_path=store_path, config=config)
4001 await arr._save_metadata(meta, ensure_parents=True)
File /opt/anaconda3/envs/nhat_eval/lib/python3.12/site-packages/zarr/core/array.py:694, in AsyncArray._create_metadata_v3(shape, dtype, chunk_shape, fill_value, chunk_key_encoding, codecs, dimension_names, attributes)
687 if dtype.kind in "UTS":
688 warn(
689 f"The dtype `{dtype}` is currently not part in the Zarr format 3 specification. It "
690 "may not be supported by other zarr implementations and may change in the future.",
691 category=UserWarning,
692 stacklevel=2,
693 )
--> 694 chunk_grid_parsed = RegularChunkGrid(chunk_shape=chunk_shape)
695 return ArrayV3Metadata(
696 shape=shape,
697 data_type=dtype,
(...) 703 attributes=attributes or {},
704 )
File /opt/anaconda3/envs/nhat_eval/lib/python3.12/site-packages/zarr/core/chunk_grids.py:176, in RegularChunkGrid.__init__(self, chunk_shape)
175 def __init__(self, *, chunk_shape: ChunkCoordsLike) -> None:
--> 176 chunk_shape_parsed = parse_shapelike(chunk_shape)
178 object.__setattr__(self, "chunk_shape", chunk_shape_parsed)
File /opt/anaconda3/envs/nhat_eval/lib/python3.12/site-packages/zarr/core/common.py:142, in parse_shapelike(data)
140 except TypeError as e:
141 msg = f"Expected an integer or an iterable of integers. Got {data} instead."
--> 142 raise TypeError(msg) from e
144 if not all(isinstance(v, int) for v in data_tuple):
145 msg = f"Expected an iterable of integers. Got {data} instead."
TypeError: Expected an integer or an iterable of integers. Got None instead.
[tas_AUST-04_ERA5_historical_hres_BOM_BARRA-C2_v1_mon_197901-197901(1).zip](https://github.com/user-attachments/files/19414266/tas_AUST-04_ERA5_historical_hres_BOM_BARRA-C2_v1_mon_197901-197901.1.zip)
``` | closed | 2025-03-24T04:27:09Z | 2025-03-24T04:32:47Z | https://github.com/pydata/xarray/issues/10166 | [
"needs triage"
] | bweeding | 1 |
modoboa/modoboa | django | 3,235 | dkim_keys_storage_dir [ "Directory non-writable" ] | # Impacted versions
* OS Type: Debian
* OS Version: 12
* Database Type: postgres
* Database version: 15.6
* Modoboa: 2.2.4
* installer used: yes
* Webserver: nginx
# Steps to reproduce
1. Install this; don't change anything from the defaults.
2. Log in to the new-admin interface
3. Go to /new-admin/parameters/admin (Settings > Administration)
4. Press Ctrl+Shift+K (to open the debug console in your browser)
5. No need to change anything; just click the green floppy-disk icon in the bottom-right corner, then see the server response indicating failure
# Current behavior
This was installed yesterday (found after #3234); nothing was changed in the filesystem. Everything is either the Debian default or was set by the installation script.
## Response
```
XHRPUT
https://mail.perfugium.net/api/v2/parameters/admin/
[HTTP/2 400 109ms]
dkim_keys_storage_dir [ "Directory non-writable" ]
0 "Directory non-writable"
```
This can be fixed with a chmod/chown command, but shouldn't the installer set this up properly? (Which user should have write access to this directory, apart from the opendkim user?)
```bash
ls -l /var/lib/dkim
total 8
drwxr-xr-x 2 opendkim opendkim 4096 Apr 10 11:34 .
drwxr-xr-x 36 root root 4096 Apr 10 11:41 ..
```
# Expected behavior
status 200
# Video/Screenshot link (optional)

| closed | 2024-04-11T12:40:41Z | 2024-07-17T11:11:43Z | https://github.com/modoboa/modoboa/issues/3235 | [
"bug"
] | usernamehyphen | 1 |
dynaconf/dynaconf | fastapi | 344 | Load from env without defaults | I'm trying to load a section of the config using `from_env`.
Is there any way to get rid of defaults?
```
inputs = settings.from_env('inputs')
print(inputs.to_dict())
{'TITLE': '123', 'AUTHOR': '123', 'CUSTOMER': '123', '123': {'test': 'test'}}
```
Is there any way to just contain the following:
```
{'123': {'test': 'test'}}
```
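One way to get that filtered view (a sketch: it assumes the set of default/global keys, here a hypothetical `DEFAULT_KEYS`, is known up front) is to drop those keys yourself:

```python
# Sketch: strip known default/global keys from the env dict so only the
# env-specific entries remain. DEFAULT_KEYS is an assumption for illustration.
DEFAULT_KEYS = {"TITLE", "AUTHOR", "CUSTOMER"}

def strip_defaults(env_cfg: dict) -> dict:
    """Return only the entries that are not among the known defaults."""
    return {k: v for k, v in env_cfg.items() if k not in DEFAULT_KEYS}

inputs = {"TITLE": "123", "AUTHOR": "123", "CUSTOMER": "123", "123": {"test": "test"}}
print(strip_defaults(inputs))  # {'123': {'test': 'test'}}
```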
My intention is to then iterate over these configurations if possible. | open | 2020-05-22T16:49:02Z | 2024-07-01T10:26:57Z | https://github.com/dynaconf/dynaconf/issues/344 | [
"question",
"hacktoberfest",
"RFC",
"Docs"
] | minitriga | 5 |
dsdanielpark/Bard-API | nlp | 84 | KeyError: 'images' when using ChatBard | This is the code snippet I run:
```python
from bardapi import ChatBard
chat = ChatBard(token=token, language='en')
chat.start()
```
Sometimes the KeyError still pops up even after I modify the code in chat.py. Could this be a network-related problem? Note that I'm using a virtual network. Thanks, guys.
<img width="927" alt="image" src="https://github.com/dsdanielpark/Bard-API/assets/82095274/23b165d2-906f-432f-9995-af9c8dc38ead">
| closed | 2023-06-29T14:03:15Z | 2023-06-30T06:43:35Z | https://github.com/dsdanielpark/Bard-API/issues/84 | [] | Xiansssss | 2 |
huggingface/transformers | pytorch | 36,705 | Ruff update | Currently, `transformers` uses the outdated `ruff-lsp`
https://github.com/huggingface/transformers/blob/09a309d27364204eb118d352f22483bdd9652a46/setup.py#L164
Please see [this discussion](https://github.com/astral-sh/ruff/discussions/15991). I think it would be good to update to the native language server, which was marked stable in [Ruff 0.5.3](https://github.com/astral-sh/ruff/releases/tag/0.5.3).
@Rocketknight1 | open | 2025-03-13T17:44:46Z | 2025-03-24T11:39:09Z | https://github.com/huggingface/transformers/issues/36705 | [] | d-kleine | 6 |
MaartenGr/BERTopic | nlp | 1,533 | Embedding Error | Hi @MaartenGr ,
I installed a Google package that updated some other packages, and after that I am getting the following error. Can you please help me resolve this? Thanks!
2023-09-19 16:43:58,838 - BERTopic - Transformed documents to Embeddings
Traceback (most recent call last):
topics, probs = topic_model.fit_transform(docs)
File "... /bertopic/_bertopic.py", line 350, in fit_transform
y, embeddings = self._guided_topic_modeling(embeddings)
File "... /bertopic/_bertopic.py", line 2919, in _guided_topic_modeling
seed_topic_embeddings = np.vstack([seed_topic_embeddings, embeddings.mean(axis=0)])
File "<__array_function__ internals>", line 5, in vstack
File "... /site-packages/numpy/core/shape_base.py", line 282, in vstack
return _nx.concatenate(arrs, 0)
File "<__array_function__ internals>", line 5, in concatenate
ValueError: all the input array dimensions for the concatenation axis must match exactly, but along dimension 1, the array at index 0 has size 46 and the array at index 1 has size 100 | open | 2023-09-19T06:51:08Z | 2023-11-14T00:40:58Z | https://github.com/MaartenGr/BERTopic/issues/1533 | [] | mjavedgohar | 3 |
OthersideAI/self-operating-computer | automation | 122 | `monitor_size` is hardcoded | Hello, thanks for sharing this awesome project!
I noticed that in `config/settings.py` the monitor size is hardcoded as:
```python
{
"width": 1920,
"height": 1080,
}
```
Is that intentional? I figured out that you can get the monitor size using `pyautogui.size()`. | closed | 2024-01-02T02:24:31Z | 2024-01-02T04:44:23Z | https://github.com/OthersideAI/self-operating-computer/issues/122 | [] | outday29 | 1 |
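The suggested change can be sketched as a runtime lookup with a guarded fallback (the fallback mirrors the current hardcoded default, since `pyautogui` needs a working display to be importable):

```python
# Replace the hardcoded dict with a runtime lookup. pyautogui requires a
# display, so fall back to the old 1920x1080 default when it's unusable.
try:
    import pyautogui
    width, height = pyautogui.size()
except Exception:
    width, height = 1920, 1080

monitor_size = {"width": width, "height": height}
```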
zihangdai/xlnet | nlp | 292 | XLnet colab example error . | When I run the XLNet Colab example
https://github.com/zihangdai/xlnet/blob/master/notebooks/colab_imdb_gpu.ipynb
It showed the following error:
=========================
Traceback (most recent call last):
File "xlnet/run_classifier.py", line 25, in <module>
import model_utils
File "/content/xlnet/model_utils.py", line 295, in <module>
class AdamWeightDecayOptimizer(tf.train.Optimizer):
AttributeError: module 'tensorflow._api.v2.train' has no attribute 'Optimizer' | open | 2023-02-08T16:43:35Z | 2023-02-08T16:49:18Z | https://github.com/zihangdai/xlnet/issues/292 | [] | AlexTrinityBlock | 1 |
jonaswinkler/paperless-ng | django | 1,001 | [BUG] Document not found after update | **Describe the bug**
I updated paperless-ng to 1.4.2. Since this update I cannot open any old document; paperless always throws an HTTP 404.
FYI: all documents are still present in my storage folder :)
FYI 2: new documents work great.
**To Reproduce**
Steps to reproduce the behavior:
1. Upgrade to 1.4.2
2. Open some document which exists before the update
**Screenshots**


**Webserver logs**
```
[2021-05-10 07:18:42,520] [WARNING] [django.request] Not Found: /api/documents/177/preview/
[2021-05-10 07:18:46,981] [WARNING] [django.request] Not Found: /api/documents/177/download/
```
**Relevant information**
- Ubuntu Server with docker installation
- browser: Firefox / Chrome
- docker-compose.yml: https://paste.solardorf.eu/lewixuboqa | closed | 2021-05-10T07:28:02Z | 2021-05-22T05:01:54Z | https://github.com/jonaswinkler/paperless-ng/issues/1001 | [] | gruessung | 13 |
MilesCranmer/PySR | scikit-learn | 775 | New scikit-learn tests failing | Looks like there are some newly added scikit-learn tests. Some of them are failing:
```python
Failed check_do_not_raise_errors_in_init_or_set_params with:
Traceback (most recent call last):
File "/home/runner/miniconda3/envs/pysr-test/lib/python3.10/site-packages/pysr/test/test_main.py", line 885, in test_scikit_learn_compatibility
check(model)
File "/home/runner/miniconda3/envs/pysr-test/lib/python3.10/site-packages/sklearn/utils/estimator_checks.py", line 5221, in check_do_not_raise_errors_in_init_or_set_params
est = Estimator(**new_params)
File "/home/runner/miniconda3/envs/pysr-test/lib/python3.10/site-packages/pysr/sr.py", line 1024, in __init__
raise TypeError(err_msg)
TypeError: `kwargs` is not a valid keyword argument for PySRRegressor.
Failed check_n_features_in_after_fitting with:
Traceback (most recent call last):
File "/home/runner/miniconda3/envs/pysr-test/lib/python3.10/site-packages/sklearn/utils/estimator_checks.py", line 4410, in check_n_features_in_after_fitting
callable_method(X_bad)
File "/home/runner/miniconda3/envs/pysr-test/lib/python3.10/site-packages/pysr/sr.py", line 2336, in predict
X.columns = self.feature_names_in_
File "/home/runner/miniconda3/envs/pysr-test/lib/python3.10/site-packages/pandas/core/generic.py", line 6313, in __setattr__
return object.__setattr__(self, name, value)
File "properties.pyx", line 69, in pandas._libs.properties.AxisProperty.__set__
File "/home/runner/miniconda3/envs/pysr-test/lib/python3.10/site-packages/pandas/core/generic.py", line 814, in _set_axis
self._mgr.set_axis(axis, labels)
File "/home/runner/miniconda3/envs/pysr-test/lib/python3.10/site-packages/pandas/core/internals/managers.py", line 238, in set_axis
self._validate_set_axis(axis, new_labels)
File "/home/runner/miniconda3/envs/pysr-test/lib/python3.10/site-packages/pandas/core/internals/base.py", line 98, in _validate_set_axis
raise ValueError(
ValueError: Length mismatch: Expected axis has 1 elements, new values have 4 elements
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/runner/miniconda3/envs/pysr-test/lib/python3.10/site-packages/pysr/test/test_main.py", line 885, in test_scikit_learn_compatibility
check(model)
File "/home/runner/miniconda3/envs/pysr-test/lib/python3.10/site-packages/sklearn/utils/_testing.py", line 147, in wrapper
return fn(*args, **kwargs)
File "/home/runner/miniconda3/envs/pysr-test/lib/python3.10/site-packages/sklearn/utils/estimator_checks.py", line 4407, in check_n_features_in_after_fitting
with raises(
File "/home/runner/miniconda3/envs/pysr-test/lib/python3.10/site-packages/sklearn/utils/_testing.py", line 1114, in __exit__
raise AssertionError(err_msg) from exc_value
AssertionError: `PySRRegressor.predict()` does not check for consistency between input number
of features with PySRRegressor.fit(), via the `n_features_in_` attribute.
You might want to use `sklearn.utils.validation.validate_data` instead
of `check_array` in `PySRRegressor.fit()` and PySRRegressor.predict()`. This can be done
like the following:
from sklearn.utils.validation import validate_data
...
class MyEstimator(BaseEstimator):
...
def fit(self, X, y):
X, y = validate_data(self, X, y, ...)
...
return self
...
def predict(self, X):
X = validate_data(self, X, ..., reset=False)
...
return X
```
I think I will disable the `check_do_not_raise_errors_in_init_or_set_params` test because I think the deprecation errors are more helpful than just throwing a naked error. Maybe sklearn means we should throw these errors during `fit` rather than `__init__`? But in that case I'm not sure how to store the `kwargs`. | closed | 2024-12-11T19:36:55Z | 2024-12-12T03:55:36Z | https://github.com/MilesCranmer/PySR/issues/775 | [] | MilesCranmer | 0 |
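For reference, the fit/predict handshake that `check_n_features_in_after_fitting` expects can be sketched generically (an illustration of the pattern, not PySR's actual code):

```python
class StubRegressor:
    """Minimal illustration of the n_features_in_ consistency check
    that sklearn's check_n_features_in_after_fitting asks for."""

    def fit(self, X, y=None):
        # Record how many features the estimator was trained on.
        self.n_features_in_ = len(X[0])
        return self

    def predict(self, X):
        # Refuse inputs whose feature count differs from fit time.
        n = len(X[0])
        if n != self.n_features_in_:
            raise ValueError(
                f"X has {n} features, but this estimator was fitted with "
                f"{self.n_features_in_} features."
            )
        return [0.0] * len(X)
```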
RobertCraigie/prisma-client-py | asyncio | 319 | Expand List input types | ## Problem
Our query input types use `List` (e.g. `{'string': {'not_in': ['a', 'b']}}`); this severely limits the types that users can pass to these methods. We should aim to be as broad as possible.
## Suggested solution
We could switch these types to accept `Iterable`; however, this could cause some false positives: since `str` is itself `Iterable`, the following would now be statically accepted for string fields:
```py
{
'not_in': 'a',
}
```
This would cause an error at runtime.
I do not know how solvable this is by us until a PEP is drafted for a `Not` type. | open | 2022-03-04T18:33:37Z | 2022-09-08T16:20:25Z | https://github.com/RobertCraigie/prisma-client-py/issues/319 | [
"kind/improvement",
"topic: client",
"level/advanced",
"priority/low"
] | RobertCraigie | 0 |
marcomusy/vedo | numpy | 888 | module 'numpy' has no attribute 'bool' | Cannot convert volumes loaded from .nrrd to numpy arrays due to deprecated bool call:
```
import vedo
vedo.load('/path/to/file.nrrd')
/Users/nicholas.lusk/opt/miniconda3/envs/ng_link_env/lib/python3.9/site-packages/vtkmodules/util/numpy_support.py:74: FutureWarning: In the future `np.bool` will be defined as the corresponding NumPy scalar.
_vtk_np = {vtkConstants.VTK_BIT:numpy.bool,
Traceback (most recent call last):
File "/Users/opt/miniconda3/envs/ng_link_env/lib/python3.9/site-packages/IPython/core/formatters.py", line 345, in __call__
return method()
File "/Users/opt/miniconda3/envs/ng_link_env/lib/python3.9/site-packages/vedo/volume.py", line 58, in _repr_html_
arr = self.thumbnail(azimuth=0, elevation=-60, zoom=1.4, axes=True)
File "/Users/opt/miniconda3/envs/ng_link_env/lib/python3.9/site-packages/vedo/base.py", line 980, in thumbnail
axes = vedo.addons.Axes(self)
File "/Users/opt/miniconda3/envs/ng_link_env/lib/python3.9/site-packages/vedo/addons.py", line 3117, in Axes
gxy = shapes.Grid(s=(xticks_float, yticks_float))
File "/Users/opt/miniconda3/envs/ng_link_env/lib/python3.9/site-packages/vedo/shapes.py", line 3031, in __init__
Mesh.__init__(self, [verts, faces], c, alpha)
File "/Users/opt/miniconda3/envs/ng_link_env/lib/python3.9/site-packages/vedo/mesh.py", line 142, in __init__
self._data = buildPolyData(inputobj[0], inputobj[1])
File "/Users/opt/miniconda3/envs/ng_link_env/lib/python3.9/site-packages/vedo/utils.py", line 586, in buildPolyData
source_points.SetData(numpy2vtk(vertices, dtype=np.float32))
File "/Users/opt/miniconda3/envs/ng_link_env/lib/python3.9/site-packages/vedo/utils.py", line 447, in numpy2vtk
varr = numpy_to_vtk(arr.astype(dtype), deep=deep)
File "/Users/opt/miniconda3/envs/ng_link_env/lib/python3.9/site-packages/vtkmodules/util/numpy_support.py", line 164, in numpy_to_vtk
arr_dtype = get_numpy_array_type(vtk_typecode)
File "/Users/opt/miniconda3/envs/ng_link_env/lib/python3.9/site-packages/vtkmodules/util/numpy_support.py", line 94, in get_numpy_array_type
return get_vtk_to_numpy_typemap()[vtk_array_type]
File "/Users/opt/miniconda3/envs/ng_link_env/lib/python3.9/site-packages/vtkmodules/util/numpy_support.py", line 74, in get_vtk_to_numpy_typemap
_vtk_np = {vtkConstants.VTK_BIT:numpy.bool,
File "/Users/opt/miniconda3/envs/ng_link_env/lib/python3.9/site-packages/numpy/__init__.py", line 305, in __getattr__
raise AttributeError(__former_attrs__[attr])
AttributeError: module 'numpy' has no attribute 'bool'.
`np.bool` was a deprecated alias for the builtin `bool`. To avoid this error in existing code, use `bool` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.bool_` here.
The aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at:
https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
``` | closed | 2023-06-20T22:14:34Z | 2023-06-21T00:56:48Z | https://github.com/marcomusy/vedo/issues/888 | [] | nal10 | 2 |
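A common stopgap for the `np.bool` failure above is to restore the alias before importing vedo; a hedged sketch (this monkey-patches NumPy, so use it with care and remove it once the dependency chain is fixed):

```python
import numpy as np

# Restore the alias that NumPy 1.24 removed, so vtkmodules' numpy_support
# (which still references numpy.bool) keeps working. This is a no-op on
# NumPy versions where the attribute already exists.
if not hasattr(np, "bool"):
    np.bool = bool
```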
huggingface/datasets | nlp | 6,689 | .load_dataset() method defaults to zstandard | ### Describe the bug
Regardless of what method I use, datasets defaults to zstandard for unpacking my datasets.
This is poor behavior, because not only is zstandard not a dependency in the huggingface package (and therefore, your dataset loading will be interrupted while it asks you to install the package), but it happens on datasets that are uploaded in json format too, meaning the dataset loader will attempt to convert the data to a zstandard compatible format, and THEN try to unpackage it.
My 4tb drive runs out of room when using zstandard on slimpajama. It loads fine on 1.5tb when using json, however I lack the understanding of the "magic numbers" system used to select the unpackaging algorithm, so I can't push a change myself.
Commenting out this line, in "/datasets/utils/extract.py" fixes the issue, and causes SlimPajama to properly extract using rational amounts of storage, however it completely disables zstandard, which is probably undesirable behavior. Someone with an understanding of the "magic numbers" system should probably take a pass over this issue.
```
class Extractor:
# Put zip file to the last, b/c it is possible wrongly detected as zip (I guess it means: as tar or gzip)
extractors: Dict[str, Type[BaseExtractor]] = {
"tar": TarExtractor,
"gzip": GzipExtractor,
"zip": ZipExtractor,
"xz": XzExtractor,
#"zstd": ZstdExtractor, # This line needs to go, in order for datasets to work w/o non-dependent packages
"rar": RarExtractor,
"bz2": Bzip2Extractor,
"7z": SevenZipExtractor, # <Added version="2.4.0"/>
"lz4": Lz4Extractor, # <Added version="2.4.0"/>
}
```
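For reference, the "magic numbers" mentioned above are just leading file bytes; a zstd frame, for instance, starts with `28 B5 2F FD`. A minimal sketch of that kind of sniffing (not `datasets`' actual implementation):

```python
# Sketch of magic-byte sniffing, the mechanism the extractor table is keyed on.
# A zstd frame begins with the 4-byte magic 0x28 0xB5 0x2F 0xFD.
ZSTD_MAGIC = b"\x28\xb5\x2f\xfd"

def looks_like_zstd(path: str) -> bool:
    with open(path, "rb") as f:
        return f.read(len(ZSTD_MAGIC)) == ZSTD_MAGIC
```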
### Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset(path="/cerebras/SlimPajama-627B")
```
This alone should trigger the error on any system that does not have zstandard pip installed.
### Expected behavior
This repository (which is encoded in json format, not zstandard) should check whether zstandard is installed before defaulting to it. Additionally, using zstandard should not use more than 3x the required space that other extraction mechanisms use.
### Environment info
- `datasets` version: 2.17.1
- Platform: Linux-6.5.0-18-generic-x86_64-with-glibc2.35
- Python version: 3.12.0
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.10.0 | closed | 2024-02-22T17:39:27Z | 2024-03-07T14:54:16Z | https://github.com/huggingface/datasets/issues/6689 | [] | ElleLeonne | 4 |
HIT-SCIR/ltp | nlp | 531 | TensorFlow Serving support | Right now the LTP models are PyTorch models. Will TensorFlow models be released in the future, to make deployment with TensorFlow Serving convenient? | open | 2021-08-05T09:08:08Z | 2023-01-15T13:48:00Z | https://github.com/HIT-SCIR/ltp/issues/531 | [] | shunshunyin | 1 |
biosustain/potion | sqlalchemy | 122 | Missing doc about validation process overloading ? | Hello,
I can't find in the documentation how to overload the validation process. My goal is to add validation that cannot be done with jsonschema.
Invented example: POST /users
Validation error: Maximum created user count reached for the day
Here a dynamic callback / some Python code checks somewhere (database, etc.) whether the maximum created-user count has been reached.
Is it possible? If yes, can you tell me how? I can make a pull request to update the docs.
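I don't know Potion's hook for this either, but the generic pattern is to run business-rule checks after schema validation and raise a validation-style error. A framework-agnostic sketch (not Potion's actual API; the quota rule is the invented example above):

```python
# Generic post-schema validation hook: each check runs against the parsed
# payload and raises ValueError on violation. The quota rule is invented.
MAX_USERS_PER_DAY = 100

def check_daily_user_quota(payload, created_today):
    # This is where dynamic state (database counters, etc.) would be consulted.
    if created_today >= MAX_USERS_PER_DAY:
        raise ValueError("Maximum created user count reached for the day")

def create_user(payload, created_today, checks=(check_daily_user_quota,)):
    for check in checks:
        check(payload, created_today)
    return {"created": payload["name"]}
```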
bux. | closed | 2017-05-18T15:00:31Z | 2017-05-22T10:56:06Z | https://github.com/biosustain/potion/issues/122 | [] | buxx | 2 |
encode/databases | asyncio | 183 | sqlite: BEGIN IMMEDIATE/EXCLUSIVE transactions | It would be beneficial to allow the different transaction types for [sqlite](https://www.sqlite.org/lang_transaction.html). | closed | 2020-04-01T11:02:54Z | 2020-04-01T11:09:31Z | https://github.com/encode/databases/issues/183 | [] | teucer | 1 |
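For reference, the sqlite transaction modes linked above can be exercised with the stdlib `sqlite3` module by issuing the BEGIN statement explicitly (a sketch of the underlying SQL, not this library's API):

```python
import sqlite3

# Open in autocommit mode so we control transactions explicitly,
# then start an IMMEDIATE transaction (takes the write lock up front).
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("BEGIN IMMEDIATE")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (1)")
conn.execute("COMMIT")
print(conn.execute("SELECT x FROM t").fetchone())  # (1,)
```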
kennethreitz/responder | flask | 412 | Please use uvloop>=0.14.0 to support python 3.8.x | As of Python 3.8, `sys.set_coroutine_wrapper` has been removed, so while responder depends on an old uvloop version, it doesn't run on Python 3.8.0 or later either. Please upgrade the dependency. | closed | 2019-12-15T03:02:59Z | 2019-12-15T15:34:13Z | https://github.com/kennethreitz/responder/issues/412 | [] | theoremoon | 1 |
roboflow/supervision | deep-learning | 787 | [ByteTrack] - change the names and documenting input arguments | ### Description
Change the names of [input arguments](https://github.com/roboflow/supervision/blob/3024ddca83ad837651e59d040e2a5ac5b2b4f00f/supervision/tracker/byte_tracker/core.py#L182) to be more intuitive, and document them better so that users can easily tune them while optimizing ByteTrack. Let's also consider exposing more parameters, especially those that are available in the original implementation.
Document how changes in the values of individual arguments affect the behavior of ByteTrack.
### Reference
- [ByteTrack paper](https://arxiv.org/pdf/2110.06864.pdf)
- [ByteTrack code](https://github.com/ifzhang/ByteTrack)
### Additional
- Note: Please share a Google Colab with minimal code to test the new feature. We know it's additional work, but it will speed up the review process. The reviewer must test each change. Setting up a local environment to do this is time-consuming. Please ensure that Google Colab can be accessed without any issues (make it public). Thank you! 🙏🏻 | closed | 2024-01-26T12:19:14Z | 2024-02-29T11:57:13Z | https://github.com/roboflow/supervision/issues/787 | [
"enhancement",
"good first issue",
"api:tracker",
"Q1.2024"
] | SkalskiP | 5 |
statsmodels/statsmodels | data-science | 9,012 | REF/ENH: jackknife - what should we add? | I'm reading up a bit on jackknife, mainly for looo residuals and looo changes in params. #9008
(The main reason that I never looked at it more carefully is that we can use bootstrap as more general version of jackknife when we are willing to use repeated full estimation or cross-validation for other use cases. However, in many cases jackknife can be used without repeated estimation loops.)
some possible uses
- bias correction for parameter estimates
- I don't know whether it's worth it
- jackknife cov_params
- very close to White sandwich cov_type, e.g. Weber 1986 difference in OLS is only small sample correction n / (n - k)
- #8461 cluster robust cov_type and jackknife with leave one groupcluster out (I did not look at references for leave one group out)
- We can stick to White sandwiches and look only at jackknife if it adds anything to that.
- other usages (main target for now)
- diagnostic (looo)
- conformal prediction intervals #9005
Weber, N. C. “The Jackknife and Heteroskedasticity: Consistent Variance Estimation for Regression Models.” Economics Letters 20, no. 2 (January 1, 1986): 161–63. https://doi.org/10.1016/0165-1765(86)90165-5.
Weber has bias corrected cov_params at the end, which IIRC is HC3 (residuals are corrected by diag of hatmatrix)
| open | 2023-09-29T15:03:16Z | 2023-09-29T15:08:43Z | https://github.com/statsmodels/statsmodels/issues/9012 | [
"type-enh",
"comp-base",
"topic-diagnostic"
] | josef-pkt | 0 |
graphql-python/graphene | graphql | 1,241 | Allow for curbing logs while testing | **Is your feature request related to a problem? Please describe.**
Yes. I am writing tests for my graphene app and need to test whether certain queries produce errors. When `client.execute(...)` is run, it logs an annoying `graphql.error.located_error.GraphQLLocatedError`. I cannot even use `pytest.raises` (or `self.assertRaises` in unittest) because the error is only logged, never actually raised. It even logs the entire traceback, which is even more annoying.
**Describe the solution you'd like**
It'd be much better if `client.execute(...)` accepted a log-level flag, or if there were some other way to disable logging of those specific errors.
**Describe alternatives you've considered**
I have considered disabling all error logs from the test runner (with `logging.disable(logging.ERROR)`). I am going with this for now but I don't want to miss out on other important logs which may help in surfacing faulty tests.
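A narrower alternative to a global `logging.disable` is to silence just the offending logger for the duration of a test. A sketch; the logger name is whatever appears in your traceback (the graphql-core name below is an assumption):

```python
import logging
from contextlib import contextmanager

@contextmanager
def silenced(logger_name, level=logging.CRITICAL):
    """Temporarily raise one logger's threshold so its error logs are swallowed."""
    logger = logging.getLogger(logger_name)
    previous = logger.level
    logger.setLevel(level)
    try:
        yield
    finally:
        logger.setLevel(previous)

# hypothetical usage in a test:
#     with silenced("graphql.execution.utils"):
#         result = client.execute(bad_query)
```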
| open | 2020-08-01T16:35:35Z | 2020-08-01T16:36:28Z | https://github.com/graphql-python/graphene/issues/1241 | [
"✨ enhancement"
] | nikochiko | 0 |
jupyterlab/jupyter-ai | jupyter | 509 | Update Cohere model IDs | Hi there,
I'm trying to use a Cohere model, but when I ask a question in the JupyterLab AI window (Jupyternaut) I seem to be getting:
```python
Traceback (most recent call last):
File "~/mambaforge/envs/jupyter-ai/lib/python3.11/site-packages/jupyter_ai/chat_handlers/base.py", line 45, in on_message
await self.process_message(message)
File "~/mambaforge/envs/jupyter-ai/lib/python3.11/site-packages/jupyter_ai/chat_handlers/default.py", line 88, in process_message
response = await self.llm_chain.apredict(input=message.body, stop=["\nHuman:"])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/mambaforge/envs/jupyter-ai/lib/python3.11/site-packages/langchain/chains/llm.py", line 274, in apredict
return (await self.acall(kwargs, callbacks=callbacks))[self.output_key]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/mambaforge/envs/jupyter-ai/lib/python3.11/site-packages/langchain/chains/base.py", line 377, in acall
raise e
File "~/mambaforge/envs/jupyter-ai/lib/python3.11/site-packages/langchain/chains/base.py", line 371, in acall
await self._acall(inputs, run_manager=run_manager)
File "~/mambaforge/envs/jupyter-ai/lib/python3.11/site-packages/langchain/chains/llm.py", line 239, in _acall
response = await self.agenerate([inputs], run_manager=run_manager)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/mambaforge/envs/jupyter-ai/lib/python3.11/site-packages/langchain/chains/llm.py", line 117, in agenerate
return await self.llm.agenerate_prompt(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/mambaforge/envs/jupyter-ai/lib/python3.11/site-packages/langchain/llms/base.py", line 507, in agenerate_prompt
return await self.agenerate(
^^^^^^^^^^^^^^^^^^^^^
File "~/mambaforge/envs/jupyter-ai/lib/python3.11/site-packages/langchain/llms/base.py", line 813, in agenerate
output = await self._agenerate_helper(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/mambaforge/envs/jupyter-ai/lib/python3.11/site-packages/langchain/llms/base.py", line 701, in _agenerate_helper
raise e
File "~/mambaforge/envs/jupyter-ai/lib/python3.11/site-packages/langchain/llms/base.py", line 688, in _agenerate_helper
await self._agenerate(
File "~/mambaforge/envs/jupyter-ai/lib/python3.11/site-packages/langchain/llms/base.py", line 1064, in _agenerate
else await self._acall(prompt, stop=stop, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/mambaforge/envs/jupyter-ai/lib/python3.11/site-packages/jupyter_ai_magics/providers.py", line 320, in _acall
return await self._call_in_executor(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/mambaforge/envs/jupyter-ai/lib/python3.11/site-packages/jupyter_ai_magics/providers.py", line 205, in _call_in_executor
return await loop.run_in_executor(executor, _call_with_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/mambaforge/envs/jupyter-ai/lib/python3.11/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/mambaforge/envs/jupyter-ai/lib/python3.11/site-packages/langchain/llms/cohere.py", line 211, in _call
response = completion_with_retry(
^^^^^^^^^^^^^^^^^^^^^^
File "~/mambaforge/envs/jupyter-ai/lib/python3.11/site-packages/langchain/llms/cohere.py", line 51, in completion_with_retry
return _completion_with_retry(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/mambaforge/envs/jupyter-ai/lib/python3.11/site-packages/tenacity/__init__.py", line 289, in wrapped_f
return self(f, *args, **kw)
^^^^^^^^^^^^^^^^^^^^
File "~/mambaforge/envs/jupyter-ai/lib/python3.11/site-packages/tenacity/__init__.py", line 379, in __call__
do = self.iter(retry_state=retry_state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/mambaforge/envs/jupyter-ai/lib/python3.11/site-packages/tenacity/__init__.py", line 325, in iter
raise retry_exc.reraise()
^^^^^^^^^^^^^^^^^^^
File "~/mambaforge/envs/jupyter-ai/lib/python3.11/site-packages/tenacity/__init__.py", line 158, in reraise
raise self.last_attempt.result()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/mambaforge/envs/jupyter-ai/lib/python3.11/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "~/mambaforge/envs/jupyter-ai/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "~/mambaforge/envs/jupyter-ai/lib/python3.11/site-packages/tenacity/__init__.py", line 382, in __call__
result = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "~/mambaforge/envs/jupyter-ai/lib/python3.11/site-packages/langchain/llms/cohere.py", line 49, in _completion_with_retry
return llm.client.generate(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/mambaforge/envs/jupyter-ai/lib/python3.11/site-packages/cohere/client.py", line 221, in generate
response = self._request(cohere.GENERATE_URL, json=json_body, stream=stream)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/mambaforge/envs/jupyter-ai/lib/python3.11/site-packages/cohere/client.py", line 945, in _request
self._check_response(json_response, response.headers, response.status_code)
File "~/mambaforge/envs/jupyter-ai/lib/python3.11/site-packages/cohere/client.py", line 887, in _check_response
raise CohereAPIError(
cohere.error.CohereAPIError: model not found, make sure the correct model ID was used and that you have access to the model.
```
In case helpful, this works:
```python
import cohere
co = cohere.Client('KEY')
message = "Hello World!"
response = co.chat(
message,
model="command",
temperature=0.9
)
answer = response.text
```
Many thanks for any help, and this amazing lib!
## Context
- Operating System and version: Linux Ubuntu 22.04
- Browser and version: Firefox 120.0
<details><summary>Troubleshoot Output</summary>
<pre>
pip list:
Package Version
------------------------- ------------
aiohttp 3.9.1
aiosignal 1.3.1
aiosqlite 0.19.0
annotated-types 0.6.0
anyio 3.7.1
argon2-cffi 23.1.0
argon2-cffi-bindings 21.2.0
arrow 1.3.0
asttokens 2.4.1
async-lru 2.0.4
attrs 23.1.0
Babel 2.13.1
backoff 2.2.1
beautifulsoup4 4.12.2
bleach 6.1.0
Brotli 1.1.0
cached-property 1.5.2
certifi 2023.11.17
cffi 1.16.0
charset-normalizer 3.3.2
click 8.1.7
cloudpickle 3.0.0
cohere 4.37
comm 0.1.4
dask 2023.12.0
dataclasses-json 0.6.3
debugpy 1.8.0
decorator 5.1.1
deepmerge 1.1.0
defusedxml 0.7.1
distributed 2023.12.0
entrypoints 0.4
exceptiongroup 1.2.0
executing 2.0.1
faiss-cpu 1.7.4
fastavro 1.9.0
fastjsonschema 2.19.0
fqdn 1.5.1
frozenlist 1.4.0
fsspec 2023.12.1
gpt4all 2.0.2
greenlet 3.0.1
idna 3.6
importlib-metadata 6.11.0
importlib-resources 6.1.1
ipykernel 6.26.0
ipython 8.18.1
isoduration 20.11.0
jedi 0.19.1
Jinja2 3.1.2
json5 0.9.14
jsonpatch 1.33
jsonpath-ng 1.6.0
jsonpointer 2.4
jsonschema 4.20.0
jsonschema-specifications 2023.11.2
jupyter_ai 2.6.0
jupyter_ai_magics 2.6.0
jupyter_client 8.6.0
jupyter_core 5.5.0
jupyter-events 0.9.0
jupyter-lsp 2.2.1
jupyter_server 2.12.0
jupyter_server_terminals 0.4.4
jupyterlab 4.0.9
jupyterlab_pygments 0.3.0
jupyterlab_server 2.25.2
langchain 0.0.318
langsmith 0.0.69
locket 1.0.0
MarkupSafe 2.1.3
marshmallow 3.20.1
matplotlib-inline 0.1.6
mistune 3.0.2
msgpack 1.0.7
multidict 6.0.4
mypy-extensions 1.0.0
nbclient 0.8.0
nbconvert 7.12.0
nbformat 5.9.2
nest-asyncio 1.5.8
notebook_shim 0.2.3
numpy 1.26.2
openai 0.28.1
overrides 7.4.0
packaging 23.2
pandocfilters 1.5.0
parso 0.8.3
partd 1.4.1
pexpect 4.8.0
pickleshare 0.7.5
pip 23.3.1
pkgutil_resolve_name 1.3.10
platformdirs 4.1.0
ply 3.11
prometheus-client 0.19.0
prompt-toolkit 3.0.41
psutil 5.9.5
ptyprocess 0.7.0
pure-eval 0.2.2
pycparser 2.21
pydantic 2.5.2
pydantic_core 2.14.5
Pygments 2.17.2
PySocks 1.7.1
python-dateutil 2.8.2
python-json-logger 2.0.7
pytz 2023.3.post1
PyYAML 6.0.1
pyzmq 25.1.2
referencing 0.31.1
regex 2023.10.3
requests 2.31.0
rfc3339-validator 0.1.4
rfc3986-validator 0.1.1
rpds-py 0.13.2
Send2Trash 1.8.2
setuptools 68.2.2
six 1.16.0
sniffio 1.3.0
sortedcontainers 2.4.0
soupsieve 2.5
SQLAlchemy 2.0.23
stack-data 0.6.2
tblib 3.0.0
tenacity 8.2.3
terminado 0.18.0
tiktoken 0.5.2
tinycss2 1.2.1
tomli 2.0.1
toolz 0.12.0
tornado 6.3.3
tqdm 4.66.1
traitlets 5.14.0
types-python-dateutil 2.8.19.14
typing_extensions 4.8.0
typing-inspect 0.9.0
typing-utils 0.1.0
uri-template 1.3.0
urllib3 2.1.0
wcwidth 0.2.12
webcolors 1.13
webencodings 0.5.1
websocket-client 1.7.0
wheel 0.42.0
yarl 1.9.4
zict 3.0.0
zipp 3.17.0
conda list:
#
# Name Version Build Channel
_libgcc_mutex 0.1 conda_forge conda-forge
_openmp_mutex 4.5 2_gnu conda-forge
aiohttp 3.9.1 pypi_0 pypi
aiosignal 1.3.1 pypi_0 pypi
aiosqlite 0.19.0 pypi_0 pypi
annotated-types 0.6.0 pypi_0 pypi
anyio 3.7.1 pypi_0 pypi
argon2-cffi 23.1.0 pyhd8ed1ab_0 conda-forge
argon2-cffi-bindings 21.2.0 py311h459d7ec_4 conda-forge
arrow 1.3.0 pyhd8ed1ab_0 conda-forge
asttokens 2.4.1 pyhd8ed1ab_0 conda-forge
async-lru 2.0.4 pyhd8ed1ab_0 conda-forge
attrs 23.1.0 pyh71513ae_1 conda-forge
babel 2.13.1 pyhd8ed1ab_0 conda-forge
backoff 2.2.1 pypi_0 pypi
beautifulsoup4 4.12.2 pyha770c72_0 conda-forge
bleach 6.1.0 pyhd8ed1ab_0 conda-forge
brotli-python 1.1.0 py311hb755f60_1 conda-forge
bzip2 1.0.8 hd590300_5 conda-forge
ca-certificates 2023.11.17 hbcca054_0 conda-forge
cached-property 1.5.2 hd8ed1ab_1 conda-forge
cached_property 1.5.2 pyha770c72_1 conda-forge
certifi 2023.11.17 pyhd8ed1ab_0 conda-forge
cffi 1.16.0 py311hb3a22ac_0 conda-forge
charset-normalizer 3.3.2 pyhd8ed1ab_0 conda-forge
click 8.1.7 pypi_0 pypi
cloudpickle 3.0.0 pypi_0 pypi
cohere 4.37 pypi_0 pypi
comm 0.1.4 pyhd8ed1ab_0 conda-forge
dask 2023.12.0 pypi_0 pypi
dataclasses-json 0.6.3 pypi_0 pypi
debugpy 1.8.0 py311hb755f60_1 conda-forge
decorator 5.1.1 pyhd8ed1ab_0 conda-forge
deepmerge 1.1.0 pypi_0 pypi
defusedxml 0.7.1 pyhd8ed1ab_0 conda-forge
distributed 2023.12.0 pypi_0 pypi
entrypoints 0.4 pyhd8ed1ab_0 conda-forge
exceptiongroup 1.2.0 pyhd8ed1ab_0 conda-forge
executing 2.0.1 pyhd8ed1ab_0 conda-forge
faiss-cpu 1.7.4 pypi_0 pypi
fastavro 1.9.0 pypi_0 pypi
fqdn 1.5.1 pyhd8ed1ab_0 conda-forge
frozenlist 1.4.0 pypi_0 pypi
fsspec 2023.12.1 pypi_0 pypi
gpt4all 2.0.2 pypi_0 pypi
greenlet 3.0.1 pypi_0 pypi
idna 3.6 pyhd8ed1ab_0 conda-forge
importlib-metadata 6.11.0 pypi_0 pypi
importlib_metadata 7.0.0 hd8ed1ab_0 conda-forge
importlib_resources 6.1.1 pyhd8ed1ab_0 conda-forge
ipykernel 6.26.0 pyhf8b6a83_0 conda-forge
ipython 8.18.1 pyh707e725_3 conda-forge
isoduration 20.11.0 pyhd8ed1ab_0 conda-forge
jedi 0.19.1 pyhd8ed1ab_0 conda-forge
jinja2 3.1.2 pyhd8ed1ab_1 conda-forge
json5 0.9.14 pyhd8ed1ab_0 conda-forge
jsonpatch 1.33 pypi_0 pypi
jsonpath-ng 1.6.0 pypi_0 pypi
jsonpointer 2.4 py311h38be061_3 conda-forge
jsonschema 4.20.0 pyhd8ed1ab_0 conda-forge
jsonschema-specifications 2023.11.2 pyhd8ed1ab_0 conda-forge
jsonschema-with-format-nongpl 4.20.0 pyhd8ed1ab_0 conda-forge
jupyter-ai 2.6.0 pypi_0 pypi
jupyter-ai-magics 2.6.0 pypi_0 pypi
jupyter-lsp 2.2.1 pyhd8ed1ab_0 conda-forge
jupyter_client 8.6.0 pyhd8ed1ab_0 conda-forge
jupyter_core 5.5.0 py311h38be061_0 conda-forge
jupyter_events 0.9.0 pyhd8ed1ab_0 conda-forge
jupyter_server 2.12.0 pyhd8ed1ab_0 conda-forge
jupyter_server_terminals 0.4.4 pyhd8ed1ab_1 conda-forge
jupyterlab 4.0.9 pyhd8ed1ab_0 conda-forge
jupyterlab_pygments 0.3.0 pyhd8ed1ab_0 conda-forge
jupyterlab_server 2.25.2 pyhd8ed1ab_0 conda-forge
langchain 0.0.318 pypi_0 pypi
langsmith 0.0.69 pypi_0 pypi
ld_impl_linux-64 2.40 h41732ed_0 conda-forge
libexpat 2.5.0 hcb278e6_1 conda-forge
libffi 3.4.2 h7f98852_5 conda-forge
libgcc-ng 13.2.0 h807b86a_3 conda-forge
libgomp 13.2.0 h807b86a_3 conda-forge
libnsl 2.0.1 hd590300_0 conda-forge
libsodium 1.0.18 h36c2ea0_1 conda-forge
libsqlite 3.44.2 h2797004_0 conda-forge
libstdcxx-ng 13.2.0 h7e041cc_3 conda-forge
libuuid 2.38.1 h0b41bf4_0 conda-forge
libzlib 1.2.13 hd590300_5 conda-forge
locket 1.0.0 pypi_0 pypi
markupsafe 2.1.3 py311h459d7ec_1 conda-forge
marshmallow 3.20.1 pypi_0 pypi
matplotlib-inline 0.1.6 pyhd8ed1ab_0 conda-forge
mistune 3.0.2 pyhd8ed1ab_0 conda-forge
msgpack 1.0.7 pypi_0 pypi
multidict 6.0.4 pypi_0 pypi
mypy-extensions 1.0.0 pypi_0 pypi
nbclient 0.8.0 pyhd8ed1ab_0 conda-forge
nbconvert-core 7.12.0 pyhd8ed1ab_0 conda-forge
nbformat 5.9.2 pyhd8ed1ab_0 conda-forge
ncurses 6.4 h59595ed_2 conda-forge
nest-asyncio 1.5.8 pyhd8ed1ab_0 conda-forge
notebook-shim 0.2.3 pyhd8ed1ab_0 conda-forge
numpy 1.26.2 pypi_0 pypi
openai 0.28.1 pypi_0 pypi
openssl 3.2.0 hd590300_1 conda-forge
overrides 7.4.0 pyhd8ed1ab_0 conda-forge
packaging 23.2 pyhd8ed1ab_0 conda-forge
pandocfilters 1.5.0 pyhd8ed1ab_0 conda-forge
parso 0.8.3 pyhd8ed1ab_0 conda-forge
partd 1.4.1 pypi_0 pypi
pexpect 4.8.0 pyh1a96a4e_2 conda-forge
pickleshare 0.7.5 py_1003 conda-forge
pip 23.3.1 pyhd8ed1ab_0 conda-forge
pkgutil-resolve-name 1.3.10 pyhd8ed1ab_1 conda-forge
platformdirs 4.1.0 pyhd8ed1ab_0 conda-forge
ply 3.11 pypi_0 pypi
prometheus_client 0.19.0 pyhd8ed1ab_0 conda-forge
prompt-toolkit 3.0.41 pyha770c72_0 conda-forge
psutil 5.9.5 py311h459d7ec_1 conda-forge
ptyprocess 0.7.0 pyhd3deb0d_0 conda-forge
pure_eval 0.2.2 pyhd8ed1ab_0 conda-forge
pycparser 2.21 pyhd8ed1ab_0 conda-forge
pydantic 2.5.2 pypi_0 pypi
pydantic-core 2.14.5 pypi_0 pypi
pygments 2.17.2 pyhd8ed1ab_0 conda-forge
pysocks 1.7.1 pyha2e5f31_6 conda-forge
python 3.11.6 hab00c5b_0_cpython conda-forge
python-dateutil 2.8.2 pyhd8ed1ab_0 conda-forge
python-fastjsonschema 2.19.0 pyhd8ed1ab_0 conda-forge
python-json-logger 2.0.7 pyhd8ed1ab_0 conda-forge
python_abi 3.11 4_cp311 conda-forge
pytz 2023.3.post1 pyhd8ed1ab_0 conda-forge
pyyaml 6.0.1 py311h459d7ec_1 conda-forge
pyzmq 25.1.2 py311h34ded2d_0 conda-forge
readline 8.2 h8228510_1 conda-forge
referencing 0.31.1 pyhd8ed1ab_0 conda-forge
regex 2023.10.3 pypi_0 pypi
requests 2.31.0 pyhd8ed1ab_0 conda-forge
rfc3339-validator 0.1.4 pyhd8ed1ab_0 conda-forge
rfc3986-validator 0.1.1 pyh9f0ad1d_0 conda-forge
rpds-py 0.13.2 py311h46250e7_0 conda-forge
send2trash 1.8.2 pyh41d4057_0 conda-forge
setuptools 68.2.2 pyhd8ed1ab_0 conda-forge
six 1.16.0 pyh6c4a22f_0 conda-forge
sniffio 1.3.0 pyhd8ed1ab_0 conda-forge
sortedcontainers 2.4.0 pypi_0 pypi
soupsieve 2.5 pyhd8ed1ab_1 conda-forge
sqlalchemy 2.0.23 pypi_0 pypi
stack_data 0.6.2 pyhd8ed1ab_0 conda-forge
tblib 3.0.0 pypi_0 pypi
tenacity 8.2.3 pypi_0 pypi
terminado 0.18.0 pyh0d859eb_0 conda-forge
tiktoken 0.5.2 pypi_0 pypi
tinycss2 1.2.1 pyhd8ed1ab_0 conda-forge
tk 8.6.13 noxft_h4845f30_101 conda-forge
tomli 2.0.1 pyhd8ed1ab_0 conda-forge
toolz 0.12.0 pypi_0 pypi
tornado 6.3.3 py311h459d7ec_1 conda-forge
tqdm 4.66.1 pypi_0 pypi
traitlets 5.14.0 pyhd8ed1ab_0 conda-forge
types-python-dateutil 2.8.19.14 pyhd8ed1ab_0 conda-forge
typing-extensions 4.8.0 hd8ed1ab_0 conda-forge
typing-inspect 0.9.0 pypi_0 pypi
typing_extensions 4.8.0 pyha770c72_0 conda-forge
typing_utils 0.1.0 pyhd8ed1ab_0 conda-forge
tzdata 2023c h71feb2d_0 conda-forge
uri-template 1.3.0 pyhd8ed1ab_0 conda-forge
urllib3 2.1.0 pyhd8ed1ab_0 conda-forge
wcwidth 0.2.12 pyhd8ed1ab_0 conda-forge
webcolors 1.13 pyhd8ed1ab_0 conda-forge
webencodings 0.5.1 pyhd8ed1ab_2 conda-forge
websocket-client 1.7.0 pyhd8ed1ab_0 conda-forge
wheel 0.42.0 pyhd8ed1ab_0 conda-forge
xz 5.2.6 h166bdaf_0 conda-forge
yaml 0.2.5 h7f98852_2 conda-forge
yarl 1.9.4 pypi_0 pypi
zeromq 4.3.5 h59595ed_0 conda-forge
zict 3.0.0 pypi_0 pypi
zipp 3.17.0 pyhd8ed1ab_0 conda-forge
conda env:
name: jupyter-ai
channels:
- conda-forge
dependencies:
- _libgcc_mutex=0.1=conda_forge
- _openmp_mutex=4.5=2_gnu
- argon2-cffi=23.1.0=pyhd8ed1ab_0
- argon2-cffi-bindings=21.2.0=py311h459d7ec_4
- arrow=1.3.0=pyhd8ed1ab_0
- asttokens=2.4.1=pyhd8ed1ab_0
- async-lru=2.0.4=pyhd8ed1ab_0
- attrs=23.1.0=pyh71513ae_1
- babel=2.13.1=pyhd8ed1ab_0
- beautifulsoup4=4.12.2=pyha770c72_0
- bleach=6.1.0=pyhd8ed1ab_0
- brotli-python=1.1.0=py311hb755f60_1
- bzip2=1.0.8=hd590300_5
- ca-certificates=2023.11.17=hbcca054_0
- cached-property=1.5.2=hd8ed1ab_1
- cached_property=1.5.2=pyha770c72_1
- certifi=2023.11.17=pyhd8ed1ab_0
- cffi=1.16.0=py311hb3a22ac_0
- charset-normalizer=3.3.2=pyhd8ed1ab_0
- comm=0.1.4=pyhd8ed1ab_0
- debugpy=1.8.0=py311hb755f60_1
- decorator=5.1.1=pyhd8ed1ab_0
- defusedxml=0.7.1=pyhd8ed1ab_0
- entrypoints=0.4=pyhd8ed1ab_0
- exceptiongroup=1.2.0=pyhd8ed1ab_0
- executing=2.0.1=pyhd8ed1ab_0
- fqdn=1.5.1=pyhd8ed1ab_0
- idna=3.6=pyhd8ed1ab_0
- importlib_metadata=7.0.0=hd8ed1ab_0
- importlib_resources=6.1.1=pyhd8ed1ab_0
- ipykernel=6.26.0=pyhf8b6a83_0
- ipython=8.18.1=pyh707e725_3
- isoduration=20.11.0=pyhd8ed1ab_0
- jedi=0.19.1=pyhd8ed1ab_0
- jinja2=3.1.2=pyhd8ed1ab_1
- json5=0.9.14=pyhd8ed1ab_0
- jsonpointer=2.4=py311h38be061_3
- jsonschema=4.20.0=pyhd8ed1ab_0
- jsonschema-specifications=2023.11.2=pyhd8ed1ab_0
- jsonschema-with-format-nongpl=4.20.0=pyhd8ed1ab_0
- jupyter-lsp=2.2.1=pyhd8ed1ab_0
- jupyter_client=8.6.0=pyhd8ed1ab_0
- jupyter_core=5.5.0=py311h38be061_0
- jupyter_events=0.9.0=pyhd8ed1ab_0
- jupyter_server=2.12.0=pyhd8ed1ab_0
- jupyter_server_terminals=0.4.4=pyhd8ed1ab_1
- jupyterlab=4.0.9=pyhd8ed1ab_0
- jupyterlab_pygments=0.3.0=pyhd8ed1ab_0
- jupyterlab_server=2.25.2=pyhd8ed1ab_0
- ld_impl_linux-64=2.40=h41732ed_0
- libexpat=2.5.0=hcb278e6_1
- libffi=3.4.2=h7f98852_5
- libgcc-ng=13.2.0=h807b86a_3
- libgomp=13.2.0=h807b86a_3
- libnsl=2.0.1=hd590300_0
- libsodium=1.0.18=h36c2ea0_1
- libsqlite=3.44.2=h2797004_0
- libstdcxx-ng=13.2.0=h7e041cc_3
- libuuid=2.38.1=h0b41bf4_0
- libzlib=1.2.13=hd590300_5
- markupsafe=2.1.3=py311h459d7ec_1
- matplotlib-inline=0.1.6=pyhd8ed1ab_0
- mistune=3.0.2=pyhd8ed1ab_0
- nbclient=0.8.0=pyhd8ed1ab_0
- nbconvert-core=7.12.0=pyhd8ed1ab_0
- nbformat=5.9.2=pyhd8ed1ab_0
- ncurses=6.4=h59595ed_2
- nest-asyncio=1.5.8=pyhd8ed1ab_0
- notebook-shim=0.2.3=pyhd8ed1ab_0
- openssl=3.2.0=hd590300_1
- overrides=7.4.0=pyhd8ed1ab_0
- packaging=23.2=pyhd8ed1ab_0
- pandocfilters=1.5.0=pyhd8ed1ab_0
- parso=0.8.3=pyhd8ed1ab_0
- pexpect=4.8.0=pyh1a96a4e_2
- pickleshare=0.7.5=py_1003
- pip=23.3.1=pyhd8ed1ab_0
- pkgutil-resolve-name=1.3.10=pyhd8ed1ab_1
- platformdirs=4.1.0=pyhd8ed1ab_0
- prometheus_client=0.19.0=pyhd8ed1ab_0
- prompt-toolkit=3.0.41=pyha770c72_0
- psutil=5.9.5=py311h459d7ec_1
- ptyprocess=0.7.0=pyhd3deb0d_0
- pure_eval=0.2.2=pyhd8ed1ab_0
- pycparser=2.21=pyhd8ed1ab_0
- pygments=2.17.2=pyhd8ed1ab_0
- pysocks=1.7.1=pyha2e5f31_6
- python=3.11.6=hab00c5b_0_cpython
- python-dateutil=2.8.2=pyhd8ed1ab_0
- python-fastjsonschema=2.19.0=pyhd8ed1ab_0
- python-json-logger=2.0.7=pyhd8ed1ab_0
- python_abi=3.11=4_cp311
- pytz=2023.3.post1=pyhd8ed1ab_0
- pyyaml=6.0.1=py311h459d7ec_1
- pyzmq=25.1.2=py311h34ded2d_0
- readline=8.2=h8228510_1
- referencing=0.31.1=pyhd8ed1ab_0
- requests=2.31.0=pyhd8ed1ab_0
- rfc3339-validator=0.1.4=pyhd8ed1ab_0
- rfc3986-validator=0.1.1=pyh9f0ad1d_0
- rpds-py=0.13.2=py311h46250e7_0
- send2trash=1.8.2=pyh41d4057_0
- setuptools=68.2.2=pyhd8ed1ab_0
- six=1.16.0=pyh6c4a22f_0
- sniffio=1.3.0=pyhd8ed1ab_0
- soupsieve=2.5=pyhd8ed1ab_1
- stack_data=0.6.2=pyhd8ed1ab_0
- terminado=0.18.0=pyh0d859eb_0
- tinycss2=1.2.1=pyhd8ed1ab_0
- tk=8.6.13=noxft_h4845f30_101
- tomli=2.0.1=pyhd8ed1ab_0
- tornado=6.3.3=py311h459d7ec_1
- traitlets=5.14.0=pyhd8ed1ab_0
- types-python-dateutil=2.8.19.14=pyhd8ed1ab_0
- typing-extensions=4.8.0=hd8ed1ab_0
- typing_extensions=4.8.0=pyha770c72_0
- typing_utils=0.1.0=pyhd8ed1ab_0
- tzdata=2023c=h71feb2d_0
- uri-template=1.3.0=pyhd8ed1ab_0
- urllib3=2.1.0=pyhd8ed1ab_0
- wcwidth=0.2.12=pyhd8ed1ab_0
- webcolors=1.13=pyhd8ed1ab_0
- webencodings=0.5.1=pyhd8ed1ab_2
- websocket-client=1.7.0=pyhd8ed1ab_0
- wheel=0.42.0=pyhd8ed1ab_0
- xz=5.2.6=h166bdaf_0
- yaml=0.2.5=h7f98852_2
- zeromq=4.3.5=h59595ed_0
- zipp=3.17.0=pyhd8ed1ab_0
- pip:
- aiohttp==3.9.1
- aiosignal==1.3.1
- aiosqlite==0.19.0
- annotated-types==0.6.0
- anyio==3.7.1
- backoff==2.2.1
- click==8.1.7
- cloudpickle==3.0.0
- cohere==4.37
- dask==2023.12.0
- dataclasses-json==0.6.3
- deepmerge==1.1.0
- distributed==2023.12.0
- faiss-cpu==1.7.4
- fastavro==1.9.0
- frozenlist==1.4.0
- fsspec==2023.12.1
- gpt4all==2.0.2
- greenlet==3.0.1
- importlib-metadata==6.11.0
- jsonpatch==1.33
- jsonpath-ng==1.6.0
- jupyter-ai==2.6.0
- jupyter-ai-magics==2.6.0
- langchain==0.0.318
- langsmith==0.0.69
- locket==1.0.0
- marshmallow==3.20.1
- msgpack==1.0.7
- multidict==6.0.4
- mypy-extensions==1.0.0
- numpy==1.26.2
- openai==0.28.1
- partd==1.4.1
- ply==3.11
- pydantic==2.5.2
- pydantic-core==2.14.5
- regex==2023.10.3
- sortedcontainers==2.4.0
- sqlalchemy==2.0.23
- tblib==3.0.0
- tenacity==8.2.3
- tiktoken==0.5.2
- toolz==0.12.0
- tqdm==4.66.1
- typing-inspect==0.9.0
- yarl==1.9.4
- zict==3.0.0
</pre>
</details>
<details><summary>Command Line Output</summary>
<pre>
> Entering new ConversationChain chain...
Prompt after formatting:
You are Jupyternaut, a conversational assistant living in JupyterLab to help users.
You are not a language model, but rather an application built on a foundation model from Cohere called medium.
You are talkative and you provide lots of specific details from the foundation model's context.
You may use Markdown to format your response.
Code blocks must be formatted in Markdown.
Math should be rendered with inline TeX markup, surrounded by $.
If you do not know the answer to a question, answer truthfully by responding that you do not know.
The following is a friendly conversation between you and a human.
Current conversation:
Human: debug
AI:
Retrying langchain.llms.cohere.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised CohereAPIError: model not found, make sure the correct model ID was used and that you have access to the model..
Retrying langchain.llms.cohere.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised CohereAPIError: model not found, make sure the correct model ID was used and that you have access to the model..
</pre>
</details>
| closed | 2023-12-07T19:11:40Z | 2024-01-18T18:10:31Z | https://github.com/jupyterlab/jupyter-ai/issues/509 | [
"bug"
] | asmith26 | 6 |
tortoise/tortoise-orm | asyncio | 1,317 | Defer function call till after transaction | I've been getting my hands dirty with tortoise recently, and I have been loving it. Porting from Django, I noticed (not sure if it is implemented already under a different name) that I could not find something similar to Django's transaction.on_commit, something I could use to run code only if the current transaction block is committed. I could easily work around this by moving the function call down, but in cases involving multiple models in different modules, that would add imports and make my code more tightly coupled.
The solution must not completely mirror Django, as tortoise is not Django, but could be something similar.
| open | 2023-01-06T06:04:53Z | 2024-12-18T08:50:44Z | https://github.com/tortoise/tortoise-orm/issues/1317 | [] | the-akpan | 2 |
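Tortoise has no built-in `on_commit`, but the semantics are small enough to emulate. Below is a dependency-free sketch; the class name and the idea of nesting it inside `async with in_transaction():` are illustrative, not Tortoise API. Callbacks run only if the block exits without an exception, i.e. only when the surrounding transaction would commit:

```python
import asyncio

class TransactionHooks:
    """Queue callbacks and fire them only if the block exits cleanly."""

    def __init__(self):
        self._callbacks = []

    def on_commit(self, callback):
        self._callbacks.append(callback)

    async def __aenter__(self):
        return self

    async def __aexit__(self, exc_type, exc, tb):
        if exc_type is None:  # an exception would roll the transaction back
            for callback in self._callbacks:
                callback()
        self._callbacks.clear()
        return False  # never swallow the exception

async def demo():
    fired = []
    async with TransactionHooks() as tx:  # nest inside in_transaction() in real use
        tx.on_commit(lambda: fired.append("committed"))
    return fired

print(asyncio.run(demo()))  # → ['committed']
```

Because `__aexit__` sees the exception before Tortoise rolls back, the hook ordering matches Django's behavior: nothing queued runs on rollback.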
Kludex/mangum | asyncio | 112 | Document that a $connect route is required for WebSockets | It is possible to send WebSocket messages without a $connect route defined, however the message event doesn't contain the necessary information to form the ASGI scope. Maybe there is a clever way to work around this that I haven't thought about, but I think documenting that a $connect route must be defined is probably the better solution. | closed | 2020-05-17T10:19:07Z | 2020-06-28T01:52:35Z | https://github.com/Kludex/mangum/issues/112 | [
"docs",
"websockets"
] | jordaneremieff | 0 |
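A hedged sketch of what that documentation could show. With the Serverless Framework, the route keys are literal strings, and `$connect` must be present so Mangum receives the connection event it needs to build the ASGI scope; the function name and handler path here are placeholders:

```yaml
functions:
  app:
    handler: asgi.handler        # wraps the ASGI app with Mangum
    events:
      - websocket:
          route: $connect        # required: carries the scope information
      - websocket:
          route: $disconnect
      - websocket:
          route: $default        # carries the actual messages
```

The equivalent three routes can also be defined directly on an API Gateway WebSocket API in SAM or the console.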
ydataai/ydata-profiling | data-science | 1,334 | Request: Include non-null count in summary statistics for variable | ### Missing functionality
A count of non-null values
### Proposed feature
Currently, percent missing is provided as part of the summary, but I'd really appreciate an exact count somewhere.
### Alternatives considered
_No response_
### Additional context
_No response_ | closed | 2023-05-19T14:35:14Z | 2023-05-25T13:09:54Z | https://github.com/ydataai/ydata-profiling/issues/1334 | [
"feature request 💬"
] | gdevenyi | 4 |
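ydata-profiling derives "% missing" from the same quantities, so until the report exposes the exact figure, it is one line of pandas on the source frame (a sketch assuming pandas, which profiling already requires):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, None, 3], "b": [None, None, "x"]})

non_null = df.notna().sum()  # exact per-column non-null counts
missing = df.isna().sum()    # the absolute complement of "% missing"

assert non_null["a"] == 2 and missing["b"] == 2
```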
gevent/gevent | asyncio | 1,246 | Add a function to list the stacks for all greenlets with a signal handler to show them? | Just like there is the backdoor package, maybe it's worth adding a function that dumps all greenlets with their names and their stack traces.
It could be triggered by a signal handler, for example SIGUSR1.
I've seen that the docker daemon does this: upon SIGUSR1 it dumps all goroutine stacks to a file with a timestamp.
This can be helpful to debug live running processes. | closed | 2018-06-28T16:10:42Z | 2018-06-28T16:22:34Z | https://github.com/gevent/gevent/issues/1246 | [] | tzickel | 2 |
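A stdlib-only sketch of the proposal (OS-thread stacks only; covering greenlets as well would mean walking live `greenlet` objects, e.g. via `gc.get_objects()`, which is gevent-specific and not shown here):

```python
import signal
import sys
import traceback

def dump_stacks(signum, frame):
    """Print a header for, and the current stack of, every OS thread."""
    for thread_id, thread_frame in sys._current_frames().items():
        print(f"--- stack of thread {thread_id} ---")
        traceback.print_stack(thread_frame)  # writes to stderr by default

# SIGUSR1 does not exist on Windows, hence the guard.
if hasattr(signal, "SIGUSR1"):
    signal.signal(signal.SIGUSR1, dump_stacks)
```

With this registered, `kill -USR1 <pid>` dumps the stacks of a live process without stopping it.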
modelscope/data-juicer | data-visualization | 600 | How should I prepare my own data format before using operators like image_caption_mapper? | ### Before Asking 在提问之前
- [x] I have read the [README](https://github.com/alibaba/data-juicer/blob/main/README.md) carefully. 我已经仔细阅读了 [README](https://github.com/alibaba/data-juicer/blob/main/README_ZH.md) 上的操作指引。
- [x] I have pulled the latest code of main branch to run again and the problem still existed. 我已经拉取了主分支上最新的代码,重新运行之后,问题仍不能解决。
### Search before asking 先搜索,再提问
- [x] I have searched the Data-Juicer [issues](https://github.com/alibaba/data-juicer/issues) and found no similar questions. 我已经在 [issue列表](https://github.com/alibaba/data-juicer/issues) 中搜索但是没有发现类似的问题。
### Question
I only have a few images on hand. How should I turn them into a valid input format? Or can I simply set dataset_path in process.yaml to the folder containing the images, or even to a single image path? I've read the introduction to the DJ data format under fmt_conversion/multimodal/, but I'm still not sure how to organize these input images.
### Additional 额外信息
_No response_ | open | 2025-02-28T06:37:29Z | 2025-03-04T03:55:49Z | https://github.com/modelscope/data-juicer/issues/600 | [
"question"
] | Crazy-JY | 7 |
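One way to wrap bare images for processing is to generate a `.jsonl` dataset and point `dataset_path` in `process.yaml` at that file rather than at the image folder. The `images` key and the `<__dj__image>` text token below are my reading of the `fmt_conversion/multimodal/` docs; verify both against your Data-Juicer version before relying on them:

```python
import json
from pathlib import Path

def build_dj_dataset(image_dir, out_path, suffixes=(".jpg", ".png")):
    """Write one JSON line per image: an `images` path list plus a
    `text` field whose special token marks where the image belongs."""
    paths = sorted(p for p in Path(image_dir).iterdir()
                   if p.suffix.lower() in suffixes)
    with open(out_path, "w", encoding="utf-8") as f:
        for p in paths:
            sample = {"images": [str(p)], "text": "<__dj__image>"}
            f.write(json.dumps(sample, ensure_ascii=False) + "\n")
    return len(paths)
```

A caption mapper would then fill each sample's `text` alongside the image token during processing.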
tortoise/tortoise-orm | asyncio | 840 | how to run tortoise orm in python shell like django manage.py shell | >>> from shell import *
>>> run_async(main())
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/tortoise/models.py", line 265, in db
return current_transaction_map[self.default_connection].get()
KeyError: None
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.9/site-packages/tortoise/__init__.py", line 679, in run_async
loop.run_until_complete(coro)
File "/usr/local/lib/python3.9/asyncio/base_events.py", line 642, in run_until_complete
return future.result()
File "/app/shell.py", line 30, in main
# await dps()
File "/app/shell.py", line 22, in dps
# await Tournament.create(name='Another Tournament')
File "/usr/local/lib/python3.9/site-packages/tortoise/models.py", line 1031, in create
db = kwargs.get("using_db") or cls._meta.db
File "/usr/local/lib/python3.9/site-packages/tortoise/models.py", line 267, in db
raise ConfigurationError("No DB associated to model")
tortoise.exceptions.ConfigurationError: No DB associated to model
>>> init()
<coroutine object init at 0x7fec6672e5c0>
>>> User.all()
<stdin>:1: RuntimeWarning: coroutine 'init' was never awaited
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
<tortoise.queryset.QuerySet object at 0x7fec674d0a90>
>>> User.all()[0].username
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'QuerySet' object is not subscriptable
>>> u=User.first()
>>> u.username
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'QuerySet' object has no attribute 'username'
>>> u
<tortoise.queryset.QuerySet object at 0x7fec66e899a0>
>>> u=User.all()
>>> for user in u:
... print(user.username)
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'QuerySet' object is not iterable
>>> u.count()
<tortoise.queryset.CountQuery object at 0x7fec6672e5c0>
>>> u
<tortoise.queryset.QuerySet object at 0x7fec674d0a90>
>>> async for user in u:
... print(user.username)
...
File "<stdin>", line 1
SyntaxError: 'async for' outside async function
>>> async def print():
... async for user in u:
... ... print(user.username)
File "<stdin>", line 3
... print(user.username)
IndentationError: expected an indented block
>>> async def print():
... async for user in u:
... print(user.username)
...
>>> print()
<coroutine object print at 0x7fec66ef04c0>
>>> s=print()
>>> s
<stdin>:1: RuntimeWarning: coroutine 'print' was never awaited
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
<coroutine object print at 0x7fec6674eb40>
>>> s = await print()
File "<stdin>", line 1
SyntaxError: 'await' outside function
>>> user = await User.first()
File "<stdin>", line 1
| closed | 2021-07-28T14:45:54Z | 2021-07-29T08:36:36Z | https://github.com/tortoise/tortoise-orm/issues/840 | [] | sakthiRathinam | 3 |
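For anyone after a `manage.py shell` equivalent: every error in the transcript above stems from calling coroutines and querysets without `await`, and the plain `python` REPL cannot `await` at top level. One sketch of a working session uses the asyncio REPL (Python 3.8+), which keeps a single running event loop alive for the whole session; the DB URL and the `app.models` module path are placeholders for your project:

```
$ python -m asyncio
>>> from tortoise import Tortoise
>>> await Tortoise.init(db_url="sqlite://db.sqlite3",
...                     modules={"models": ["app.models"]})
>>> from app.models import User
>>> users = await User.all()          # querysets must be awaited
>>> users[0].username
>>> async for u in User.all():        # async iteration also works here
...     print(u.username)
>>> await Tortoise.close_connections()
```

Inside this REPL, `await` works directly at the prompt, so the `'async for' outside async function` and `'QuerySet' object is not iterable` errors above do not occur.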
gevent/gevent | asyncio | 1,805 | Timeout causing memory access violation | * gevent version: 21.1.2
* Python version: 3.8.10
* Operating System: windows 10
### Description:
Creating gevent.Timeout objects without closing them will cause memory to be overwritten.
In this issue: https://github.com/zeromq/pyzmq/issues/1555 a variable was being overwritten and causing libzmq to assert. Using gflags to enable heap debugging I found the memory override was coming from gevent.Timeout.
```python-traceback
Exception thrown at 0x00007FFB7FA0FA35 (_corecffi.pyd) in python.exe: 0xC0000005: Access violation writing location 0x000002C4658C4F78.
```
https://www.gevent.org/api/gevent.timeout.html#gevent.Timeout
The reason for this was pyzmq was calling cancel() instead of close(). Replacing the cancel() calls with close() resolved the memory override issue.
I wanted to bring this issue up in case other encounter the issue.
### What I've run:
Enable heap debugging on python.exe: gflags.exe /p /enable python.exe /full
https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/gflags-commands
Then run the python below and attach a debugger to the python exe before the 20 sleep runs out and it will catch the memory access violation.
```python
import gevent
import random
import time
def sleep():
while True:
timeout = gevent.Timeout(seconds=5)
try:
timeout.start()
gevent.sleep(random.uniform(0.25, 2.0))
finally:
timeout.cancel()
def main():
time.sleep(20)
for x in range(10):
gevent.spawn(sleep)
while True:
timeout = gevent.Timeout(seconds=5)
try:
timeout.start()
gevent.sleep(random.uniform(0.25, 2.0))
finally:
timeout.cancel()
main()
```
| open | 2021-07-09T11:08:55Z | 2021-12-16T13:31:07Z | https://github.com/gevent/gevent/issues/1805 | [
"Type: Bug",
"Loop: libuv"
] | RichardLions | 1 |
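For readers hitting this: per the report, the fix is `close()` (or the `with` form, which according to the gevent docs closes the timeout automatically; verify on your version) rather than bare `cancel()`. A sketch of both shapes of the loop body from the repro above:

```python
import random

import gevent

def sleep_with_deadline():
    # Context-manager form: leaving the block both cancels the timeout
    # and closes it, freeing the native watcher.
    with gevent.Timeout(seconds=5):
        gevent.sleep(random.uniform(0.25, 2.0))

def sleep_with_deadline_explicit():
    # Explicit form matching the report: close(), not cancel().
    timeout = gevent.Timeout(seconds=5)
    timeout.start()
    try:
        gevent.sleep(random.uniform(0.25, 2.0))
    finally:
        timeout.close()  # releases the watcher; cancel() alone leaves it live
```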
babysor/MockingBird | deep-learning | 692 | long_file_cut_by_srt.py", line 46, in cut_file_by_srt if end_time - start_time <= min_length or len(line[2].replace(" ", "")) < min_text: IndexError: list index out of range | long_file_cut_by_srt.py", line 46, in cut_file_by_srt
if end_time - start_time <= min_length or len(line[2].replace(" ", "")) < min_text:
IndexError: list index out of range
It seems the file is too long to cut. With the new version I also can't get past that page; it keeps showing some Qt prompt that I can't find anything about.
| open | 2022-08-01T08:10:48Z | 2022-08-01T08:10:48Z | https://github.com/babysor/MockingBird/issues/692 | [] | b95595 | 0 |
ageitgey/face_recognition | machine-learning | 1,288 | Append new entries to pickle file (KNNClassifier object) | * face_recognition version: v1.22
* Python version: 3.6
* Operating System: Mac
### Description
I am trying to add new encodings and names to saved pickle file (KNNClassifier object) - but unable to append.
### What I Did
```
# Save the trained KNN classifier
if os.path.getsize(model_save_path) > 0:
if model_save_path is not None:
with open(model_save_path, 'rb') as f:
unpickler = pickle.Unpickler(f)
clf = unpickler.load()
newEncodings = X, y
clf.append(newEncodings)
with open(model_save_path,'wb') as f:
pickle.dump(clf, f)
else:
if model_save_path is not None:
with open(model_save_path, 'wb') as f:
pickle.dump(knn_clf, f)
```
Getting error: `'KNeighborsClassifier' object has no attribute 'append'`. Is there any way to achieve this? Please advise.
Another question: if I retrain on all images for every new training request, is that going to impact the verification process while the pickle file is in use, or can the OS handle that?
I am working on moving to MySQL, if anyone did this please share your thoughts. Thank you! | closed | 2021-02-24T06:39:40Z | 2021-03-07T15:05:37Z | https://github.com/ageitgey/face_recognition/issues/1288 | [] | rathishkumar | 3 |
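`KNeighborsClassifier` has no `append` (and no `partial_fit`), so the fitted classifier object itself cannot be extended; the workable pattern is to persist the raw encodings and names, extend those, and refit. A dependency-free sketch of the load-extend-save step (the actual `KNeighborsClassifier(...).fit(X, y)` refit is left as a comment so the example stays runnable without scikit-learn):

```python
import os
import pickle

def append_training_data(path, new_X, new_y):
    """Load (X, y) lists from a pickle (if present), extend, and re-save."""
    X, y = [], []
    if os.path.exists(path) and os.path.getsize(path) > 0:
        with open(path, "rb") as f:
            X, y = pickle.load(f)
    X.extend(new_X)
    y.extend(new_y)
    with open(path, "wb") as f:
        pickle.dump((X, y), f)
    # knn_clf = KNeighborsClassifier(...); knn_clf.fit(X, y)  # refit here,
    # then pickle the refit classifier separately if you want a hot copy.
    return X, y
```

Writing the refit classifier to a temporary file and atomically renaming it over the old one avoids readers seeing a half-written pickle during verification.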
ageitgey/face_recognition | python | 923 | Question: will this work with Google Coral USB accelerator? | I see references to the Jetson Nano, but I would like to know if this would use the TPU processor on a normal PC (Intel NUC) with a Google Coral USB accelerator. If that's the case, it would be nice to add a reference to this in the README. Thanks! | closed | 2019-09-02T12:06:29Z | 2021-03-09T12:30:29Z | https://github.com/ageitgey/face_recognition/issues/923 | [] | juanjux | 5 |
ccxt/ccxt | api | 24,727 | KuCoin watchTickers returning undefined on many values. | ### Operating System
MacOs
### Programming Languages
JavaScript
### CCXT Version
4.4.44
### Description
As of several hours ago KuCoin watchTickers is returning undefined on several of the values. KuCoin is returning partial updates. I haven't found anything in the KuCoin api changelog to suggest a change.
```
{
"symbol":"BTC/USDT",
"timestamp":1735847125558,
"datetime":"2025-01-02T19:45:25.558Z",
"high":"undefined",
"low":"undefined",
"bid":97469.5,
"bidVolume":0.02904654,
"ask":97469.6,
"askVolume":2.37966158,
"vwap":"undefined",
"open":"undefined",
"close":97462.3,
"last":97462.3,
"previousClose":"undefined",
"change":"undefined",
"percentage":"undefined",
"average":"undefined",
"baseVolume":"undefined",
"quoteVolume":"undefined",
"markPrice":"undefined",
"info":{
"bestAsk":"97469.6",
"bestAskSize":"2.37966158",
"bestBid":"97469.5",
"bestBidSize":"0.02904654",
"price":"97462.3",
"sequence":"16166957352",
"size":"0.0001",
"time":1735847125558
},
"indexPrice":"undefined"
}
```
### Code
```
while (true) {
const res = await kucoin.watchTickers(['BTC/USDT'])
console.log(res)
}
```
| closed | 2025-01-02T19:51:08Z | 2025-02-07T12:04:52Z | https://github.com/ccxt/ccxt/issues/24727 | [] | Vk1511 | 3 |
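Whatever the upstream resolution, a client-side stopgap is to overlay each partial update onto the last full snapshot so only the fields the exchange actually sent get replaced. Sketched in Python to match the rest of this collection; in the JS code above the same idea is filtering out `undefined` fields before merging (field names follow ccxt's unified ticker structure):

```python
def merge_ticker(cached, update):
    """Keep the last known value for any field the partial update omits."""
    merged = dict(cached or {})
    for key, value in update.items():
        if value is not None:  # in JS: skip fields that are undefined
            merged[key] = value
    return merged

last = {"high": 98000.0, "low": 95000.0, "last": 97500.0}
partial = {"high": None, "low": None, "last": 97462.3, "bid": 97469.5}
print(merge_ticker(last, partial))
# → {'high': 98000.0, 'low': 95000.0, 'last': 97462.3, 'bid': 97469.5}
```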
dmlc/gluon-cv | computer-vision | 953 | test.py in GluonCV does not work | I tried Semantic Segmentation in GluonCV: a Deep Learning Toolkit for Computer Vision follwing the instructions as follows.
Environments: Python 3.6 in ANACONDA2019.03
Framework: mxnet-mkl-1.5.0 and gluoncv-0.5.0
python test.py --dataset ade20k --model-zoo fcn_resnet50_ade --eval
But it does not work due to the follwong error.
test.py: error: unrecognized arguments: --model-zoo fcn_resnet50_ade
What's wrong? Please help me.
| closed | 2019-09-21T02:39:16Z | 2019-12-10T07:34:27Z | https://github.com/dmlc/gluon-cv/issues/953 | [] | HisashiShimodaira | 10 |
erdewit/ib_insync | asyncio | 1 | Error validating request:-'bY' : cause - The API interface is currently in Read-Only mode. | @erdewit Thanks for sharing this new Python framework. I just gave your example a try, but I got the following error. Does it require write access even for downloading historical data? The reason I set my gateway to read-only mode is to prevent any mistake because I am not ready to place any orders through the API yet. Thanks!
ERROR:ib_insync.wrapper:Error 321, reqId 2147483647: Error validating request:-'bY' : cause - The API interface is currently in Read-Only mode.
ERROR:ib_insync.wrapper:Error 321, reqId 2147483647: Error validating request:-'a0' : cause - The account code is required for this operation.
```python
from ib_insync import *
ib = IB()
ib.connect('127.0.0.1', 4003, clientId=1)
bars = ib.reqHistoricalData(
contract=Stock('TSLA', 'SMART', 'USD'),
endDateTime='',
durationStr='30 D',
barSizeSetting='1 hour',
whatToShow='TRADES',
useRTH=True)
print(bars)
``` | closed | 2017-07-13T00:08:49Z | 2024-02-16T16:47:27Z | https://github.com/erdewit/ib_insync/issues/1 | [] | grandtiger | 10 |
serengil/deepface | machine-learning | 947 | Custom model/detector backend support | I noticed that DeepFace uses a dictionary to describe the model/backend; would DeepFace be able to accept custom models/backends via a custom loadModel/build_model? Thank you! | closed | 2024-01-08T14:05:33Z | 2024-01-08T14:06:48Z | https://github.com/serengil/deepface/issues/947 | [
"question"
] | xfqwdsj | 1 |
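A minimal sketch of the dictionary-dispatch idea the question refers to — a registry mapping model names to build functions, so a custom backend is just one more entry. All names here are hypothetical illustrations, not DeepFace's actual API:

```python
# Hypothetical registry pattern: model name -> zero-argument build function.
def build_toy_model():
    class ToyModel:
        input_shape = (160, 160)

        def predict(self, batch):
            # Stand-in embedding: 128 zeros per input face.
            return [[0.0] * 128 for _ in batch]

    return ToyModel()

MODEL_REGISTRY = {"ToyModel": build_toy_model}

def build_model(name):
    # Dispatch through the registry; unknown names fail loudly.
    if name not in MODEL_REGISTRY:
        raise ValueError(f"unknown model: {name}")
    return MODEL_REGISTRY[name]()

embedding = build_model("ToyModel").predict(["face.jpg"])[0]
print(len(embedding))  # 128
```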
matplotlib/mplfinance | matplotlib | 354 | Change scatter markers edgecolor and/or edgewidth | Hi there,
while playing with the alpha setting (alpha=0.1) I noticed that the markers have a border. Is that a feature, or is there any way to disable it?

```python
if df.signal_bull_week.notna().sum() > 0:
    signal_bull_week = mpf.make_addplot(df.signal_bull_week - 1 * offset,
                                        scatter=True,
                                        markersize=40,
                                        marker='^',
                                        alpha=0.1,
                                        color='black')
    add_plots.append(signal_bull_week)
if df.signal_bear_week.notna().sum() > 0:
    signal_bear_week = mpf.make_addplot(df.signal_bear_week + 1 * offset,
                                        scatter=True,
                                        marker='v',
                                        markersize=40,
                                        alpha=0.1,
                                        color='black')
    add_plots.append(signal_bear_week)
``` | closed | 2021-03-17T07:23:36Z | 2021-12-23T12:52:24Z | https://github.com/matplotlib/mplfinance/issues/354 | [
"enhancement",
"good first issue",
"released"
] | fxhuhn | 12 |
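In plain Matplotlib the border comes from the marker's edge color, and `scatter` can suppress it with `edgecolors='none'` (or `linewidths=0`). Whether `make_addplot` forwards these kwargs in a given mplfinance version is not certain, so this sketch shows only the underlying Matplotlib behaviour:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, no display needed
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
# edgecolors='none' removes the marker border that becomes visible at low
# alpha; linewidths=0 is an equivalent knob.
pc = ax.scatter([1, 2, 3], [1, 2, 3], s=40, marker="^",
                alpha=0.1, color="black", edgecolors="none")
print(pc.get_alpha())  # 0.1
```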
gradio-app/gradio | data-visualization | 10,645 | ChatInterface displays errors using custom chatbox and save_history | ### Describe the bug
ChatInterface displays errors using custom chatbox and save_history
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
def yes(message, history):
return "yes"
def vote(data: gr.LikeData):
if data.liked:
print("You upvoted this response: " + data.value["value"])
else:
print("You downvoted this response: " + data.value["value"])
with gr.Blocks() as demo:
chatbot = gr.Chatbot(placeholder="<strong>Your Personal Yes-Man</strong><br>Ask Me Anything")
chatbot.like(vote, None, None)
gr.ChatInterface(fn=yes, type="messages",save_history=True, chatbot=chatbot)
demo.launch()
```
### Screenshot

### Logs
```shell
```
### System Info
```shell
Operating System: Windows
gradio version: 5.17.0
gradio_client version: 1.7.1
```
### Severity
I can work around it | open | 2025-02-21T03:00:04Z | 2025-02-21T07:35:38Z | https://github.com/gradio-app/gradio/issues/10645 | [
"bug",
"💬 Chatbot"
] | eaxts | 0 |
mkhorasani/Streamlit-Authenticator | streamlit | 95 | Login needs 2x click (authentication_status possible error) | Hi, I think the authentication_status is not benig able to persist across multiple pages. Please see this page for more information (source code + issues):
https://discuss.streamlit.io/t/login-button-random-behavior/54909
@mkhorasani can you please help? It would be greatly appreciated.
"help wanted"
] | yash2mehta | 2 |
lgienapp/aquarel | data-visualization | 15 | How does this compare to the official matplotlib stylesheet? | This article shows how to use it https://www.datafantic.com/the-magic-of-matplotlib-stylesheets/.
Despite the different format, it looks like another way to do declarative templating.
"documentation",
"question"
] | mazzma12 | 4 |
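Both approaches boil down to a key–value override of rcParams — a stylesheet is essentially a flat mapping applied on top of the defaults. A dependency-free sketch of what such a sheet carries (keys follow matplotlibrc conventions; the parser is illustrative, not Matplotlib's):

```python
# A minimal .mplstyle-like sheet and a toy parser for it.
style = """
axes.facecolor: white
axes.grid: True
grid.color: 0.85
"""

def parse_style(text):
    params = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(":")
        params[key.strip()] = value.strip()
    return params

print(parse_style(style)["axes.grid"])  # True
```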
CorentinJ/Real-Time-Voice-Cloning | python | 401 | Toolbox not working with python3.8 | Hello All,
Debian 11 Bullseye.
Started with python3.7.x and python3.8.x, and Debian recently updated to a newer python3.8 while deprecating python3.7. Tried installing python3.7 from source with the altinstall switch, but for some reason python3.7 is never seen as an alternative. From a LOT of reading, python3.8 will only accept tensorflow 2.x, and this is the major downfall for python3.8 and Real-Time-Voice-Cloning. Had Real-Time-Voice-Cloning working great in a virtualenv until Debian suggested & performed an autoremove of python3.7 ('no longer needed').
Have spent quite a bit of time trying to figure this out, and still no joy.
Thank You
| closed | 2020-07-05T16:32:20Z | 2020-07-07T01:18:05Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/401 | [] | brcisna | 19 |
rougier/scientific-visualization-book | matplotlib | 4 | re: Textual contours example | You may be interested by https://github.com/matplotlib/matplotlib/pull/16171? | open | 2020-11-06T12:50:09Z | 2020-11-06T22:00:08Z | https://github.com/rougier/scientific-visualization-book/issues/4 | [] | anntzer | 4 |
Gerapy/Gerapy | django | 50 | How do I fix this gerapy error on Python 2.7? | File "D:\python27\lib\site-packages\django\db\backends\base\base.py", line 213, in ensure_connection
self.connect()
File "D:\python27\lib\site-packages\django\db\backends\base\base.py", line 189, in connect
self.connection = self.get_new_connection(conn_params)
File "D:\python27\lib\site-packages\django\db\backends\sqlite3\base.py", line 198, in get_new_connection
conn = Database.connect(**conn_params)
django.db.utils.OperationalError: unable to open database file | closed | 2018-04-02T15:06:16Z | 2018-04-09T03:20:47Z | https://github.com/Gerapy/Gerapy/issues/50 | [] | zhisiying | 2 |
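The `unable to open database file` message is general SQLite behaviour, not gerapy-specific: it usually means the database path (or its parent directory) does not exist or is not writable from the working directory the process runs in. A standalone check of both cases:

```python
import os
import sqlite3
import tempfile

# sqlite3 can create the database file itself, but only if the parent
# directory exists and is writable.
workdir = tempfile.mkdtemp()
ok_path = os.path.join(workdir, "db.sqlite3")
sqlite3.connect(ok_path).close()
print(os.path.exists(ok_path))  # True

# A missing parent directory reproduces Django's underlying error.
bad_path = os.path.join(workdir, "missing", "db.sqlite3")
try:
    sqlite3.connect(bad_path)
except sqlite3.OperationalError as e:
    print(e)  # unable to open database file
```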
amidaware/tacticalrmm | django | 1,437 | From the Software list on a computer it would be nice to have option to uninstall an application | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| closed | 2023-02-16T19:22:50Z | 2023-02-16T19:35:18Z | https://github.com/amidaware/tacticalrmm/issues/1437 | [] | cwhitmore88 | 4 |
jumpserver/jumpserver | django | 14,254 | [Bug] Restarting the container deletes persisted data | ### Product Version
v4.2.0
### Version Type
- [X] Community Edition
- [ ] Enterprise Edition
- [ ] Enterprise Trial Edition
### Installation Method
- [ ] Online installation (one-command install)
- [ ] Offline package installation
- [X] All-in-One
- [ ] 1Panel
- [ ] Kubernetes
- [ ] Source installation
### Environment Information
docker
### 🐛 Bug Description
Every time the container is restarted, persisted data such as session recordings and logs is deleted.
### Steps to Reproduce
Restart the container.
### Expected Result
_No response_
### Additional Information
_No response_
### Attempted Solutions
_No response_ | closed | 2024-09-30T09:42:27Z | 2024-10-11T02:47:36Z | https://github.com/jumpserver/jumpserver/issues/14254 | [
"🐛 Bug"
] | CooperWanng | 2 |
LibrePhotos/librephotos | django | 902 | Opening shared album results in a blank page | # 🐛 Bug Report
* [ ] 📁 I've Included a ZIP file containing my librephotos `log` files
* [x] ❌ I have looked for similar issues (including closed ones)
* [x] 🎬 (If applicable) I've provided pictures or links to videos that clearly demonstrate the issue
## 📝 Description of issue:
When a user tries to open an album I shared, the page turns blank with a 404 error in the browser network tab and unauthorized entries on the backend log

## 🔁 How can we reproduce it:
I tried this on the demo server:
Create user, share album with said user, logout and login with the newly created user, navigate to the shared albums and try to open the shared album
To test directly I guess you can login to the demo server with test/thisisatest and navigate to https://demo2.librephotos.com/useralbum/1/
| closed | 2023-06-26T20:42:05Z | 2023-06-30T07:34:11Z | https://github.com/LibrePhotos/librephotos/issues/902 | [
"bug"
] | RandomHacks-Git | 3 |
ultralytics/yolov5 | deep-learning | 13,015 | scale_masks fucntion | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Dear Ultralytics!
I faced a problem when I was trying to resize the masks I got after the inference. In the documentation you have the function that does it:
```
import torch.nn.functional as F

def scale_masks(masks, shape, padding=True):
"""
Rescale segment masks to shape.
Args:
masks (torch.Tensor): (N, C, H, W).
shape (tuple): Height and width.
padding (bool): If True, assuming the boxes is based on image augmented by yolo style. If False then do regular
rescaling.
"""
mh, mw = masks.shape[2:]
gain = min(mh / shape[0], mw / shape[1]) # gain = old / new
pad = [mw - shape[1] * gain, mh - shape[0] * gain] # wh padding
if padding:
pad[0] /= 2
pad[1] /= 2
top, left = (int(pad[1]), int(pad[0])) if padding else (0, 0) # y, x
bottom, right = (int(mh - pad[1]), int(mw - pad[0]))
masks = masks[..., top:bottom, left:right]
masks = F.interpolate(masks, shape, mode="bilinear", align_corners=False) # NCHW
return masks
```
After running `masks = scale_masks(results[0].masks, (w, h))`, I received this error:
```
Traceback (most recent call last):
File "proj.py", line 169, in <module>
masks = scale_masks(results[0].masks, (w, h))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "proj.py", line 60, in scale_masks
mh, mw = masks.shape[2:]
^^^^^^
ValueError: not enough values to unpack (expected 2, got 1)
```
and I do not understand what is wrong, since there's not much I could have possibly disrupted in the process (the `results` variable is calculated as `results = model(img)`).
### Additional
_No response_ | closed | 2024-05-15T14:42:02Z | 2024-06-25T00:22:13Z | https://github.com/ultralytics/yolov5/issues/13015 | [
"question",
"Stale"
] | polinamalova0 | 2 |
miguelgrinberg/microblog | flask | 1 | setup instructions don't mention python.h dependency | I set up a brand new Ubuntu 12.04 LTS system in a virtual machine and ran the full set of apt-get updates, upgrades and dist-upgrades.
I then downloaded and decompressed the zip version of this project and ran:
./setup.py
sqlalchemy and coverage both choke on lack of python.h
The fix is to run:
sudo apt-get -y install python-dev
| closed | 2013-06-29T02:03:02Z | 2013-06-30T12:43:14Z | https://github.com/miguelgrinberg/microblog/issues/1 | [] | martinhbramwell | 1 |