| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
langmanus/langmanus | automation | 69 | ValueError: Received unsupported arguments {'method': 'json_mode'} | How to handle this? | closed | 2025-03-20T08:13:32Z | 2025-03-20T09:00:00Z | https://github.com/langmanus/langmanus/issues/69 | [] | GitHubZkLn | 2 |
flairNLP/flair | nlp | 2,747 | resume training option for custom language model |
**Is your feature/enhancement request related to a problem? Please describe.**
I could not find a way to resume training from a stored checkpoint for the language model.
**Describe the solution you'd like**
In language_model_trainer.py there exists a static method `load_checkpoint` which returns a LanguageModelTrainer (loss, optimizer state dict, model, etc.), but this method is not used anywhere further.
I also added another static method, `load_from_checkpoint()`, to language_model_trainer.py as described in https://github.com/flairNLP/flair/commit/c25c89f0a94c8c0879d052530502e60f3e8e421a,
but I still cannot use it: it raises an error saying the module is not registered, and if I pass the checkpoint to the LanguageModelTrainer as a dict I get an error that it is not subscriptable.
**Additional context**
Similar to the NER model training resume logic in trainer.py, would it be possible to add equivalent code to language_model_trainer.py?
Please let me know the possibilities and solutions so that I can contribute these code patches to implement a resume-training option for the language model, similar to the NER model.
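For reference, the resume pattern being requested usually boils down to serializing the training position, model state, and optimizer state together and restoring all three on startup. A framework-agnostic sketch (the helper names are hypothetical, not Flair's actual API):

```python
import pickle

def save_checkpoint(path, epoch, model_state, optimizer_state):
    # Persist everything needed to resume: position in training plus both state dicts.
    with open(path, "wb") as f:
        pickle.dump({"epoch": epoch, "model": model_state,
                     "optimizer": optimizer_state}, f)

def load_checkpoint(path):
    # Return the saved epoch and states so the trainer can pick up where it left off.
    with open(path, "rb") as f:
        ckpt = pickle.load(f)
    return ckpt["epoch"], ckpt["model"], ckpt["optimizer"]
```

In a real trainer the two state dicts would come from `model.state_dict()` and `optimizer.state_dict()`, and the loop counter would restart from the restored epoch.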
regards,
Pushpalatha M | closed | 2022-04-29T11:23:10Z | 2022-11-01T15:05:02Z | https://github.com/flairNLP/flair/issues/2747 | [
"wontfix"
] | pushpalatha1405 | 1 |
ageitgey/face_recognition | python | 712 | pyinstaller face_recognition | * face_recognition version:
* Python version:3.7.1
* Operating System:windows 10
### Description
When I use PyInstaller to generate the exe file there is no problem, but when I run the exe, I get the following result:
E:\Project\FaceRecognization\face_recognition_models\models\dlib_face_recognition_resnet_model_v1.dat could not be extracted!
fopen: Invalid argument
### What I Did
I don't know what to do. I tried installing dlib again, but with no success.
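A common cause of "could not be extracted" errors like the one above is that PyInstaller does not bundle the `.dat` model files from `face_recognition_models`. A typical fix (the site-packages path below is an assumption; adjust it to your environment) is to include the models directory explicitly:

```shell
# Bundle the dlib model files alongside the exe (Windows uses ';' as the
# --add-data separator; POSIX systems use ':' instead).
pyinstaller your_script.py ^
  --add-data "C:\path\to\site-packages\face_recognition_models\models;face_recognition_models\models"
```

The same entry can instead be added to the `datas` list of a `.spec` file if you build from one.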
| open | 2018-12-30T21:14:55Z | 2020-02-20T09:51:16Z | https://github.com/ageitgey/face_recognition/issues/712 | [] | RobertWang2 | 3 |
great-expectations/great_expectations | data-science | 10,849 | Incorrect validation_result["results"]["exception_info"] structure when raised_exception == True | **Describe the bug**
When `raised_exception == True`, `exception_info` has an incorrect structure.
Instead of `{'raised_exception': True, 'exception_traceback': 'The traceback', 'exception_message': 'some message'}`, it has the following structure:
`{"additional_key": {'raised_exception': True, 'exception_traceback': 'The traceback', 'exception_message': 'some message'}}`
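Until the structure is fixed upstream, a minimal client-side workaround (my own sketch, not an official API) is to unwrap the extra key when it is present:

```python
def normalize_exception_info(info: dict) -> dict:
    """Unwrap exception_info dicts that are nested under an internal metric key."""
    if "raised_exception" not in info and len(info) == 1:
        # The only key is the internal metric identifier; the payload is its value.
        return next(iter(info.values()))
    return info

nested = {"('column_values.nonnull.condition', 'abc', ())": {
    "raised_exception": True, "exception_traceback": "tb", "exception_message": "msg"}}
flat = {"raised_exception": False, "exception_traceback": None, "exception_message": None}

print(normalize_exception_info(nested)["raised_exception"])  # True
print(normalize_exception_info(flat)["raised_exception"])    # False
```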
**To Reproduce**
```python
# df to validate
df = spark.sql("""
    SELECT id, CASE WHEN id%4 = 0 THEN "NOT NULL" END AS colname
    FROM range(1, 100)""")

# update expectation suite
suite_name = "e_simple_unit_test"
suite = context.suites.add_or_update(gx.ExpectationSuite(name=suite_name))
correct_column_name = gx.expectations.ExpectColumnValuesToNotBeNull(
    column="colname", mostly=1, row_condition="id%2 = 0", condition_parser="spark")
incorrect_column_name = gx.expectations.ExpectColumnValuesToNotBeNull(
    column="___colname___", mostly=1, row_condition="id%2 = 0", condition_parser="spark")
suite.add_expectation(correct_column_name)
suite.add_expectation(incorrect_column_name)
suite.save()

# update validation
data_source_name = data_source_configs["data_source_name"]
data_asset_name = data_source_configs["data_asset_name"]
batch_definition_name = data_source_configs["batch_definition_name"]
batch_definition = context.data_sources.get(data_source_name).get_asset(data_asset_name).get_batch_definition(batch_definition_name)
validation_definition_name = "unit_test_validation_definition"
validation_definition = gx.ValidationDefinition(
    data=batch_definition, suite=suite, name=validation_definition_name
)
unit_test_validation_definition = context.validation_definitions.add_or_update(validation_definition)

# run the ValidationDefinition
validation_results = unit_test_validation_definition.run(
    batch_parameters={"dataframe": df},
    result_format="COMPLETE")
results_dict = validation_results.to_json_dict()

for dct in results_dict["results"]:
    if "exception_message" in dct["exception_info"]:
        print("\nCorrect exception_info structure:")
    else:
        print("\nIncorrect exception_info structure:")
    print(dct["exception_info"])
```
returns:
```
Incorrect exception_info structure:
{"('column_values.nonnull.condition', '242ce27d28b7ac28fe08ad7be0377b1a', ())": {'exception_traceback': 'Traceback.......', 'exception_message': 'Error: The column "___colname___" in BatchData does not exist.', 'raised_exception': True}}
Correct exception_info structure:
{'raised_exception': False, 'exception_traceback': None, 'exception_message': None}
```
**Expected behavior**
```
Correct exception_info structure:
{'raised_exception': True, 'exception_traceback': 'Traceback.......', 'exception_message': 'Error: The column "___colname___" in BatchData does not exist.'}
Correct exception_info structure:
{'raised_exception': False, 'exception_traceback': None, 'exception_message': None}
```
**Environment (please complete the following information):**
- Great Expectations Version: [e.g. 1.3.1]
- Data Source: Spark
- Cloud environment: Databricks
| open | 2025-01-13T08:49:30Z | 2025-02-12T21:51:03Z | https://github.com/great-expectations/great_expectations/issues/10849 | [
"bug"
] | vasilijyaromenka | 4 |
deepspeedai/DeepSpeed | deep-learning | 5,636 | [BUG] 4-bit quantized models would repeatedly generate the same tokens when bf16.enabled is true | **Describe the bug**
When I set `bf16.enabled` to `true` and enable `weight_quantization.quantized_initialization`, the model repeatedly generates the same token.
**To Reproduce**
Run the following code
```python
from typing import cast
from transformers.models.llama.modeling_llama import LlamaDecoderLayer
from deepspeed.module_inject.containers.llama import LLAMALayerPolicy
from functools import wraps
if not getattr(LLAMALayerPolicy, "is_get_hidden_heads_patched", False):
# Apply the monkey patch copied from https://github.com/microsoft/DeepSpeed/pull/5624
@wraps(LLAMALayerPolicy.get_hidden_heads)
def patched_get_hidden_heads(self: LLAMALayerPolicy) -> tuple[int, int, float, int]:
client_module = cast(LlamaDecoderLayer, self.client_module)
hidden_heads = (
client_module.self_attn.q_proj.in_features,
client_module.self_attn.num_heads,
client_module.input_layernorm.variance_epsilon,
client_module.mlp.gate_proj.out_features,
)
return hidden_heads
LLAMALayerPolicy.get_hidden_heads = patched_get_hidden_heads
setattr(LLAMALayerPolicy, "is_get_hidden_heads_patched", True)
from os import environ
rank = 0
environ["RANK"] = str(rank)
local_rank = 0
environ["LOCAL_RANK"] = str(local_rank)
world_size = 1
environ["WORLD_SIZE"] = str(world_size)
deepspeed_config = {
"zero_optimization": {
"load_from_fp32_weights": False,
"stage": 3,
"zero_quantized_weights": True,
"zero_quantized_nontrainable_weights": True,
},
"train_micro_batch_size_per_gpu": 1,
"bf16": {"enabled": True},
"weight_quantization": {
"quantized_initialization": {
"num_bits": 4,
"group_size": 64,
"group_dim": 1,
"symmetric": False,
}
},
}
from transformers.integrations.deepspeed import HfDeepSpeedConfig
hf_deepspeed_config = HfDeepSpeedConfig(deepspeed_config)
import deepspeed.comm
deepspeed.comm.init_distributed(
dist_backend="nccl",
rank=rank,
world_size=world_size,
auto_mpi_discovery=False,
init_method=f"tcp://127.0.0.1:9999",
)
from transformers import AutoModelForCausalLM
import torch
model = AutoModelForCausalLM.from_pretrained(
"kevin009/babyllama-v0.6",
torch_dtype=torch.bfloat16,
use_flash_attention_2=True,
)
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("kevin009/babyllama-v0.6")
from deepspeed.runtime.config import DeepSpeedConfig
from deepspeed import DeepSpeedEngine
deepspeed_engine = DeepSpeedEngine(
args={},
model=model,
config=deepspeed_config,
config_class=DeepSpeedConfig(deepspeed_config),
)
from transformers import GenerationConfig
with torch.no_grad():
deepspeed_engine.eval()
print(tokenizer.batch_decode(deepspeed_engine.generate(
torch.tensor([[tokenizer.bos_token_id]], dtype=torch.int, device=deepspeed_engine.device),
synced_gpus=True,
generation_config=GenerationConfig(max_new_tokens=20),
)))
```
Then the output is
```
Using quantizer for weights: CUDAQuantizer
[2024-06-10 21:48:40,386] [INFO] [partition_parameters.py:562:patch_init_and_builtins] Enable Zero3 engine with INT4 quantization.
[2024-06-10 21:48:40,670] [INFO] [partition_parameters.py:345:__exit__] finished initializing model - num_params = 1005, num_elems = 5.50B
[2024-06-10 21:48:44,741] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Flops Profiler Enabled: False
[2024-06-10 21:48:44,743] [INFO] [logging.py:96:log_dist] [Rank 0] Creating ZeRO Offload
[2024-06-10 21:48:44,972] [INFO] [utils.py:779:see_memory_usage] DeepSpeedZeRoOffload initialize [begin]
[2024-06-10 21:48:44,973] [INFO] [utils.py:780:see_memory_usage] MA 2.96 GB Max_MA 3.33 GB CA 3.56 GB Max_CA 4 GB
[2024-06-10 21:48:44,974] [INFO] [utils.py:787:see_memory_usage] CPU Virtual Memory: used = 7.48 GB, percent = 23.8%
Parameter Offload: Total persistent parameters: 92160 in 45 params
[2024-06-10 21:48:45,191] [INFO] [utils.py:779:see_memory_usage] DeepSpeedZeRoOffload initialize [end]
[2024-06-10 21:48:45,192] [INFO] [utils.py:780:see_memory_usage] MA 2.96 GB Max_MA 2.96 GB CA 3.56 GB Max_CA 4 GB
[2024-06-10 21:48:45,192] [INFO] [utils.py:787:see_memory_usage] CPU Virtual Memory: used = 7.48 GB, percent = 23.8%
[2024-06-10 21:48:45,193] [INFO] [config.py:996:print] DeepSpeedEngine configuration:
[2024-06-10 21:48:45,194] [INFO] [config.py:1000:print] activation_checkpointing_config {
"partition_activations": false,
"contiguous_memory_optimization": false,
"cpu_checkpointing": false,
"number_checkpoints": null,
"synchronize_checkpoint_boundary": false,
"profile": false
}
[2024-06-10 21:48:45,194] [INFO] [config.py:1000:print] aio_config ................... {'block_size': 1048576, 'queue_depth': 8, 'thread_count': 1, 'single_submit': False, 'overlap_events': True}
[2024-06-10 21:48:45,195] [INFO] [config.py:1000:print] amp_enabled .................. False
[2024-06-10 21:48:45,196] [INFO] [config.py:1000:print] amp_params ................... False
[2024-06-10 21:48:45,197] [INFO] [config.py:1000:print] autotuning_config ............ {
"enabled": false,
"start_step": null,
"end_step": null,
"metric_path": null,
"arg_mappings": null,
"metric": "throughput",
"model_info": null,
"results_dir": "autotuning_results",
"exps_dir": "autotuning_exps",
"overwrite": true,
"fast": true,
"start_profile_step": 3,
"end_profile_step": 5,
"tuner_type": "gridsearch",
"tuner_early_stopping": 5,
"tuner_num_trials": 50,
"model_info_path": null,
"mp_size": 1,
"max_train_batch_size": null,
"min_train_batch_size": 1,
"max_train_micro_batch_size_per_gpu": 1.024000e+03,
"min_train_micro_batch_size_per_gpu": 1,
"num_tuning_micro_batch_sizes": 3
}
[2024-06-10 21:48:45,197] [INFO] [config.py:1000:print] bfloat16_enabled ............. True
[2024-06-10 21:48:45,198] [INFO] [config.py:1000:print] bfloat16_immediate_grad_update False
[2024-06-10 21:48:45,199] [INFO] [config.py:1000:print] checkpoint_parallel_write_pipeline False
[2024-06-10 21:48:45,199] [INFO] [config.py:1000:print] checkpoint_tag_validation_enabled True
[2024-06-10 21:48:45,200] [INFO] [config.py:1000:print] checkpoint_tag_validation_fail False
[2024-06-10 21:48:45,200] [INFO] [config.py:1000:print] comms_config ................. <deepspeed.comm.config.DeepSpeedCommsConfig object at 0x7f03a121fd10>
[2024-06-10 21:48:45,201] [INFO] [config.py:1000:print] communication_data_type ...... None
[2024-06-10 21:48:45,202] [INFO] [config.py:1000:print] compile_config ............... enabled=False backend='inductor' kwargs={}
[2024-06-10 21:48:45,203] [INFO] [config.py:1000:print] compression_config ........... {'weight_quantization': {'shared_parameters': {'enabled': False, 'quantizer_kernel': False, 'schedule_offset': 0, 'quantize_groups': 1, 'quantize_verbose': False, 'quantization_type': 'symmetric', 'quantize_weight_in_forward': False, 'rounding': 'nearest', 'fp16_mixed_quantize': False, 'quantize_change_ratio': 0.001}, 'different_groups': {}}, 'activation_quantization': {'shared_parameters': {'enabled': False, 'quantization_type': 'symmetric', 'range_calibration': 'dynamic', 'schedule_offset': 1000}, 'different_groups': {}}, 'sparse_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'row_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'head_pruning': {'shared_parameters': {'enabled': False, 'method': 'topk', 'schedule_offset': 1000}, 'different_groups': {}}, 'channel_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'layer_reduction': {'enabled': False}}
[2024-06-10 21:48:45,203] [INFO] [config.py:1000:print] curriculum_enabled_legacy .... False
[2024-06-10 21:48:45,204] [INFO] [config.py:1000:print] curriculum_params_legacy ..... False
[2024-06-10 21:48:45,204] [INFO] [config.py:1000:print] data_efficiency_config ....... {'enabled': False, 'seed': 1234, 'data_sampling': {'enabled': False, 'num_epochs': 1000, 'num_workers': 0, 'curriculum_learning': {'enabled': False}}, 'data_routing': {'enabled': False, 'random_ltd': {'enabled': False, 'layer_token_lr_schedule': {'enabled': False}}}}
[2024-06-10 21:48:45,204] [INFO] [config.py:1000:print] data_efficiency_enabled ...... False
[2024-06-10 21:48:45,205] [INFO] [config.py:1000:print] dataloader_drop_last ......... False
[2024-06-10 21:48:45,205] [INFO] [config.py:1000:print] disable_allgather ............ False
[2024-06-10 21:48:45,206] [INFO] [config.py:1000:print] dump_state ................... False
[2024-06-10 21:48:45,206] [INFO] [config.py:1000:print] dynamic_loss_scale_args ...... None
[2024-06-10 21:48:45,207] [INFO] [config.py:1000:print] eigenvalue_enabled ........... False
[2024-06-10 21:48:45,207] [INFO] [config.py:1000:print] eigenvalue_gas_boundary_resolution 1
[2024-06-10 21:48:45,208] [INFO] [config.py:1000:print] eigenvalue_layer_name ........ bert.encoder.layer
[2024-06-10 21:48:45,208] [INFO] [config.py:1000:print] eigenvalue_layer_num ......... 0
[2024-06-10 21:48:45,209] [INFO] [config.py:1000:print] eigenvalue_max_iter .......... 100
[2024-06-10 21:48:45,209] [INFO] [config.py:1000:print] eigenvalue_stability ......... 1e-06
[2024-06-10 21:48:45,210] [INFO] [config.py:1000:print] eigenvalue_tol ............... 0.01
[2024-06-10 21:48:45,210] [INFO] [config.py:1000:print] eigenvalue_verbose ........... False
[2024-06-10 21:48:45,211] [INFO] [config.py:1000:print] elasticity_enabled ........... False
[2024-06-10 21:48:45,211] [INFO] [config.py:1000:print] flops_profiler_config ........ {
"enabled": false,
"recompute_fwd_factor": 0.0,
"profile_step": 1,
"module_depth": -1,
"top_modules": 1,
"detailed": true,
"output_file": null
}
[2024-06-10 21:48:45,211] [INFO] [config.py:1000:print] fp16_auto_cast ............... None
[2024-06-10 21:48:45,213] [INFO] [config.py:1000:print] fp16_enabled ................. False
[2024-06-10 21:48:45,214] [INFO] [config.py:1000:print] fp16_master_weights_and_gradients False
[2024-06-10 21:48:45,214] [INFO] [config.py:1000:print] global_rank .................. 0
[2024-06-10 21:48:45,215] [INFO] [config.py:1000:print] grad_accum_dtype ............. None
[2024-06-10 21:48:45,215] [INFO] [config.py:1000:print] gradient_accumulation_steps .. 1
[2024-06-10 21:48:45,215] [INFO] [config.py:1000:print] gradient_clipping ............ 0.0
[2024-06-10 21:48:45,216] [INFO] [config.py:1000:print] gradient_predivide_factor .... 1.0
[2024-06-10 21:48:45,216] [INFO] [config.py:1000:print] graph_harvesting ............. False
[2024-06-10 21:48:45,217] [INFO] [config.py:1000:print] hybrid_engine ................ enabled=False max_out_tokens=512 inference_tp_size=1 release_inference_cache=False pin_parameters=True tp_gather_partition_size=8
[2024-06-10 21:48:45,222] [INFO] [config.py:1000:print] initial_dynamic_scale ........ 1
[2024-06-10 21:48:45,223] [INFO] [config.py:1000:print] load_universal_checkpoint .... False
[2024-06-10 21:48:45,223] [INFO] [config.py:1000:print] loss_scale ................... 1.0
[2024-06-10 21:48:45,224] [INFO] [config.py:1000:print] memory_breakdown ............. False
[2024-06-10 21:48:45,224] [INFO] [config.py:1000:print] mics_hierarchial_params_gather False
[2024-06-10 21:48:45,224] [INFO] [config.py:1000:print] mics_shard_size .............. -1
[2024-06-10 21:48:45,225] [INFO] [config.py:1000:print] monitor_config ............... tensorboard=TensorBoardConfig(enabled=False, output_path='', job_name='DeepSpeedJobName') wandb=WandbConfig(enabled=False, group=None, team=None, project='deepspeed') csv_monitor=CSVConfig(enabled=False, output_path='', job_name='DeepSpeedJobName') enabled=False
[2024-06-10 21:48:45,225] [INFO] [config.py:1000:print] nebula_config ................ {
"enabled": false,
"persistent_storage_path": null,
"persistent_time_interval": 100,
"num_of_version_in_retention": 2,
"enable_nebula_load": true,
"load_path": null
}
[2024-06-10 21:48:45,226] [INFO] [config.py:1000:print] optimizer_legacy_fusion ...... False
[2024-06-10 21:48:45,226] [INFO] [config.py:1000:print] optimizer_name ............... None
[2024-06-10 21:48:45,227] [INFO] [config.py:1000:print] optimizer_params ............. None
[2024-06-10 21:48:45,227] [INFO] [config.py:1000:print] pipeline ..................... {'stages': 'auto', 'partition': 'best', 'seed_layers': False, 'activation_checkpoint_interval': 0, 'pipe_partitioned': True, 'grad_partitioned': True}
[2024-06-10 21:48:45,228] [INFO] [config.py:1000:print] pld_enabled .................. False
[2024-06-10 21:48:45,228] [INFO] [config.py:1000:print] pld_params ................... False
[2024-06-10 21:48:45,229] [INFO] [config.py:1000:print] prescale_gradients ........... False
[2024-06-10 21:48:45,229] [INFO] [config.py:1000:print] scheduler_name ............... None
[2024-06-10 21:48:45,229] [INFO] [config.py:1000:print] scheduler_params ............. None
[2024-06-10 21:48:45,230] [INFO] [config.py:1000:print] seq_parallel_communication_data_type torch.float32
[2024-06-10 21:48:45,230] [INFO] [config.py:1000:print] sparse_attention ............. None
[2024-06-10 21:48:45,231] [INFO] [config.py:1000:print] sparse_gradients_enabled ..... False
[2024-06-10 21:48:45,231] [INFO] [config.py:1000:print] steps_per_print .............. 10
[2024-06-10 21:48:45,232] [INFO] [config.py:1000:print] train_batch_size ............. 1
[2024-06-10 21:48:45,232] [INFO] [config.py:1000:print] train_micro_batch_size_per_gpu 1
[2024-06-10 21:48:45,233] [INFO] [config.py:1000:print] use_data_before_expert_parallel_ False
[2024-06-10 21:48:45,233] [INFO] [config.py:1000:print] use_node_local_storage ....... False
[2024-06-10 21:48:45,233] [INFO] [config.py:1000:print] wall_clock_breakdown ......... False
[2024-06-10 21:48:45,234] [INFO] [config.py:1000:print] weight_quantization_config ... q_type='symmetric' q_groups=1 enabled=True num_bits=8 quantized_initialization={'num_bits': 4, 'group_size': 64, 'group_dim': 1, 'symmetric': False} post_init_quant={}
[2024-06-10 21:48:45,234] [INFO] [config.py:1000:print] world_size ................... 1
[2024-06-10 21:48:45,235] [INFO] [config.py:1000:print] zero_allow_untested_optimizer False
[2024-06-10 21:48:45,235] [INFO] [config.py:1000:print] zero_config .................. stage=3 contiguous_gradients=True reduce_scatter=True reduce_bucket_size=500,000,000 use_multi_rank_bucket_allreduce=True allgather_partitions=True allgather_bucket_size=500,000,000 overlap_comm=True load_from_fp32_weights=False elastic_checkpoint=False offload_param=None offload_optimizer=None sub_group_size=1,000,000,000 cpu_offload_param=None cpu_offload_use_pin_memory=None cpu_offload=None prefetch_bucket_size=50,000,000 param_persistence_threshold=100,000 model_persistence_threshold=sys.maxsize max_live_parameters=1,000,000,000 max_reuse_distance=1,000,000,000 gather_16bit_weights_on_model_save=False stage3_gather_fp16_weights_on_model_save=False ignore_unused_parameters=True legacy_stage1=False round_robin_gradients=False zero_hpz_partition_size=1 zero_quantized_weights=True zero_quantized_nontrainable_weights=True zero_quantized_gradients=False mics_shard_size=-1 mics_hierarchical_params_gather=False memory_efficient_linear=True pipeline_loading_checkpoint=False override_module_apply=True
[2024-06-10 21:48:45,236] [INFO] [config.py:1000:print] zero_enabled ................. True
[2024-06-10 21:48:45,236] [INFO] [config.py:1000:print] zero_force_ds_cpu_optimizer .. True
[2024-06-10 21:48:45,236] [INFO] [config.py:1000:print] zero_optimization_stage ...... 3
[2024-06-10 21:48:45,237] [INFO] [config.py:986:print_user_config] json = {
"zero_optimization": {
"load_from_fp32_weights": false,
"stage": 3,
"zero_quantized_weights": true,
"zero_quantized_nontrainable_weights": true
},
"train_micro_batch_size_per_gpu": 1,
"bf16": {
"enabled": true
},
"weight_quantization": {
"quantized_initialization": {
"num_bits": 4,
"group_size": 64,
"group_dim": 1,
"symmetric": false
}
}
}
['<s> AltriAutres AltriAutres AltriAutres AltriAutres AltriAutres AltriAutres AltriAutres AltriAutres AltriAutres AltriAutres']
```
**Expected behavior**
The output should not be the same token ("AltriAutres") repeated over and over.
**ds_report output**
```
[2024-06-10 21:52:02,917] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.3
[WARNING] using untested triton version (2.3.0), only 1.0.0 is known to be compatible
--------------------------------------------------
DeepSpeed C++/CUDA extension op report
--------------------------------------------------
NOTE: Ops not installed will be just-in-time (JIT) compiled at
runtime if needed. Op compatibility means that your system
meet the required dependencies to JIT install the op.
--------------------------------------------------
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
async_io ............... [NO] ....... [OKAY]
fused_adam ............. [NO] ....... [OKAY]
cpu_adam ............... [NO] ....... [OKAY]
cpu_adagrad ............ [NO] ....... [OKAY]
cpu_lion ............... [NO] ....... [OKAY]
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
evoformer_attn ......... [NO] ....... [NO]
fp_quantizer ........... [NO] ....... [OKAY]
fused_lamb ............. [NO] ....... [OKAY]
fused_lion ............. [NO] ....... [OKAY]
inference_core_ops ..... [NO] ....... [OKAY]
cutlass_ops ............ [NO] ....... [OKAY]
transformer_inference .. [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
ragged_device_ops ...... [NO] ....... [OKAY]
ragged_ops ............. [NO] ....... [OKAY]
random_ltd ............. [NO] ....... [OKAY]
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.3
[WARNING] using untested triton version (2.3.0), only 1.0.0 is known to be compatible
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch install path ............... ['/home/nixos/peftai/.venv/lib/python3.11/site-packages/torch']
torch version .................... 2.3.0+cu121
deepspeed install path ........... ['/home/nixos/peftai/.venv/lib/python3.11/site-packages/deepspeed']
deepspeed info ................... 0.14.2, unknown, unknown
torch cuda version ............... 12.1
torch hip version ................ None
nvcc version ..................... 12.2
deepspeed wheel compiled w. ...... torch 0.0, cuda 0.0
shared memory (/dev/shm) size .... 15.67 GB
```
**Screenshots**
Not applicable
**System info (please complete the following information):**
- OS: NixOS unstable
- GPU count and types: 1 × GeForce RTX 3060
- Hugging Face Transformers/Accelerate/etc. versions
- see Additional context
- Python version
- Any other relevant info about your setup
**Docker context**
Not using Docker
**Additional context**
```
accelerate==0.23.0
aiofiles==23.2.1
aiohttp==3.8.6
aiohttp-cors==0.7.0
aiosignal==1.3.13
annotated-types==0.6.0
anyio==4.3.0
argon2-cffi==23.1.0
argon2-cffi-bindings==21.2.0
arrow==1.3.0
asttokens==2.4.0
async-lru==2.0.4
async-timeout==4.0.3
asyncstdlib==3.10.9
attrs==23.1.0
autoawq==0.2.5
autoawq_kernels==0.0.6
autoflake==2.2.1
azure-cli==2.60.0
Babel==2.14.0
backcall==0.2.0
beautifulsoup4==4.12.2
bitsandbytes==0.43.0
black==24.3.0
bleach==6.1.0
cached_classproperty==1.0.1
cachetools==5.3.1
certifi==2023.7.22
cffi==1.16.0
charset-normalizer==3.3.0
click==8.1.7
cloudpickle==3.0.0
cmake==3.29.2
colorful==0.5.6
comm==0.1.4
coverage==7.5.1
cryptography==41.0.4
datasets==2.18.0
debugpy==1.8.1
decorator==5.1.1
deepmerge==2.0b0
deepspeed==0.14.2
defusedxml==0.7.1
dill==0.3.8
diskcache==5.6.3
distlib==0.3.8
distro==1.9.0
ecdsa==0.18.0
einops==0.7.0
executing==2.0.0
fastapi==0.110.0
fastjsonschema==2.18.1
filelock==3.12.4
flash-attn @ https://github.com/Dao-AILab/flash-attention/releases/download/v2.5.8/flash_attn-2.5.8+cu122torch2.3cxx11abiFALSE-cp311-cp311-linux_x86_64.whl
fqdn==1.5.1
frozenlist==1.4.0
fsspec==2023.9.2
google-api-core==2.8.0
google-auth==2.29.0
googleapis-common-protos==1.56.1
gptcache==0.1.42
grpcio==1.63.0
guidance==0.0.64
h11==0.14.0
hiredis==2.2.3
hjson==3.1.0
httpcore==1.0.5
httptools==0.6.1
httpx==0.27.0
huggingface-hub==0.19.4
idna==3.4
immutables==0.20
iniconfig==2.0.0
interegular==0.3.3
ipykernel==6.25.2
ipython==8.16.1
ipywidgets==8.1.2
isoduration==20.11.0
isort==5.13.2
jaraco.functools==3.9.0
jedi==0.19.1
Jinja2==3.1.2
joblib==1.3.2
json5==0.9.24
jsonpointer==2.4
jsonschema==4.19.1
jsonschema-specifications==2023.7.1
jupyter==1.0.0
jupyter-console==6.6.3
jupyter-events==0.10.0
jupyter-lsp==2.2.4
jupyter_client==8.4.0
jupyter_core==5.4.0
jupyter_server==2.13.0
jupyter_server_terminals==0.5.3
jupyterlab==4.1.5
jupyterlab-pygments==0.2.2
jupyterlab_server==2.25.4
jupyterlab_widgets==3.0.10
lark==1.1.9
lazy-object-proxy==1.10.0
linkify-it-py==2.0.3
llvmlite==0.42.0
lm-format-enforcer==0.9.8
markdown-it-py==3.0.0
MarkupSafe==2.1.3
matplotlib-inline==0.1.6
mdit-py-plugins==0.4.1
mdurl==0.1.2
memray==1.12.0
mistune==3.0.2
more-itertools==9.1.0
mpmath==1.3.0
msal==1.24.1
msgpack==1.0.8
multidict==6.0.4
multiprocess==0.70.16
mypy-extensions==1.0.0
nbclient==0.8.0
nbconvert==7.9.2
nbformat==5.9.2
nbval==0.11.0
nest-asyncio==1.5.8
networkx==3.1
ninja==1.11.1.1
nodeenv==1.8.0
notebook==7.1.2
notebook_shim==0.2.4
numba==0.59.1
numpy==1.26.0
nvidia-cublas-cu12==12.1.3.1
nvidia-cuda-cupti-cu12==12.1.105
nvidia-cuda-nvrtc-cu12==12.1.105
nvidia-cuda-runtime-cu12==12.1.105
nvidia-cudnn-cu12==8.9.2.26
nvidia-cufft-cu12==11.0.2.54
nvidia-curand-cu12==10.3.2.106
nvidia-cusolver-cu12==11.4.5.107
nvidia-cusparse-cu12==12.1.0.106
nvidia-ml-py==12.550.52
nvidia-nccl-cu12==2.20.5
nvidia-nvjitlink-cu12==12.4.99
nvidia-nvtx-cu12==12.1.105
openai==1.25.2
opencensus==0.11.4
opencensus-context==0.1.3
outlines==0.0.34
overrides==7.7.0
packaging==23.2
pandas==2.2.1
pandocfilters==1.5.0
parso==0.8.3
pathspec==0.12.1
peft==0.5.0
pexpect==4.8.0
pickleshare==0.7.5
platformdirs==3.11.0
pluggy==1.5.0
poetry==1.8.3
pre_commit==3.7.1
prometheus-fastapi-instrumentator==7.0.0
prometheus_client==0.20.0
prompt-toolkit==3.0.39
protobuf==5.26.0
psutil==5.9.5
ptyprocess==0.7.0
pure-eval==0.2.2
py-cord==2.4.1
py-cpuinfo==9.0.0
py-spy==0.3.14
pyarrow==15.0.2
pyarrow-hotfix==0.6
pyasn1==0.5.0
pyasn1_modules==0.4.0
pycparser==2.21
pydantic==2.7.3
pydantic_core==2.18.4
pyflakes==3.1.0
pyflyby==1.9.2
Pygments==2.16.1
pygtrie==2.5.0
PyJWT==2.8.0
pynvml==11.5.0
pyparsing==3.1.1
pyright==1.1.359
PySide6==6.6.3
PySide6_Addons==6.6.3
PySide6_Essentials==6.6.3
pytest==8.2.0
python-dateutil==2.8.2
python-dotenv==1.0.1
python-jose==3.3.0
python-json-logger==2.0.7
python-ulid==1.1.0
pytz==2024.1
pyxll==5.8.0
pyxll_jupyter==0.5.2
PyYAML==6.0.1
pyzmq==25.1.1
qtconsole==5.5.1
QtPy==2.4.1
ray==2.23.0
redis==4.6.0
redis-om==0.3.1
referencing==0.30.2
regex==2023.10.3
requests==2.31.0
rfc3339-validator==0.1.4
rfc3986-validator==0.1.1
rich==13.7.1
rpds-py==0.10.6
rsa==4.9
safetensors==0.4.2
scipy==1.11.3
Send2Trash==1.8.2
sentencepiece==0.2.0
shiboken6==6.6.3
six==1.16.0
smart-open==7.0.4
sniffio==1.3.1
soupsieve==2.5
stack-data==0.6.3
starlette==0.36.3
sympy==1.12
terminado==0.18.1
textual==0.65.2
tiktoken==0.6.0
tinycss2==1.2.1
tokenizers==0.19.1
toml==0.10.2
torch==2.3.0
tornado==6.3.3
tqdm==4.66.1
traitlets==5.11.2
transformers==4.40.1
triton==2.3.0
typeguard==4.1.5
types-pyOpenSSL==23.2.0.2
types-python-dateutil==2.9.0.20240316
types-redis==4.6.0.7
typing_extensions==4.8.0
tzdata==2024.1
uc-micro-py==1.0.3
uri-template==1.3.0
urllib3==2.0.6
uvicorn==0.29.0
uvloop==0.19.0
virtualenv==20.26.2
vllm==0.4.2
vllm_nccl_cu12==2.18.1.0.4.0
vulnix==1.10.2.dev0
watchfiles==0.21.0
wcwidth==0.2.8
webcolors==1.13
webencodings==0.5.1
websocket-client==1.7.0
websockets==12.0
widgetsnbextension==4.0.10
wrapt==1.16.0
xformers==0.0.26.post1
xxhash==3.4.1
yarl==1.9.2
zstandard==0.22.0
```
| open | 2024-06-10T21:52:59Z | 2024-06-10T22:14:11Z | https://github.com/deepspeedai/DeepSpeed/issues/5636 | [
"bug",
"compression"
] | Atry | 1 |
gunthercox/ChatterBot | machine-learning | 2,209 | Help a newbee please. | Hi, I'm a very new dev, and I'm working on a WhatsApp bot.
```
OSError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_1864/3484406297.py in <module>
93 conv = ['oi','olá','Tudo bem?','Estou bem!','O que você gosta de fazer?','Gosto de estudar Python e você?']
94 #No método train do Chatterbot o mesmo é treinado.
---> 95 botzin = wppbot('oi')
96 botzin.treina(conv)
~\AppData\Local\Temp/ipykernel_1864/3484406297.py in __init__(self, nome_bot)
14 def __init__(self, nome_bot):
15 #Setamos nosso bot e a forma que ele irá treinar.
---> 16 self.bot = ChatBot(nome_bot)
17 self.bot.set_trainer(ListTrainer)
18 #Setamos onde está nosso chromedriver.
~\anaconda3\envs\chatbot\lib\site-packages\chatterbot\chatterbot.py in __init__(self, name, **kwargs)
26 self.logic_adapters = []
27
---> 28 self.storage = utils.initialize_class(storage_adapter, **kwargs)
29
30 primary_search_algorithm = IndexedTextSearch(self, **kwargs)
~\anaconda3\envs\chatbot\lib\site-packages\chatterbot\utils.py in initialize_class(data, *args, **kwargs)
31 Class = import_module(data)
32
---> 33 return Class(*args, **kwargs)
34
35
~\anaconda3\envs\chatbot\lib\site-packages\chatterbot\storage\sql_storage.py in __init__(self, **kwargs)
18
19 def __init__(self, **kwargs):
---> 20 super().__init__(**kwargs)
21
22 from sqlalchemy import create_engine
~\anaconda3\envs\chatbot\lib\site-packages\chatterbot\storage\storage_adapter.py in __init__(self, *args, **kwargs)
19
20 self.tagger = PosLemmaTagger(language=kwargs.get(
---> 21 'tagger_language', languages.ENG
22 ))
23
~\anaconda3\envs\chatbot\lib\site-packages\chatterbot\tagging.py in __init__(self, language)
11 self.punctuation_table = str.maketrans(dict.fromkeys(string.punctuation))
12
---> 13 self.nlp = spacy.load(self.language.ISO_639_1.lower())
14
15 def get_bigram_pair_string(self, text):
~\anaconda3\envs\chatbot\lib\site-packages\spacy\__init__.py in load(name, **overrides)
19 if depr_path not in (True, False, None):
20 deprecation_warning(Warnings.W001.format(path=depr_path))
---> 21 return util.load_model(name, **overrides)
22
23
~\anaconda3\envs\chatbot\lib\site-packages\spacy\util.py in load_model(name, **overrides)
117 elif hasattr(name, 'exists'): # Path or Path-like to model data
118 return load_model_from_path(name, **overrides)
--> 119 raise IOError(Errors.E050.format(name=name))
120
121
OSError: [E050] Can't find model 'en'. It doesn't seem to be a shortcut link, a Python package or a valid path to a data directory.
```
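For context, the final `OSError` (`[E050]`) above means spaCy could not resolve the model name `en`: this ChatterBot version maps its language setting straight to `spacy.load('en')`, which only works if a package or shortcut link named `en` is installed. A rough, simplified stdlib sketch of that kind of lookup (hypothetical; not spaCy's actual code):

```python
import importlib.util

def resolve_model(name):
    """Sketch of spaCy-style resolution: the model name must be an
    importable package (or a shortcut link); otherwise fail like E050."""
    if importlib.util.find_spec(name) is not None:
        return name
    raise OSError(f"[E050] Can't find model '{name}'.")

print(resolve_model("json"))  # an importable package name resolves fine
try:
    resolve_model("en")       # no package named 'en' -> E050-style error
except OSError as exc:
    print(exc)
```

On older ChatterBot/spaCy stacks the usual fix is installing and linking an English model (for example via `python -m spacy download en` on spaCy 2.x), but check the exact versions you have installed.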
I'm getting this error when creating the class and defining my bot's name. Am I doing something wrong?
```python
import os
import time
import re
from chatterbot.trainers import ListTrainer
from chatterbot import ChatBot
from selenium import webdriver


class wppbot:
    dir_path = os.getcwd()

    def __init__(self, nome_bot):
        self.bot = ChatBot(nome_bot)
        self.bot.set_trainer(ListTrainer)
        self.chrome = self.dir_path + '\chromedriver.exe'
        self.options = webdriver.ChromeOptions()
        self.options.add_argument(r"user-data-dir=" + self.dir_path + "\profile\wpp")
        self.driver = webdriver.Chrome(self.chrome, chrome_options=self.options)

    def inicia(self, nome_contato):
        self.driver.get('https://web.whatsapp.com/')
        self.driver.implicitly_wait(15)
        self.caixa_de_pesquisa = self.driver.find_element_by_class_name('jN-F5')
        self.caixa_de_pesquisa.send_keys(nome_contato)
        time.sleep(2)
        self.contato = self.driver.find_element_by_xpath('//span[@title = "{}"]'.format(nome_contato))
        self.contato.click()
        time.sleep(2)

    def saudacao(self, frase_inicial):
        self.caixa_de_mensagem = self.driver.find_element_by_class_name('_2S1VP')
        if type(frase_inicial) == list:
            for frase in frase_inicial:
                self.caixa_de_mensagem.send_keys(frase)
                time.sleep(1)
                self.botao_enviar = self.driver.find_element_by_class_name('_35EW6')
                self.botao_enviar.click()
                time.sleep(1)
        else:
            return False

    def escuta(self):
        post = self.driver.find_elements_by_class_name('_3_7SH')
        ultimo = len(post) - 1
        texto = post[ultimo].find_element_by_css_selector('span.selectable-text').text
        return texto

    def responde(self, texto):
        response = self.bot.get_response(texto)
        response = str(response)
        response = 'bot: ' + response
        self.caixa_de_mensagem = self.driver.find_element_by_class_name('_2S1VP')
        self.caixa_de_mensagem.send_keys(response)
        time.sleep(1)
        self.botao_enviar = self.driver.find_element_by_class_name('_35EW6')
        self.botao_enviar.click()

    def treina(self, nome_pasta):
        for treino in os.listdir(nome_pasta):
            conversas = open(nome_pasta + '/' + treino, 'r').readlines()
            self.bot.train(conversas)


conv = ['oi', 'olá', 'Tudo bem?', 'Estou bem!', 'O que você gosta de fazer?', 'Gosto de estudar Python e você?']
botzin = wppbot('oi')
botzin.treina(conv)
```
Sorry to take your time; I hope it isn't a dumb issue. Thanks for your attention. | closed | 2021-10-22T14:49:07Z | 2021-12-09T12:01:48Z | https://github.com/gunthercox/ChatterBot/issues/2209 | [] | GabrielMendesdc | 1 |
pyppeteer/pyppeteer | automation | 407 | How to send a websocket frame | Hello,
How can I send a WebSocket frame using pyppeteer? I found this, but it's in JavaScript:
```
const prototype = await page.evaluateHandle("WebSocket.prototype");
const socketInstances = await page.queryObjects(prototype);
await page.evaluate((instances) => {
let instance = instances[0];
instance.send('Hello');
}, socketInstances);
```
| closed | 2022-09-10T18:33:54Z | 2023-01-03T14:01:19Z | https://github.com/pyppeteer/pyppeteer/issues/407 | [] | vinifr | 2 |
freqtrade/freqtrade | python | 10,800 | Downloading trades is always 0%. | ## Describe your environment
* Operating system: Mac OS
* Python Version: Python 3.9.6 (`python -V`)
* CCXT version: Nothing (`pip freeze | grep ccxt`)
* Freqtrade Version: freqtrade 2024.9 (`freqtrade -V` or `docker compose run --rm freqtrade -V` for Freqtrade running in docker)
## Your question
I ran:
`docker-compose run --rm freqtrade download-data --timeframes 1h --exchange binance --pairs BTC/USDT:USDT`
Everything looks OK, except that the download progress is always 0%.
<img width="1001" alt="Screenshot 2024-10-15 at 23 27 29" src="https://github.com/user-attachments/assets/68641e4d-1d02-4dc3-912b-a2a0f727b3a0">
So I tried to use a proxy with Docker Compose, writing the docker-compose.yml with:
```
environment:
  - HTTP_PROXY=http://127.0.0.1:7890
  - HTTPS_PROXY=http://127.0.0.1:7890
  - NO_PROXY=localhost,127.0.0.1
```
This is also not working, even though
`curl -x http://127.0.0.1:7890 -L http://google.com`
works on the host.
I also noticed that on timeout it shows a URL, e.g.
"https://fapi.binance.com/fapi/v1/aggTrades?symbol=BTCUSDT&limit=1000&fromId=2264573484", which I can visit in Chrome.
So, is there any way I can download data through a proxy? Does Freqtrade support downloading backtest data manually? Or do I need to find a server that can directly access google.com?
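(For context: inside a container, `127.0.0.1` refers to the container itself, not to the host where the proxy is listening, which would explain why the `curl` on the host works while the container times out. A hedged sketch of pointing the container at the host's proxy instead; `host.docker.internal` is built into Docker Desktop, and on Linux it can be mapped via `extra_hosts`:)

```yaml
services:
  freqtrade:
    environment:
      - HTTP_PROXY=http://host.docker.internal:7890
      - HTTPS_PROXY=http://host.docker.internal:7890
      - NO_PROXY=localhost,127.0.0.1
    extra_hosts:
      - "host.docker.internal:host-gateway"
```

Also make sure the proxy on the host is listening on an interface reachable from containers, not only on 127.0.0.1.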
| closed | 2024-10-15T15:47:44Z | 2024-10-15T16:02:25Z | https://github.com/freqtrade/freqtrade/issues/10800 | [
"Question"
] | beiluo97 | 3 |
polakowo/vectorbt | data-visualization | 544 | portfolio['some_ticker'].stats() : call in parallel | Hi,
Once the portfolio is created, I need to iterate each column/ticker and call .stats on it individually.
This is really slow and takes ages if the number of tickers is large.
Is there any way to optimize this, to get the .stats for all the columns in one operation?
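While waiting for a vectorbt-native answer, one generic workaround is to parallelize the per-column calls yourself. A minimal stdlib sketch (the `compute_stats` callable below is a stand-in for `portfolio[col].stats()`, not vectorbt code; the actual speedup depends on how much of the work releases the GIL):

```python
from concurrent.futures import ThreadPoolExecutor

def stats_for_all(columns, compute_stats, max_workers=8):
    """Run compute_stats(col) concurrently and return {col: result}."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return dict(zip(columns, pool.map(compute_stats, columns)))

# toy stand-in for portfolio[col].stats()
result = stats_for_all(["AAPL", "MSFT"], lambda col: {"ticker": col})
print(result)  # {'AAPL': {'ticker': 'AAPL'}, 'MSFT': {'ticker': 'MSFT'}}
```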
I read somewhere that we can use `use_ray=True`, but I have no clue where or how to use that in this context. | closed | 2022-12-18T14:39:28Z | 2024-03-16T10:41:01Z | https://github.com/polakowo/vectorbt/issues/544 | [
"stale"
] | wordjelly | 2 |
postmanlabs/httpbin | api | 483 | The deployment is currently unavailable | I constantly get this error when using httpbin.org...
```
curl -X GET "https://httpbin.org/get" -H "accept: application/json"
{"status":"503","description":"The deployment is currently unavailable"}
``` | closed | 2018-07-06T11:08:48Z | 2018-07-06T11:19:16Z | https://github.com/postmanlabs/httpbin/issues/483 | [] | it-can | 2 |
JaidedAI/EasyOCR | machine-learning | 1,117 | When running with GPU, the following error occurs: RuntimeError: generic_type: type "_CudaDeviceProperties" is already registered! | 
| open | 2023-08-16T07:02:17Z | 2023-08-16T07:02:17Z | https://github.com/JaidedAI/EasyOCR/issues/1117 | [] | xiayuer0114 | 0 |
Textualize/rich | python | 3,371 | How to create multiline description for progress bar | I've already found that I can add an extra value using `{task.fields[message]}`.
My goal is to display something like this:
```
Custom value text
Description ================ (progress bar)
```
How can I add custom fields above the progress bar so that they stick to the bottom of the terminal together with the progress? | closed | 2024-06-04T21:31:03Z | 2024-06-05T15:58:26Z | https://github.com/Textualize/rich/issues/3371 | [] | pingpongterminator300 | 3 |
sktime/sktime | scikit-learn | 7,887 | [BUG] `_safe_import` does not work if `pkg_name` is passed | `_safe_import` does not work if `pkg_name` is passed.
The reason is simple: `path_list` is accessed before the variable exists.
This is a very basic failure due to missing tests, which we should add.
A simple failing example, which should also be added as a test:
```python
from pytorch_forecasting.utils._dependencies import _safe_import
BaseObject = _safe_import("skbase.base.BaseObject", pkg_name="scikit-base")
```
We should add at least eight tests:
* with and without `pkg_name`
* in a case where package name is identical to import name, or not
* case where imported object exists, or does not exist
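For reference, here is a minimal stdlib sketch of what such a helper can look like (hypothetical; not sktime's actual implementation). It covers the combinations listed above; note that `pkg_name` is only the pip-installable name used for messages and must never be used as the import path:

```python
import importlib

def safe_import(path, pkg_name=None):
    """Import `path` ('module.sub.attr' or 'module'); return None if missing.

    `pkg_name` (e.g. 'scikit-base' for import name 'skbase') is only
    informational here; the import always uses `path`.
    """
    module_path, _, attr = path.rpartition(".")
    try:
        module = importlib.import_module(module_path or path)
    except ImportError:
        return None
    return getattr(module, attr, None) if module_path else module

print(safe_import("json.dumps", pkg_name="json") is not None)    # True
print(safe_import("no_such_pkg.Thing", pkg_name="no-such-pkg"))  # None
```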
FYI @jgyasu | closed | 2025-02-23T12:19:32Z | 2025-03-06T19:57:53Z | https://github.com/sktime/sktime/issues/7887 | [
"bug",
"module:base-framework"
] | fkiraly | 1 |
rgerum/pylustrator | matplotlib | 38 | File Not Found | I receive the following error when attempting to run the example.
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\<>\\AppData\\Local\\Temp\\ipykernel_16232\\1628941459.py'
How do I resolve this?
Thanks. | closed | 2022-06-05T11:53:50Z | 2022-06-06T17:55:37Z | https://github.com/rgerum/pylustrator/issues/38 | [] | ftippett | 6 |
iperov/DeepFaceLab | deep-learning | 5,492 | SAEHD training on GPU runs the pause command after start in Terminal | Hello,
My PC: Acer aspire 7, Core i 7 9th generation, nvidia geforce GTX 1050, Windows 10 home
When I run SAEHD training on the GPU, it runs the pause command and says something like "Press any key to continue..." right after starting. On the CPU everything works fine!
My batch size is 4!
My CMD is in German, but look:

"Drücken sie eine belibige Taste..." mean "Press any key to continue..."
Thanks for your help 😀! | open | 2022-03-13T09:31:14Z | 2023-06-08T23:18:48Z | https://github.com/iperov/DeepFaceLab/issues/5492 | [] | Pips01 | 6 |
xlwings/xlwings | automation | 2,028 | The description in main.py | Windows 11
xlwings:0.24.7, Excel:16.0, Python: 3.9.5
In main.py, the docstring of class App reads: "An app corresponds to an Excel instance and should normally be used as context manager to make sure that everything is properly cleaned **uup** again and to prevent zombie processes."
Obviously, the word "uup" is wrong; it should be "up".
| closed | 2022-09-25T12:23:11Z | 2022-09-26T15:11:47Z | https://github.com/xlwings/xlwings/issues/2028 | [] | cwjcw | 1 |
dsdanielpark/Bard-API | api | 88 | Why the KeyError "image" happens | <img width="830" alt="image" src="https://github.com/dsdanielpark/Bard-API/assets/82095274/b9333272-0083-4bc4-8268-df068e3c0bb4">
I'm wondering why this error pops up. Even if we use try/except to handle it, nothing changes: if I input normal text like "hi", it returns nothing, since it is still recognized as an error.
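(For what it's worth, the screenshot suggests a response dict is indexed with a key like `"image"` unconditionally; the generic defensive pattern below avoids a `KeyError` for such optional keys. This only illustrates the pattern and is not Bard-API's actual response layout:)

```python
def extract_image(response):
    """Return the optional 'image' entry, or None if the key is absent."""
    return response.get("image")

print(extract_image({"content": "hi"}))                      # None
print(extract_image({"content": "hi", "image": "img.png"}))  # img.png
```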
Thanks a lot. | closed | 2023-07-01T12:41:25Z | 2023-07-03T06:30:21Z | https://github.com/dsdanielpark/Bard-API/issues/88 | [] | Xiansssss | 2 |
pallets-eco/flask-wtf | flask | 459 | FlaskForm object returns wrong data after dynamically changed with JavaScript | My environment: Python 3.9.2, Flask==2.0.1, Flask-WTF==0.15.1
I have a simple form:
```python
class RunReportForm(FlaskForm):
    report_name_1 = BooleanField('First report')
    report_name_2 = BooleanField('Second report')
    additional = StringField('Additional reports')
    run = SubmitField('Run')
```
In my views, I import it and use it:
```python
@app.route('/', methods=['GET', 'POST'])
def index():
    form = RunReportForm(request.form)
    if form.validate_on_submit():
        print([itm for itm in form])
    return render_template('index.html', form=form)
```
I also have a JS script: when you type anything into the StringField and the input reaches a comma, it adds a new BooleanField to the form with the label text up to that comma (and clears that StringField). However, when you submit the form, instead of giving me the latest state of the form, it gives me its initial state (i.e., the one defined in RunReportForm). | closed | 2021-07-19T20:35:57Z | 2021-08-04T00:35:16Z | https://github.com/pallets-eco/flask-wtf/issues/459 | [] | NimaBavari | 1 |
desec-io/desec-stack | rest-api | 265 | Don't forward PDNSException status codes to API user | https://github.com/desec-io/desec-stack/blob/master/api/desecapi/exceptions.py#L10
E.g., if `Domain.keys()` raises `PDNSException`, the user will get HTTP 404; it should be 500. | closed | 2019-11-15T19:48:57Z | 2024-10-07T16:53:19Z | https://github.com/desec-io/desec-stack/issues/265 | [
"bug",
"api"
] | peterthomassen | 0 |
LAION-AI/Open-Assistant | python | 2,912 | Chats disappeared | It looks like conversations have disappeared. Instead, I was met with commercials for your partners plastered across where the conversations were stored. | closed | 2023-04-25T23:22:24Z | 2023-04-28T23:36:07Z | https://github.com/LAION-AI/Open-Assistant/issues/2912 | [] | einarpetersen | 2 |
man-group/arctic | pandas | 194 | Feature Request: Read recent K items before T | Suppose the trading hours are `09:00 ~ 15:00` every weekday. After time `T`, a new 5-min candlestick covering `[T-5min, T)` is computed and appended to an Arctic store (`ChunkStore` or `VersionStore`).
Now at 09:05 on Monday, I would like to get the latest 3 5-min candlesticks of a symbol:
```
bar between [14:50, 14:55) on last Friday
bar between [14:55, 15:00) on last Friday
bar between [09:00, 09:05) on this Monday
```
I can't find a public read API to execute this kind of query, i.e.:
**read last 3 records before 09_05_this__monday**
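As of this issue there is no such built-in read API. For illustration, the query itself is simple over an in-memory, timestamp-sorted series using the stdlib (a sketch only; the point of the request is to have Arctic do this server-side without reading the whole symbol):

```python
import bisect
from datetime import datetime

def last_k_before(rows, t, k):
    """rows: list of (timestamp, bar) sorted by timestamp.
    Return the last k rows strictly before time t."""
    timestamps = [ts for ts, _ in rows]
    i = bisect.bisect_left(timestamps, t)
    return rows[max(0, i - k):i]

rows = [
    (datetime(2016, 8, 5, 14, 50), "bar [14:50, 14:55) last Friday"),
    (datetime(2016, 8, 5, 14, 55), "bar [14:55, 15:00) last Friday"),
    (datetime(2016, 8, 8, 9, 0),   "bar [09:00, 09:05) this Monday"),
]
print(last_k_before(rows, datetime(2016, 8, 8, 9, 5), 3))  # all three bars above
```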
| closed | 2016-08-05T16:04:08Z | 2019-01-04T10:25:45Z | https://github.com/man-group/arctic/issues/194 | [
"wontfix"
] | mckelvin | 1 |
geopandas/geopandas | pandas | 3,531 | BUG: df.hvplot.points(size=) error | - [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of geopandas.
- [ ] (optional) I have confirmed this bug exists on the main branch of geopandas.
---
#### Code Sample, a copy-pastable example
```python
>>> g = [Point(x,y) for x,y in np.array([(0.801, 0.31),(0.264, 0.242),(0.356, 0.147)])]
>>> p = np.array([107, 67, 67])
>>> df = GeoDataFrame({'geometry': g, 'p': p})
>>> df.hvplot(size='p')
ValueError [Call holoviews.ipython.show_traceback() for details]
Screen sizes must be positive
```
#### Problem description
This errors when it should display a plot.
#### Expected Output
I would expect this not to error and the sizes of the points to be adjusted.
It seems like something might be doing a unique somewhere, because if I change `p` to `[107, 67, 66]` (I changed the last entry from 67 to 66), this works.
I understand that it is possible (necessary?) to do `sizes=dim('p')`, however I get the same issue (again, when I use `[107, 67, 66]`, it works, however the sizes of the dots are not the same as without `dim`)
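As a possible interim workaround (a generic sketch, unrelated to geopandas/hvplot internals), the raw values can be mapped to a strictly positive screen-size range before plotting, which also sidesteps whatever the unique-related code path is doing:

```python
def scale_sizes(values, out_min=5.0, out_max=20.0):
    """Map raw magnitudes onto a strictly positive size range."""
    lo, hi = min(values), max(values)
    if hi == lo:  # all values equal -> one mid-range size for every point
        return [(out_min + out_max) / 2.0] * len(values)
    span = out_max - out_min
    return [out_min + span * (v - lo) / (hi - lo) for v in values]

print(scale_sizes([107, 67, 67]))  # [20.0, 5.0, 5.0]
```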
#### Output of ``geopandas.show_versions()``
<details>
SYSTEM INFO
-----------
python : 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
executable : /venv/bin/python3
machine : Linux-6.8.10-200.fc39.x86_64-x86_64-with-glibc2.35
GEOS, GDAL, PROJ INFO
---------------------
GEOS : 3.11.4
GEOS lib : None
GDAL : 3.9.1
GDAL data dir: /venv/lib/python3.10/site-packages/pyogrio/gdal_data/
PROJ : 9.4.1
PROJ data dir: /venv/lib/python3.10/site-packages/pyproj/proj_dir/share/proj
PYTHON DEPENDENCIES
-------------------
geopandas : 1.0.1
hvplot : 0.11.2
numpy : 1.26.3
pandas : 2.2.3
pyproj : 3.7.0
shapely : 2.0.6
pyogrio : 0.10.0
geoalchemy2: None
geopy : 2.4.1
matplotlib : 3.8.2
mapclassify: None
fiona : None
psycopg : 3.2.3
psycopg2 : None
pyarrow : None
</details>
| closed | 2025-03-21T18:12:59Z | 2025-03-21T20:26:55Z | https://github.com/geopandas/geopandas/issues/3531 | [
"bug",
"needs triage"
] | scott-vsi | 3 |
pytest-dev/pytest-xdist | pytest | 635 | Add new --dist option 'loadgroup' | ### Intro
There are currently several options for distributing tests,
but there is still no suitable option for the following cases:
### Case 1
In this case, it is efficient to divide all tests into different sessions.
```python
@pytest.mark.parametrize('param', [A, B, C, D])
def test_something_heavy(param):
    do_something_heavy_test
```
### Case 2
In this case, it is efficient to run all tests in the same session.
```python
def test_something_light_1(heavy_fixture_cannot_filelock):
    do_something_light_test

def test_something_light_2(heavy_fixture_cannot_filelock):
    do_something_light_test

def test_something_light_3(heavy_fixture_cannot_filelock):
    do_something_light_test
```
### Limit
If you use the loadscope option, all tests in Case 1 are performed in the same session;
if the load option is used, the tests of Case 2 may be performed in different sessions.
### Suggestion
Use the following group mark and specify the name through the parameter.
Then, tests with the same name are executed in the same session.
```python
@pytest.mark.group(name="same_session")
def test_something_light_1(heavy_fixture_cannot_filelock):
    do_something_light_test

@pytest.mark.group(name="same_session")
def test_something_light_2(heavy_fixture_cannot_filelock):
    do_something_light_test

@pytest.mark.group(name="same_session")
def test_something_light_3(heavy_fixture_cannot_filelock):
    do_something_light_test
```
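The intended semantics can be illustrated with a toy scheduler (a hypothetical sketch only; pytest-xdist's real scheduling is more involved): ungrouped tests are spread round-robin, while tests sharing a group name always land on the worker that first received that group.

```python
from collections import defaultdict
from itertools import cycle

def assign_workers(tests, n_workers):
    """tests: list of (test_id, group_name_or_None) -> {worker: [test_id, ...]}."""
    workers = defaultdict(list)
    next_worker = cycle(range(n_workers))
    group_to_worker = {}
    for test_id, group in tests:
        if group is None:
            worker = next(next_worker)
        else:
            # first test of a group picks a worker; the rest follow it
            worker = group_to_worker.setdefault(group, next(next_worker))
        workers[worker].append(test_id)
    return dict(workers)

tests = [
    ("test_something_heavy[A]", None),
    ("test_something_heavy[B]", None),
    ("test_something_light_1", "same_session"),
    ("test_something_light_2", "same_session"),
    ("test_something_light_3", "same_session"),
]
print(assign_workers(tests, 2))
```

(pytest-xdist has since shipped a similar feature as `--dist loadgroup` with the `@pytest.mark.xdist_group` mark.)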
| closed | 2021-03-15T01:56:29Z | 2021-03-15T02:00:32Z | https://github.com/pytest-dev/pytest-xdist/issues/635 | [] | dohyeop-sub | 0 |
pyqtgraph/pyqtgraph | numpy | 2,216 | Annotations off-screen affect the autoscale "Visible Data Only" even when they're not visible | ### Short description
### Code to reproduce
In https://github.com/pyqtgraph/pyqtgraph/blob/master/pyqtgraph/examples/text.py, zoom the x-axis to the region from 10--20, and try to autoscale the y-axis with the "Visible Data Only" box checked.
Note that commenting out lines 24 and 29 fixes the issue, proving that the TextItem and ArrowItem are what's causing the problem.
### Expected behavior
The y-axis should scale from around -0.1 to 0.1.
### Real behavior
The y-axis will be scaled from around -0.1 to 1.3 to include the "This is the peak" text and arrow.
### Tested environment(s)
* PyQtGraph version: 0.12.4
* Qt Python binding: PySide6 6.2.3 Qt 6.2.3
* Python version: 3.9.7
* NumPy version: 1.21.4
* Operating system: macOS Big Sur 11.6.4
* Installation method: pip | open | 2022-03-07T17:09:11Z | 2022-03-07T17:09:11Z | https://github.com/pyqtgraph/pyqtgraph/issues/2216 | [] | EfremBraun | 0 |
sebp/scikit-survival | scikit-learn | 439 | 'cosine' kernel in FastKernelSurvivalSVM still in documentation but not working in 0.22.2 | **Describe the bug**
In `scikit-survival/sksurv/svm/survival_svm.py`:

'cosine' kernel is not included in the 'kernel' options, but it is still described in the documentation.
**Versions**
 | closed | 2024-03-19T09:25:06Z | 2024-04-02T15:33:55Z | https://github.com/sebp/scikit-survival/issues/439 | [] | aliciaolivaresgil | 0 |
litestar-org/polyfactory | pydantic | 366 | Bug: Broken union type generation since "polyfactory<2.6" (pydantic_factory) | ### Description
Value generation for `union_field: str | list[str]` types is broken since polyfactory 2.6 (it still worked correctly with `polyfactory<2.6` and the old pydantic_factory).
Sample of broken results:
```
{'union_field': ['xHTRynXkQXaHKksrCLan', ['mlAkGEPvArmUXfHMUDvh']]}
{'union_field': ['AOQqrBoBIUXjkkDazQMu', ['EDNDtAdsaLdPVrSjwrDo']]}
{'union_field': ['cVPkYHEIYQOEVCbYEOiS', ['evgTnOLFzcVsbaZWjmim']]}
```
### URL to code causing the issue
_No response_
### MCVE
```python
import pydantic
from polyfactory.factories.pydantic_factory import ModelFactory

class UnionModel(pydantic.BaseModel):
    union_field: str | list[str]

class UnionFactory(ModelFactory):
    __model__ = UnionModel

for _ in range(100):
    print(UnionFactory.process_kwargs())
```
### Steps to reproduce
1. Install `polyfactory<2.6`
2. Run MCVE
3. Notice correct results (sample):
```
{'union_field': ['pxmZfDPIiXJBzmcMiDFC']}
{'union_field': ['qmaITGzIrhtIbwXSNCHF']}
{'union_field': ['FTTgnfVwmySLgdbylTkQ']}
{'union_field': ['EonBSyUuDseCjXhuzONc']}
{'union_field': ['QqBzlNBMKRrlLuEmiDBl']}
{'union_field': 'IfiTUXnKFaCgnrvCnEpi'}
{'union_field': ['GZaprKiCagdtrSNciVQa']}
```
4. Install `polyfactory>=2.6`
5. Run MCVE
6. Notice incorrect results (sample):
```
{'union_field': ['xHTRynXkQXaHKksrCLan', ['mlAkGEPvArmUXfHMUDvh']]}
{'union_field': ['AOQqrBoBIUXjkkDazQMu', ['EDNDtAdsaLdPVrSjwrDo']]}
{'union_field': ['cVPkYHEIYQOEVCbYEOiS', ['evgTnOLFzcVsbaZWjmim']]}
```
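For reference, the expected union semantics can be sketched with the stdlib: each generated value must come from exactly one branch of `str | list[str]`, never a nested mix (an illustration only, unrelated to polyfactory's internals):

```python
import random
import string

def random_str(rng, length=20):
    return "".join(rng.choice(string.ascii_letters) for _ in range(length))

def generate_union(rng):
    """str | list[str]: pick exactly one branch of the union."""
    if rng.random() < 0.5:
        return random_str(rng)
    return [random_str(rng)]

rng = random.Random(0)
samples = [generate_union(rng) for _ in range(10)]
# every sample is either a plain str or a flat list of str
assert all(
    isinstance(s, str) or (isinstance(s, list) and all(isinstance(x, str) for x in s))
    for s in samples
)
```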
### Screenshots
_No response_
### Logs
_No response_
### Release Version
polyfactory>=2.6,<3
### Platform
- [ ] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | closed | 2023-09-15T17:52:10Z | 2025-03-20T15:53:07Z | https://github.com/litestar-org/polyfactory/issues/366 | [
"bug"
] | realitycheck | 4 |
microsoft/MMdnn | tensorflow | 184 | Converted ResNet-50 from MXNet is predicting labels incorrectly | Ubuntu 14.04
Python version: 2.7
Tensorflow 1.4.0 with GPU
Pre-trained model path: download using mmdownload
Running scripts:
```
mkdir checkpoint
mmdownload -f mxnet -n imagenet1k-resnet-50 -o ./
mmtoir -f mxnet -n resnet-50-symbol.json -w resnet-50-0000.params -d resnet50 --inputShape 3 299 299
mmtocode -f tensorflow --IRModelPath resnet50.pb --IRWeightPath resnet50.npy --dstModelPath mx_resnet50.py
python -m mmdnn.conversion.examples.tensorflow.imagenet_test -n mx_resnet50.py -w resnet50.npy --dump checkpoint/mx_resnet50.ckpt
```
I successfully got mx_resnet50.py
```
import tensorflow as tf
__weights_dict = dict()
is_train = False
def load_weights(weight_file):
import numpy as np
if weight_file == None:
return
try:
weights_dict = np.load(weight_file).item()
except:
weights_dict = np.load(weight_file, encoding='bytes').item()
return weights_dict
def KitModel(weight_file = None):
global __weights_dict
__weights_dict = load_weights(weight_file)
data = tf.placeholder(tf.float32, shape = (None, 299, 299, 3), name = 'data')
bn_data = batch_normalization(data, variance_epsilon=1.99999994948e-05, name='bn_data')
conv0_pad = tf.pad(bn_data, paddings = [[0L, 0L], [3L, 3L], [3L, 3L], [0L, 0L]])
conv0 = convolution(conv0_pad, group=1, strides=[2, 2], padding='VALID', name='conv0')
bn0 = batch_normalization(conv0, variance_epsilon=1.99999994948e-05, name='bn0')
relu0 = tf.nn.relu(bn0, name = 'relu0')
pooling0_pad = tf.pad(relu0, paddings = [[0L, 0L], [1L, 1L], [1L, 1L], [0L, 0L]], constant_values=float('-Inf'))
pooling0 = tf.nn.max_pool(pooling0_pad, [1, 3, 3, 1], [1, 2, 2, 1], padding='VALID', name='pooling0')
stage1_unit1_bn1 = batch_normalization(pooling0, variance_epsilon=1.99999994948e-05, name='stage1_unit1_bn1')
stage1_unit1_relu1 = tf.nn.relu(stage1_unit1_bn1, name = 'stage1_unit1_relu1')
stage1_unit1_conv1 = convolution(stage1_unit1_relu1, group=1, strides=[1, 1], padding='VALID', name='stage1_unit1_conv1')
stage1_unit1_sc = convolution(stage1_unit1_relu1, group=1, strides=[1, 1], padding='VALID', name='stage1_unit1_sc')
stage1_unit1_bn2 = batch_normalization(stage1_unit1_conv1, variance_epsilon=1.99999994948e-05, name='stage1_unit1_bn2')
stage1_unit1_relu2 = tf.nn.relu(stage1_unit1_bn2, name = 'stage1_unit1_relu2')
stage1_unit1_conv2_pad = tf.pad(stage1_unit1_relu2, paddings = [[0L, 0L], [1L, 1L], [1L, 1L], [0L, 0L]])
stage1_unit1_conv2 = convolution(stage1_unit1_conv2_pad, group=1, strides=[1, 1], padding='VALID', name='stage1_unit1_conv2')
stage1_unit1_bn3 = batch_normalization(stage1_unit1_conv2, variance_epsilon=1.99999994948e-05, name='stage1_unit1_bn3')
stage1_unit1_relu3 = tf.nn.relu(stage1_unit1_bn3, name = 'stage1_unit1_relu3')
stage1_unit1_conv3 = convolution(stage1_unit1_relu3, group=1, strides=[1, 1], padding='VALID', name='stage1_unit1_conv3')
plus0 = stage1_unit1_conv3 + stage1_unit1_sc
stage1_unit2_bn1 = batch_normalization(plus0, variance_epsilon=1.99999994948e-05, name='stage1_unit2_bn1')
stage1_unit2_relu1 = tf.nn.relu(stage1_unit2_bn1, name = 'stage1_unit2_relu1')
stage1_unit2_conv1 = convolution(stage1_unit2_relu1, group=1, strides=[1, 1], padding='VALID', name='stage1_unit2_conv1')
stage1_unit2_bn2 = batch_normalization(stage1_unit2_conv1, variance_epsilon=1.99999994948e-05, name='stage1_unit2_bn2')
stage1_unit2_relu2 = tf.nn.relu(stage1_unit2_bn2, name = 'stage1_unit2_relu2')
stage1_unit2_conv2_pad = tf.pad(stage1_unit2_relu2, paddings = [[0L, 0L], [1L, 1L], [1L, 1L], [0L, 0L]])
stage1_unit2_conv2 = convolution(stage1_unit2_conv2_pad, group=1, strides=[1, 1], padding='VALID', name='stage1_unit2_conv2')
stage1_unit2_bn3 = batch_normalization(stage1_unit2_conv2, variance_epsilon=1.99999994948e-05, name='stage1_unit2_bn3')
stage1_unit2_relu3 = tf.nn.relu(stage1_unit2_bn3, name = 'stage1_unit2_relu3')
stage1_unit2_conv3 = convolution(stage1_unit2_relu3, group=1, strides=[1, 1], padding='VALID', name='stage1_unit2_conv3')
plus1 = stage1_unit2_conv3 + plus0
stage1_unit3_bn1 = batch_normalization(plus1, variance_epsilon=1.99999994948e-05, name='stage1_unit3_bn1')
stage1_unit3_relu1 = tf.nn.relu(stage1_unit3_bn1, name = 'stage1_unit3_relu1')
stage1_unit3_conv1 = convolution(stage1_unit3_relu1, group=1, strides=[1, 1], padding='VALID', name='stage1_unit3_conv1')
stage1_unit3_bn2 = batch_normalization(stage1_unit3_conv1, variance_epsilon=1.99999994948e-05, name='stage1_unit3_bn2')
stage1_unit3_relu2 = tf.nn.relu(stage1_unit3_bn2, name = 'stage1_unit3_relu2')
stage1_unit3_conv2_pad = tf.pad(stage1_unit3_relu2, paddings = [[0L, 0L], [1L, 1L], [1L, 1L], [0L, 0L]])
stage1_unit3_conv2 = convolution(stage1_unit3_conv2_pad, group=1, strides=[1, 1], padding='VALID', name='stage1_unit3_conv2')
stage1_unit3_bn3 = batch_normalization(stage1_unit3_conv2, variance_epsilon=1.99999994948e-05, name='stage1_unit3_bn3')
stage1_unit3_relu3 = tf.nn.relu(stage1_unit3_bn3, name = 'stage1_unit3_relu3')
stage1_unit3_conv3 = convolution(stage1_unit3_relu3, group=1, strides=[1, 1], padding='VALID', name='stage1_unit3_conv3')
plus2 = stage1_unit3_conv3 + plus1
stage2_unit1_bn1 = batch_normalization(plus2, variance_epsilon=1.99999994948e-05, name='stage2_unit1_bn1')
stage2_unit1_relu1 = tf.nn.relu(stage2_unit1_bn1, name = 'stage2_unit1_relu1')
stage2_unit1_conv1 = convolution(stage2_unit1_relu1, group=1, strides=[1, 1], padding='VALID', name='stage2_unit1_conv1')
stage2_unit1_sc = convolution(stage2_unit1_relu1, group=1, strides=[2, 2], padding='VALID', name='stage2_unit1_sc')
stage2_unit1_bn2 = batch_normalization(stage2_unit1_conv1, variance_epsilon=1.99999994948e-05, name='stage2_unit1_bn2')
stage2_unit1_relu2 = tf.nn.relu(stage2_unit1_bn2, name = 'stage2_unit1_relu2')
stage2_unit1_conv2_pad = tf.pad(stage2_unit1_relu2, paddings = [[0L, 0L], [1L, 1L], [1L, 1L], [0L, 0L]])
stage2_unit1_conv2 = convolution(stage2_unit1_conv2_pad, group=1, strides=[2, 2], padding='VALID', name='stage2_unit1_conv2')
stage2_unit1_bn3 = batch_normalization(stage2_unit1_conv2, variance_epsilon=1.99999994948e-05, name='stage2_unit1_bn3')
stage2_unit1_relu3 = tf.nn.relu(stage2_unit1_bn3, name = 'stage2_unit1_relu3')
stage2_unit1_conv3 = convolution(stage2_unit1_relu3, group=1, strides=[1, 1], padding='VALID', name='stage2_unit1_conv3')
plus3 = stage2_unit1_conv3 + stage2_unit1_sc
stage2_unit2_bn1 = batch_normalization(plus3, variance_epsilon=1.99999994948e-05, name='stage2_unit2_bn1')
stage2_unit2_relu1 = tf.nn.relu(stage2_unit2_bn1, name = 'stage2_unit2_relu1')
stage2_unit2_conv1 = convolution(stage2_unit2_relu1, group=1, strides=[1, 1], padding='VALID', name='stage2_unit2_conv1')
stage2_unit2_bn2 = batch_normalization(stage2_unit2_conv1, variance_epsilon=1.99999994948e-05, name='stage2_unit2_bn2')
stage2_unit2_relu2 = tf.nn.relu(stage2_unit2_bn2, name = 'stage2_unit2_relu2')
stage2_unit2_conv2_pad = tf.pad(stage2_unit2_relu2, paddings = [[0L, 0L], [1L, 1L], [1L, 1L], [0L, 0L]])
stage2_unit2_conv2 = convolution(stage2_unit2_conv2_pad, group=1, strides=[1, 1], padding='VALID', name='stage2_unit2_conv2')
stage2_unit2_bn3 = batch_normalization(stage2_unit2_conv2, variance_epsilon=1.99999994948e-05, name='stage2_unit2_bn3')
stage2_unit2_relu3 = tf.nn.relu(stage2_unit2_bn3, name = 'stage2_unit2_relu3')
stage2_unit2_conv3 = convolution(stage2_unit2_relu3, group=1, strides=[1, 1], padding='VALID', name='stage2_unit2_conv3')
plus4 = stage2_unit2_conv3 + plus3
stage2_unit3_bn1 = batch_normalization(plus4, variance_epsilon=1.99999994948e-05, name='stage2_unit3_bn1')
stage2_unit3_relu1 = tf.nn.relu(stage2_unit3_bn1, name = 'stage2_unit3_relu1')
stage2_unit3_conv1 = convolution(stage2_unit3_relu1, group=1, strides=[1, 1], padding='VALID', name='stage2_unit3_conv1')
stage2_unit3_bn2 = batch_normalization(stage2_unit3_conv1, variance_epsilon=1.99999994948e-05, name='stage2_unit3_bn2')
stage2_unit3_relu2 = tf.nn.relu(stage2_unit3_bn2, name = 'stage2_unit3_relu2')
stage2_unit3_conv2_pad = tf.pad(stage2_unit3_relu2, paddings = [[0L, 0L], [1L, 1L], [1L, 1L], [0L, 0L]])
stage2_unit3_conv2 = convolution(stage2_unit3_conv2_pad, group=1, strides=[1, 1], padding='VALID', name='stage2_unit3_conv2')
stage2_unit3_bn3 = batch_normalization(stage2_unit3_conv2, variance_epsilon=1.99999994948e-05, name='stage2_unit3_bn3')
stage2_unit3_relu3 = tf.nn.relu(stage2_unit3_bn3, name = 'stage2_unit3_relu3')
stage2_unit3_conv3 = convolution(stage2_unit3_relu3, group=1, strides=[1, 1], padding='VALID', name='stage2_unit3_conv3')
plus5 = stage2_unit3_conv3 + plus4
stage2_unit4_bn1 = batch_normalization(plus5, variance_epsilon=1.99999994948e-05, name='stage2_unit4_bn1')
stage2_unit4_relu1 = tf.nn.relu(stage2_unit4_bn1, name = 'stage2_unit4_relu1')
stage2_unit4_conv1 = convolution(stage2_unit4_relu1, group=1, strides=[1, 1], padding='VALID', name='stage2_unit4_conv1')
stage2_unit4_bn2 = batch_normalization(stage2_unit4_conv1, variance_epsilon=1.99999994948e-05, name='stage2_unit4_bn2')
stage2_unit4_relu2 = tf.nn.relu(stage2_unit4_bn2, name = 'stage2_unit4_relu2')
stage2_unit4_conv2_pad = tf.pad(stage2_unit4_relu2, paddings = [[0L, 0L], [1L, 1L], [1L, 1L], [0L, 0L]])
stage2_unit4_conv2 = convolution(stage2_unit4_conv2_pad, group=1, strides=[1, 1], padding='VALID', name='stage2_unit4_conv2')
stage2_unit4_bn3 = batch_normalization(stage2_unit4_conv2, variance_epsilon=1.99999994948e-05, name='stage2_unit4_bn3')
stage2_unit4_relu3 = tf.nn.relu(stage2_unit4_bn3, name = 'stage2_unit4_relu3')
stage2_unit4_conv3 = convolution(stage2_unit4_relu3, group=1, strides=[1, 1], padding='VALID', name='stage2_unit4_conv3')
plus6 = stage2_unit4_conv3 + plus5
stage3_unit1_bn1 = batch_normalization(plus6, variance_epsilon=1.99999994948e-05, name='stage3_unit1_bn1')
stage3_unit1_relu1 = tf.nn.relu(stage3_unit1_bn1, name = 'stage3_unit1_relu1')
stage3_unit1_conv1 = convolution(stage3_unit1_relu1, group=1, strides=[1, 1], padding='VALID', name='stage3_unit1_conv1')
stage3_unit1_sc = convolution(stage3_unit1_relu1, group=1, strides=[2, 2], padding='VALID', name='stage3_unit1_sc')
stage3_unit1_bn2 = batch_normalization(stage3_unit1_conv1, variance_epsilon=1.99999994948e-05, name='stage3_unit1_bn2')
stage3_unit1_relu2 = tf.nn.relu(stage3_unit1_bn2, name = 'stage3_unit1_relu2')
stage3_unit1_conv2_pad = tf.pad(stage3_unit1_relu2, paddings = [[0L, 0L], [1L, 1L], [1L, 1L], [0L, 0L]])
stage3_unit1_conv2 = convolution(stage3_unit1_conv2_pad, group=1, strides=[2, 2], padding='VALID', name='stage3_unit1_conv2')
stage3_unit1_bn3 = batch_normalization(stage3_unit1_conv2, variance_epsilon=1.99999994948e-05, name='stage3_unit1_bn3')
stage3_unit1_relu3 = tf.nn.relu(stage3_unit1_bn3, name = 'stage3_unit1_relu3')
stage3_unit1_conv3 = convolution(stage3_unit1_relu3, group=1, strides=[1, 1], padding='VALID', name='stage3_unit1_conv3')
plus7 = stage3_unit1_conv3 + stage3_unit1_sc
stage3_unit2_bn1 = batch_normalization(plus7, variance_epsilon=1.99999994948e-05, name='stage3_unit2_bn1')
stage3_unit2_relu1 = tf.nn.relu(stage3_unit2_bn1, name = 'stage3_unit2_relu1')
stage3_unit2_conv1 = convolution(stage3_unit2_relu1, group=1, strides=[1, 1], padding='VALID', name='stage3_unit2_conv1')
stage3_unit2_bn2 = batch_normalization(stage3_unit2_conv1, variance_epsilon=1.99999994948e-05, name='stage3_unit2_bn2')
stage3_unit2_relu2 = tf.nn.relu(stage3_unit2_bn2, name = 'stage3_unit2_relu2')
stage3_unit2_conv2_pad = tf.pad(stage3_unit2_relu2, paddings = [[0, 0], [1, 1], [1, 1], [0, 0]])
stage3_unit2_conv2 = convolution(stage3_unit2_conv2_pad, group=1, strides=[1, 1], padding='VALID', name='stage3_unit2_conv2')
stage3_unit2_bn3 = batch_normalization(stage3_unit2_conv2, variance_epsilon=1.99999994948e-05, name='stage3_unit2_bn3')
stage3_unit2_relu3 = tf.nn.relu(stage3_unit2_bn3, name = 'stage3_unit2_relu3')
stage3_unit2_conv3 = convolution(stage3_unit2_relu3, group=1, strides=[1, 1], padding='VALID', name='stage3_unit2_conv3')
plus8 = stage3_unit2_conv3 + plus7
stage3_unit3_bn1 = batch_normalization(plus8, variance_epsilon=1.99999994948e-05, name='stage3_unit3_bn1')
stage3_unit3_relu1 = tf.nn.relu(stage3_unit3_bn1, name = 'stage3_unit3_relu1')
stage3_unit3_conv1 = convolution(stage3_unit3_relu1, group=1, strides=[1, 1], padding='VALID', name='stage3_unit3_conv1')
stage3_unit3_bn2 = batch_normalization(stage3_unit3_conv1, variance_epsilon=1.99999994948e-05, name='stage3_unit3_bn2')
stage3_unit3_relu2 = tf.nn.relu(stage3_unit3_bn2, name = 'stage3_unit3_relu2')
stage3_unit3_conv2_pad = tf.pad(stage3_unit3_relu2, paddings = [[0, 0], [1, 1], [1, 1], [0, 0]])
stage3_unit3_conv2 = convolution(stage3_unit3_conv2_pad, group=1, strides=[1, 1], padding='VALID', name='stage3_unit3_conv2')
stage3_unit3_bn3 = batch_normalization(stage3_unit3_conv2, variance_epsilon=1.99999994948e-05, name='stage3_unit3_bn3')
stage3_unit3_relu3 = tf.nn.relu(stage3_unit3_bn3, name = 'stage3_unit3_relu3')
stage3_unit3_conv3 = convolution(stage3_unit3_relu3, group=1, strides=[1, 1], padding='VALID', name='stage3_unit3_conv3')
plus9 = stage3_unit3_conv3 + plus8
stage3_unit4_bn1 = batch_normalization(plus9, variance_epsilon=1.99999994948e-05, name='stage3_unit4_bn1')
stage3_unit4_relu1 = tf.nn.relu(stage3_unit4_bn1, name = 'stage3_unit4_relu1')
stage3_unit4_conv1 = convolution(stage3_unit4_relu1, group=1, strides=[1, 1], padding='VALID', name='stage3_unit4_conv1')
stage3_unit4_bn2 = batch_normalization(stage3_unit4_conv1, variance_epsilon=1.99999994948e-05, name='stage3_unit4_bn2')
stage3_unit4_relu2 = tf.nn.relu(stage3_unit4_bn2, name = 'stage3_unit4_relu2')
stage3_unit4_conv2_pad = tf.pad(stage3_unit4_relu2, paddings = [[0, 0], [1, 1], [1, 1], [0, 0]])
stage3_unit4_conv2 = convolution(stage3_unit4_conv2_pad, group=1, strides=[1, 1], padding='VALID', name='stage3_unit4_conv2')
stage3_unit4_bn3 = batch_normalization(stage3_unit4_conv2, variance_epsilon=1.99999994948e-05, name='stage3_unit4_bn3')
stage3_unit4_relu3 = tf.nn.relu(stage3_unit4_bn3, name = 'stage3_unit4_relu3')
stage3_unit4_conv3 = convolution(stage3_unit4_relu3, group=1, strides=[1, 1], padding='VALID', name='stage3_unit4_conv3')
plus10 = stage3_unit4_conv3 + plus9
stage3_unit5_bn1 = batch_normalization(plus10, variance_epsilon=1.99999994948e-05, name='stage3_unit5_bn1')
stage3_unit5_relu1 = tf.nn.relu(stage3_unit5_bn1, name = 'stage3_unit5_relu1')
stage3_unit5_conv1 = convolution(stage3_unit5_relu1, group=1, strides=[1, 1], padding='VALID', name='stage3_unit5_conv1')
stage3_unit5_bn2 = batch_normalization(stage3_unit5_conv1, variance_epsilon=1.99999994948e-05, name='stage3_unit5_bn2')
stage3_unit5_relu2 = tf.nn.relu(stage3_unit5_bn2, name = 'stage3_unit5_relu2')
stage3_unit5_conv2_pad = tf.pad(stage3_unit5_relu2, paddings = [[0, 0], [1, 1], [1, 1], [0, 0]])
stage3_unit5_conv2 = convolution(stage3_unit5_conv2_pad, group=1, strides=[1, 1], padding='VALID', name='stage3_unit5_conv2')
stage3_unit5_bn3 = batch_normalization(stage3_unit5_conv2, variance_epsilon=1.99999994948e-05, name='stage3_unit5_bn3')
stage3_unit5_relu3 = tf.nn.relu(stage3_unit5_bn3, name = 'stage3_unit5_relu3')
stage3_unit5_conv3 = convolution(stage3_unit5_relu3, group=1, strides=[1, 1], padding='VALID', name='stage3_unit5_conv3')
plus11 = stage3_unit5_conv3 + plus10
stage3_unit6_bn1 = batch_normalization(plus11, variance_epsilon=1.99999994948e-05, name='stage3_unit6_bn1')
stage3_unit6_relu1 = tf.nn.relu(stage3_unit6_bn1, name = 'stage3_unit6_relu1')
stage3_unit6_conv1 = convolution(stage3_unit6_relu1, group=1, strides=[1, 1], padding='VALID', name='stage3_unit6_conv1')
stage3_unit6_bn2 = batch_normalization(stage3_unit6_conv1, variance_epsilon=1.99999994948e-05, name='stage3_unit6_bn2')
stage3_unit6_relu2 = tf.nn.relu(stage3_unit6_bn2, name = 'stage3_unit6_relu2')
stage3_unit6_conv2_pad = tf.pad(stage3_unit6_relu2, paddings = [[0, 0], [1, 1], [1, 1], [0, 0]])
stage3_unit6_conv2 = convolution(stage3_unit6_conv2_pad, group=1, strides=[1, 1], padding='VALID', name='stage3_unit6_conv2')
stage3_unit6_bn3 = batch_normalization(stage3_unit6_conv2, variance_epsilon=1.99999994948e-05, name='stage3_unit6_bn3')
stage3_unit6_relu3 = tf.nn.relu(stage3_unit6_bn3, name = 'stage3_unit6_relu3')
stage3_unit6_conv3 = convolution(stage3_unit6_relu3, group=1, strides=[1, 1], padding='VALID', name='stage3_unit6_conv3')
plus12 = stage3_unit6_conv3 + plus11
stage4_unit1_bn1 = batch_normalization(plus12, variance_epsilon=1.99999994948e-05, name='stage4_unit1_bn1')
stage4_unit1_relu1 = tf.nn.relu(stage4_unit1_bn1, name = 'stage4_unit1_relu1')
stage4_unit1_conv1 = convolution(stage4_unit1_relu1, group=1, strides=[1, 1], padding='VALID', name='stage4_unit1_conv1')
stage4_unit1_sc = convolution(stage4_unit1_relu1, group=1, strides=[2, 2], padding='VALID', name='stage4_unit1_sc')
stage4_unit1_bn2 = batch_normalization(stage4_unit1_conv1, variance_epsilon=1.99999994948e-05, name='stage4_unit1_bn2')
stage4_unit1_relu2 = tf.nn.relu(stage4_unit1_bn2, name = 'stage4_unit1_relu2')
stage4_unit1_conv2_pad = tf.pad(stage4_unit1_relu2, paddings = [[0, 0], [1, 1], [1, 1], [0, 0]])
stage4_unit1_conv2 = convolution(stage4_unit1_conv2_pad, group=1, strides=[2, 2], padding='VALID', name='stage4_unit1_conv2')
stage4_unit1_bn3 = batch_normalization(stage4_unit1_conv2, variance_epsilon=1.99999994948e-05, name='stage4_unit1_bn3')
stage4_unit1_relu3 = tf.nn.relu(stage4_unit1_bn3, name = 'stage4_unit1_relu3')
stage4_unit1_conv3 = convolution(stage4_unit1_relu3, group=1, strides=[1, 1], padding='VALID', name='stage4_unit1_conv3')
plus13 = stage4_unit1_conv3 + stage4_unit1_sc
stage4_unit2_bn1 = batch_normalization(plus13, variance_epsilon=1.99999994948e-05, name='stage4_unit2_bn1')
stage4_unit2_relu1 = tf.nn.relu(stage4_unit2_bn1, name = 'stage4_unit2_relu1')
stage4_unit2_conv1 = convolution(stage4_unit2_relu1, group=1, strides=[1, 1], padding='VALID', name='stage4_unit2_conv1')
stage4_unit2_bn2 = batch_normalization(stage4_unit2_conv1, variance_epsilon=1.99999994948e-05, name='stage4_unit2_bn2')
stage4_unit2_relu2 = tf.nn.relu(stage4_unit2_bn2, name = 'stage4_unit2_relu2')
stage4_unit2_conv2_pad = tf.pad(stage4_unit2_relu2, paddings = [[0, 0], [1, 1], [1, 1], [0, 0]])
stage4_unit2_conv2 = convolution(stage4_unit2_conv2_pad, group=1, strides=[1, 1], padding='VALID', name='stage4_unit2_conv2')
stage4_unit2_bn3 = batch_normalization(stage4_unit2_conv2, variance_epsilon=1.99999994948e-05, name='stage4_unit2_bn3')
stage4_unit2_relu3 = tf.nn.relu(stage4_unit2_bn3, name = 'stage4_unit2_relu3')
stage4_unit2_conv3 = convolution(stage4_unit2_relu3, group=1, strides=[1, 1], padding='VALID', name='stage4_unit2_conv3')
plus14 = stage4_unit2_conv3 + plus13
stage4_unit3_bn1 = batch_normalization(plus14, variance_epsilon=1.99999994948e-05, name='stage4_unit3_bn1')
stage4_unit3_relu1 = tf.nn.relu(stage4_unit3_bn1, name = 'stage4_unit3_relu1')
stage4_unit3_conv1 = convolution(stage4_unit3_relu1, group=1, strides=[1, 1], padding='VALID', name='stage4_unit3_conv1')
stage4_unit3_bn2 = batch_normalization(stage4_unit3_conv1, variance_epsilon=1.99999994948e-05, name='stage4_unit3_bn2')
stage4_unit3_relu2 = tf.nn.relu(stage4_unit3_bn2, name = 'stage4_unit3_relu2')
stage4_unit3_conv2_pad = tf.pad(stage4_unit3_relu2, paddings = [[0, 0], [1, 1], [1, 1], [0, 0]])
stage4_unit3_conv2 = convolution(stage4_unit3_conv2_pad, group=1, strides=[1, 1], padding='VALID', name='stage4_unit3_conv2')
stage4_unit3_bn3 = batch_normalization(stage4_unit3_conv2, variance_epsilon=1.99999994948e-05, name='stage4_unit3_bn3')
stage4_unit3_relu3 = tf.nn.relu(stage4_unit3_bn3, name = 'stage4_unit3_relu3')
stage4_unit3_conv3 = convolution(stage4_unit3_relu3, group=1, strides=[1, 1], padding='VALID', name='stage4_unit3_conv3')
plus15 = stage4_unit3_conv3 + plus14
bn1 = batch_normalization(plus15, variance_epsilon=1.99999994948e-05, name='bn1')
relu1 = tf.nn.relu(bn1, name = 'relu1')
pool1 = tf.nn.avg_pool(relu1, [1] + relu1.get_shape().as_list()[1:-1] + [1], strides = [1] * 4, padding = 'VALID', name = 'pool1')
flatten0 = tf.contrib.layers.flatten(pool1)
fc1 = tf.layers.dense(flatten0, 1000, kernel_initializer = tf.constant_initializer(__weights_dict['fc1']['weights']), bias_initializer = tf.constant_initializer(__weights_dict['fc1']['bias']), use_bias = True)
softmax = tf.nn.softmax(fc1, name = 'softmax')
return data, softmax
def batch_normalization(input, name, **kwargs):
mean = tf.Variable(__weights_dict[name]['mean'], name = name + "_mean", trainable = is_train)
variance = tf.Variable(__weights_dict[name]['var'], name = name + "_var", trainable = is_train)
offset = tf.Variable(__weights_dict[name]['bias'], name = name + "_bias", trainable = is_train) if 'bias' in __weights_dict[name] else None
scale = tf.Variable(__weights_dict[name]['scale'], name = name + "_scale", trainable = is_train) if 'scale' in __weights_dict[name] else None
return tf.nn.batch_normalization(input, mean, variance, offset, scale, name = name, **kwargs)
def convolution(input, name, group, **kwargs):
w = tf.Variable(__weights_dict[name]['weights'], trainable=is_train, name=name + "_weight")
if group == 1:
layer = tf.nn.convolution(input, w, **kwargs)
else:
weight_groups = tf.split(w, num_or_size_splits=group, axis=-1)
xs = tf.split(input, num_or_size_splits=group, axis=-1)
convolved = [tf.nn.convolution(x, weight, **kwargs) for
(x, weight) in zip(xs, weight_groups)]
layer = tf.concat(convolved, axis=-1)
if 'bias' in __weights_dict[name]:
b = tf.Variable(__weights_dict[name]['bias'], trainable=is_train, name=name + "_bias")
layer = layer + b
return layer
```
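To sanity-check the grouped path in `convolution` above, here is a stdlib-only toy (my own illustration, independent of TensorFlow): splitting channels into groups with per-group weight blocks is equivalent to applying a block-diagonal weight matrix.

```python
# Toy 1x1 "grouped convolution": each group's weight block acts only on
# its own slice of the input channels, and the results are concatenated.
def grouped_matvec(blocks, x):
    out, start = [], 0
    for w in blocks:                       # one weight block per group
        chunk = x[start:start + len(w[0])]
        out += [sum(wi * xi for wi, xi in zip(row, chunk)) for row in w]
        start += len(w[0])
    return out

print(grouped_matvec([[[1, 0], [0, 1]], [[2, 0], [0, 2]]], [1, 2, 3, 4]))
# -> [1, 2, 6, 8]
```

If the per-group results came back in the wrong channel order, a constant wrong class (like the 818 below) is exactly the kind of symptom to expect.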
But when I load the weights and feed images, the output is always equal to 818.
Please help. | closed | 2018-05-07T12:56:56Z | 2018-07-05T05:01:35Z | https://github.com/microsoft/MMdnn/issues/184 | [] | LiYingwei | 3 |
albumentations-team/albumentations | deep-learning | 1,586 | [tech debt] Merge `ShiftScaleRotate` and `Affine` | Both do the same, but `Affine` is much faster.
1. Merge two classes.
2. Add a deprecation warning to `ShiftScaleRotate`. | closed | 2024-03-15T19:38:27Z | 2024-05-09T00:57:11Z | https://github.com/albumentations-team/albumentations/issues/1586 | [
"good first issue",
"Tech debt"
] | ternaus | 1 |
thtrieu/darkflow | tensorflow | 1,108 | libstdc++.so.6: version `GLIBCXX_3.4.22' not found | I am using Ubuntu 16.04 on VMware .
I have to do custom image detection on the class solar_images.
I already installed TensorFlow (the version required for this project).
I carefully installed OpenCV, Anaconda and Jupyter.
I run this project inside a dedicated environment.
But when executing the training command below, the first error that came up is described in this link:
https://github.com/thtrieu/darkflow/issues/1107
To solve this error, I read blogs and Stack Overflow answers, which suggested reinstalling TensorFlow.
I reinstalled TensorFlow, but it still did not work.
So I removed the current TensorFlow and installed a previous version.
After executing the command below, I get the errors shown in the picture.
I am executing this command:
python flow --model cfg/yolo-1c.cfg --load bin/yolo.weights --train --annotation new_model_data/annotations --dataset new_model_data/images --epoch 400
and this error comes out:

| closed | 2019-12-20T12:41:40Z | 2020-01-02T08:13:52Z | https://github.com/thtrieu/darkflow/issues/1108 | [] | ankitAMD | 2 |
raphaelvallat/pingouin | pandas | 39 | bayesfactor_pearson returns different results from correlationBF | See https://github.com/cran/BayesFactor/blob/0a1fe0bedf62549a466c9ec5db8b8b5a0217f0c6/R/correlationBF.R | closed | 2019-05-30T22:21:05Z | 2019-06-01T22:40:03Z | https://github.com/raphaelvallat/pingouin/issues/39 | [
"invalid :triangular_flag_on_post:"
] | raphaelvallat | 4 |
docarray/docarray | fastapi | 1,659 | docs: add "coming from langchain" section | ### Initial Checks
- [X] I have searched Google & GitHub for similar requests and couldn't find anything
- [X] I have read and followed [the docs](https://docs.docarray.org) and still think this feature is missing
### Description
Mention built-in vectorstores as well as DocArrayRetriever
### Affected Components
- [ ] [Vector Database / Index](https://docs.docarray.org/user_guide/storing/docindex/)
- [ ] [Representing](https://docs.docarray.org/user_guide/representing/first_step)
- [ ] [Sending](https://docs.docarray.org/user_guide/sending/first_step/)
- [ ] [storing](https://docs.docarray.org/user_guide/storing/first_step/)
- [ ] [multi modal data type](https://docs.docarray.org/data_types/first_steps/) | closed | 2023-06-19T11:09:35Z | 2023-06-19T15:12:56Z | https://github.com/docarray/docarray/issues/1659 | [] | jupyterjazz | 0 |
pandas-dev/pandas | data-science | 60,363 | DOC: Add examples for float_format in to_csv documentation | ### Pandas version checks
- [X] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_csv.html
### Documentation problem
The float_format parameter in to_csv is explained but lacks examples. Users might struggle to understand how to apply this parameter effectively without concrete examples in the documentation.
### Suggested fix for documentation
I suggest adding examples for float_format to make the documentation more beginner-friendly. Examples could include:
```
# Format floats to two decimal places
df.to_csv("example1.csv", float_format="%.2f")
# Use scientific notation
df.to_csv("example2.csv", float_format="{:.2e}".format)
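# Editor's note (stdlib-only sketch): these are the exact per-value
# transformations the two styles above apply to every float written out.
assert "%.2f" % 3.14159 == "3.14"              # printf-style format string
assert "{:.2e}".format(3.14159) == "3.14e+00"  # callable applied per value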
``` | closed | 2024-11-19T18:11:21Z | 2024-12-03T20:31:36Z | https://github.com/pandas-dev/pandas/issues/60363 | [
"Docs",
"IO CSV"
] | felicijo | 1 |
graphdeco-inria/gaussian-splatting | computer-vision | 646 | Train with a batch size | Hey,
If I've understood the code and the method correctly, the images are processed one by one and the Gaussian optimisations are performed iteratively. Is it conceptually possible to perform Gaussian splatting with a batch of images and Gaussians?
If so, would a lot of code have to be changed?
I imagine that the merging/splitting part of the gaussians has to be synchronised between GPUs, but are there any other constraints?
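To make the contrast concrete, here is a toy sketch (my own illustration in plain Python, not the project's CUDA rasterizer) of per-view updates versus one batched update that averages gradients across views before stepping:

```python
# Toy 1-parameter example: minimize (p - t)^2 over three "views" t.
targets = [1.0, 2.0, 3.0]
lr = 0.5

# Current behaviour: one gradient step per image/view.
p = 0.0
for t in targets:
    p -= lr * 2 * (p - t)          # grad of (p - t)^2 w.r.t. p

# Batched alternative: accumulate gradients over the batch, then one step.
q = 0.0
grad = sum(2 * (q - t) for t in targets) / len(targets)
q -= lr * grad

print(p, q)                        # 3.0 2.0 -- the two schedules differ
```

Note the two schedules reach different parameters after one pass, so batching is not a drop-in change; the batched step is also where densify/prune decisions would need to be synchronized, as anticipated above.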
Thank you in advance | closed | 2024-02-01T09:51:04Z | 2024-02-09T21:51:49Z | https://github.com/graphdeco-inria/gaussian-splatting/issues/646 | [] | LoickCh | 2 |
hpcaitech/ColossalAI | deep-learning | 6,112 | [BUG]: ColossalAI Inference example response empty result without error | ### Is there an existing issue for this bug?
- [X] I have searched the existing issues
### 🐛 Describe the bug
Git commit: 2f583c1549 (current master branch)
## Code (the example code from the ColossalAI inference README):
```
import torch
import transformers
import colossalai
from colossalai.inference import InferenceEngine, InferenceConfig
from pprint import pprint
colossalai.launch_from_torch()
model_path = "lmsys/vicuna-7b-v1.3"
model = transformers.LlamaForCausalLM.from_pretrained(model_path).cuda()
tokenizer = transformers.AutoTokenizer.from_pretrained(model_path)
inference_config = InferenceConfig(
dtype=torch.float16,
max_batch_size=4,
max_input_len=1024,
max_output_len=512,
use_cuda_kernel=True,
)
engine = InferenceEngine(model, tokenizer, inference_config, verbose=True)
prompts = ['Who is the best player in the history of NBA?']
response = engine.generate(prompts)
pprint(response)
```
## Run command:
colossalai run --nproc_per_node 1 speed.py
## Output:
```
/data/miniconda/envs/torch/lib/python3.10/site-packages/diffusers/models/transformers/transformer_2d.py:34: FutureWarning: `Transformer2DModelOutput` is deprecated and will be removed in version 1.0.0. Importing `Transformer2DModelOutput` from `diffusers.models.transformer_2d` is deprecated and this will be removed in a future version. Please use `from diffusers.models.modeling_outputs import Transformer2DModelOutput`, instead.
deprecate("Transformer2DModelOutput", "1.0.0", deprecation_message)
/data/coding/ColossalAI/colossalai/shardformer/layer/normalization.py:45: UserWarning: Please install apex from source (https://github.com/NVIDIA/apex) to use the fused RMSNorm kernel
warnings.warn("Please install apex from source (https://github.com/NVIDIA/apex) to use the fused RMSNorm kernel")
[11/04/24 11:04:32] INFO colossalai - colossalai - INFO:
/data/coding/ColossalAI/colossalai/initialize.py:75
launch
INFO colossalai - colossalai - INFO: Distributed
environment is initialized, world size: 1
/data/miniconda/envs/torch/lib/python3.10/site-packages/huggingface_hub/file_download.py:797: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
warnings.warn(
Loading checkpoint shards: 100%|██████████| 2/2 [00:17<00:00, 8.83s/it]
You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
[extension] Loading the JIT-built inference_ops_cuda kernel during runtime now
/data/miniconda/envs/torch/lib/python3.10/site-packages/torch/utils/cpp_extension.py:1967: UserWarning: TORCH_CUDA_ARCH_LIST is not set, all archs for visible cards are included for compilation.
If this is not desired, please set os.environ['TORCH_CUDA_ARCH_LIST'].
warnings.warn(
[extension] Time taken to load inference_ops_cuda op: 0.16129255294799805 seconds
[extension] Loading the JIT-built inference_ops_cuda kernel during runtime now
[extension] Time taken to load inference_ops_cuda op: 0.001485586166381836 seconds
[11/04/24 11:05:06] WARNING colossalai - colossalai.inference.utils - WARNING:
/data/coding/ColossalAI/colossalai/inference/utils.
py:162 can_use_flash_attn2
WARNING colossalai - colossalai.inference.utils - WARNING:
flash_attn2 has not been installed yet, we will use
triton flash attn instead.
[11/04/24 11:05:06] INFO colossalai - colossalai.inference.core.llm_engine -
INFO:
/data/coding/ColossalAI/colossalai/inference/core/l
lm_engine.py:158 init_model
INFO colossalai - colossalai.inference.core.llm_engine -
INFO: the device is cuda:0
INFO colossalai - colossalai.inference.core.llm_engine -
INFO:
/data/coding/ColossalAI/colossalai/inference/core/l
lm_engine.py:163 init_model
INFO colossalai - colossalai.inference.core.llm_engine -
INFO: Before the shard, Rank: [0], model size:
12.551277160644531 GB, model's device is: cuda:0
[extension] Loading the JIT-built inference_ops_cuda kernel during runtime now
[extension] Time taken to load inference_ops_cuda op: 0.0019431114196777344 seconds
[... the two lines above repeat ~60 more times with similar timings (≈0.7–1.9 ms each) ...]
[11/04/24 11:05:08] INFO colossalai - colossalai.inference.core.llm_engine -
INFO:
/data/coding/ColossalAI/colossalai/inference/core/l
lm_engine.py:193 init_model
INFO colossalai - colossalai.inference.core.llm_engine -
INFO: After the shard, Rank: [0], model size:
12.551277160644531 GB, model's device is: cuda:0
INFO colossalai - colossalai.inference.core.llm_engine -
INFO:
/data/coding/ColossalAI/colossalai/inference/core/l
lm_engine.py:208 init_model
INFO colossalai - colossalai.inference.core.llm_engine -
INFO: Rank [0], Model Weight Max Occupy 2.33984375
GB, Model size: 12.551277160644531 GB
[11/04/24 11:05:08] INFO colossalai -
colossalai.inference.kv_cache.kvcache_manager -
INFO:
/data/coding/ColossalAI/colossalai/inference/kv_cac
he/kvcache_manager.py:98 __init__
INFO colossalai -
colossalai.inference.kv_cache.kvcache_manager -
INFO: Allocating K cache with shape: (384, 32, 16,
16, 8), V cache with shape: (384, 32, 16, 128)
consisting of 384 blocks.
INFO colossalai -
colossalai.inference.kv_cache.kvcache_manager -
INFO:
/data/coding/ColossalAI/colossalai/inference/kv_cac
he/kvcache_manager.py:115 __init__
INFO colossalai -
colossalai.inference.kv_cache.kvcache_manager -
INFO: Allocated 3.00 GB of KV cache on device
cuda:0.
[]
====== Training on All Nodes =====
127.0.0.1: success
====== Stopping All Nodes =====
127.0.0.1: finish
```
### Environment
pytorch=2.3.1
python=3.10
nvidia-smi
V100 32G, with CUDA=12.4 | closed | 2024-11-04T03:06:27Z | 2025-01-08T03:51:52Z | https://github.com/hpcaitech/ColossalAI/issues/6112 | [
"bug"
] | GuangyaoZhang | 2 |
autogluon/autogluon | computer-vision | 4,541 | [BUG] uv pip install autogluon fails | **Bug Report Checklist**
<!-- Please ensure at least one of the following to help the developers troubleshoot the problem: -->
- [x] I provided code that demonstrates a minimal reproducible example. <!-- Ideal, especially via source install -->
- [ ] I confirmed bug exists on the latest mainline of AutoGluon via source install. <!-- Preferred -->
- [x] I confirmed bug exists on the latest stable version of AutoGluon. <!-- Unnecessary if prior items are checked -->
**Describe the bug**
I can't install autogluon with `uv pip install autogluon`; I get "No solution found when resolving dependencies".
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
It should be installed normally, without failing to resolve dependencies
**To Reproduce**
<!-- A minimal script to reproduce the issue. Links to Colab notebooks or similar tools are encouraged.
If the code is too long, feel free to put it in a public gist and link it in the issue: https://gist.github.com.
In short, we are going to copy-paste your code to run it and we expect to get the same result as you. -->
Here is a link to the devcontainer.json file that raises this problem: [devcontainer.json](https://gist.github.com/gabrieltomasin/dcd7e3a7022e8a351ab98f83395c6fc4)
**Screenshots / Logs**
<!-- If applicable, add screenshots or logs to help explain your problem. -->
```
[26818 ms] Start: Run in container: /bin/sh -c pip install -U pip && pip install -U setuptools wheel && pip install -U uv && uv venv && uv pip install torch==2.3.1 torchvision==0.18.1 --index-url https://download.pytorch.org/whl/cpu
Defaulting to user installation because normal site-packages is not writeable
Requirement already satisfied: pip in /usr/local/lib/python3.11/site-packages (24.0)
Collecting pip
Downloading pip-24.2-py3-none-any.whl.metadata (3.6 kB)
Downloading pip-24.2-py3-none-any.whl (1.8 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.8/1.8 MB 7.6 MB/s eta 0:00:00
Installing collected packages: pip
Successfully installed pip-24.2
[notice] A new release of pip is available: 24.0 -> 24.2
[notice] To update, run: pip install --upgrade pip
Defaulting to user installation because normal site-packages is not writeable
Requirement already satisfied: setuptools in /usr/local/lib/python3.11/site-packages (69.0.3)
Collecting setuptools
Downloading setuptools-75.1.0-py3-none-any.whl.metadata (6.9 kB)
Requirement already satisfied: wheel in /usr/local/lib/python3.11/site-packages (0.44.0)
Downloading setuptools-75.1.0-py3-none-any.whl (1.2 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.2/1.2 MB 7.5 MB/s eta 0:00:00
Installing collected packages: setuptools
Successfully installed setuptools-75.1.0
Defaulting to user installation because normal site-packages is not writeable
Collecting uv
Downloading uv-0.4.21-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (11 kB)
Downloading uv-0.4.21-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (13.6 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 13.6/13.6 MB 8.8 MB/s eta 0:00:00
Installing collected packages: uv
Successfully installed uv-0.4.21
Using CPython 3.11.10 interpreter at: /usr/local/bin/python
Creating virtual environment at: .venv
× No solution found when resolving dependencies:
╰─▶ Because torch==2.3.1 has no wheels with a matching Python implementation tag and you require torch==2.3.1, we can conclude that your
requirements are unsatisfiable.
What's next:
Try Docker Debug for seamless, persistent debugging tools in any container or image → docker debug eb059f3934144d327768900406603a6312aef112c3
c3c74f9c78280df12bb2ce
Learn more at https://docs.docker.com/go/debug-cli/
[39802 ms] postCreateCommand failed with exit code 1. Skipping any further user-provided commands.
``` | closed | 2024-10-16T11:06:47Z | 2024-11-01T21:09:57Z | https://github.com/autogluon/autogluon/issues/4541 | [
"bug: unconfirmed",
"Needs Triage",
"install"
] | gabrieltomasin | 1 |
plotly/dash | data-visualization | 2,858 | [BUG] Fix overlay_style in dcc.Loading | dash>= 2.17.0
The `overlay_style` prop in `dcc.Loading` should apply only to the background and not the spinner component. You can see it in the docs - here is the example:
This could be tagged as a "Good First Issue". If someone doesn't get to it first, I think I can fix it :slightly_smiling_face:
```python
import time
from dash import Dash, Input, Output, callback, html, dcc, no_update
import dash_bootstrap_components as dbc
app = Dash(__name__, external_stylesheets=[dbc.themes.BOOTSTRAP])
app.layout = dbc.Container(
[
dbc.Button("Start", id="loading-overlay-button", n_clicks=0),
dcc.Loading(
[dbc.Alert("My Data", id="loading-overlay-output", className="h4 p-4 mt-3")],
overlay_style={"visibility":"visible", "filter": "blur(2px)"},
type="circle",
),
]
)
@callback(
Output("loading-overlay-output", "children"),
Input("loading-overlay-button", "n_clicks"),
)
def load_output(n):
if n:
time.sleep(1)
return f"Data updated {n} times."
return no_update
if __name__ == "__main__":
app.run(debug=True)
``` | closed | 2024-05-13T17:15:04Z | 2024-05-15T18:17:08Z | https://github.com/plotly/dash/issues/2858 | [
"bug",
"sev-2"
] | AnnMarieW | 0 |
pydantic/pydantic | pydantic | 10,683 | `exclude_defaults` is broken for Optional fields with `default_factory` when set to `None` | ### Initial Checks
- [X] I have searched GitHub for a duplicate issue and I'm sure this is something new
- [X] I have searched Google & StackOverflow for a solution and couldn't find anything
- [X] I have read and followed [the docs](https://docs.pydantic.dev) and still think this is a bug
- [X] I am confident that the issue is with pydantic (not my code, or another library in the ecosystem like [FastAPI](https://fastapi.tiangolo.com) or [mypy](https://mypy.readthedocs.io/en/stable))
### Description
Hello! First of all thank you for this fantastic framework and for maintaining the old version.
When having a model with an `Optional` field that has a `default_factory`, when that field is set to `None`, the resulting serialisation of that model lacks that field, instead of including it with the `None` value.
I've attached a test case inspired by the [official](https://github.com/pydantic/pydantic/blob/5ebcdc13b83fba5da34ad9b0f008f7b4faf89396/tests/test_main.py#L1106) one.
I believe that somewhere during the model creation, the `ModelField.default` is set to `None` instead of `Undefined`, breaking the exclude_defaults check at this [line](https://github.com/pydantic/pydantic/blob/5ebcdc13b83fba5da34ad9b0f008f7b4faf89396/pydantic/main.py#L857).
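The sentinel mix-up can be illustrated with plain Python; the helper below is illustrative, not pydantic's actual internals:

```python
class _Undefined:
    """Stand-in for pydantic's `Undefined` sentinel."""

UNDEFINED = _Undefined()

def excluded_as_default(value, recorded_default):
    # exclude_defaults-style check: drop the field when its value
    # equals whatever default was recorded on the ModelField
    return value == recorded_default

# if the recorded default collapses to None, an explicitly set None is dropped
assert excluded_as_default(None, None)

# with a proper sentinel, the explicit None survives exclude_defaults=True
assert not excluded_as_default(None, UNDEFINED)
```

That matches the test case below: the field should appear in the dump because `None` was set explicitly rather than produced by the factory.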
### Example Code
```Python
def test_exclude_defaults():
class Model(BaseModel):
nullable_default_factory: Optional[str] = Field(default_factory=lambda: "a")
m = Model(nullable_default_factory=None)
assert m.dict(exclude_defaults=True) == {
'nullable_default_factory': None,
}
```
### Python, Pydantic & OS Version
```Text
pydantic version: 1.10.18
pydantic compiled: True
install path: /tmp/latest-pydantic1.venv/lib/python3.11/site-packages/pydantic
python version: 3.11.2 (main, Aug 26 2024, 07:20:54) [GCC 12.2.0]
platform: Linux-6.1.0-25-amd64-x86_64-with-glibc2.36
optional deps. installed: ['typing-extensions']
```
### Affected Components
- [ ] [Compatibility between releases](https://docs.pydantic.dev/changelog/)
- [ ] [Data validation/parsing](https://docs.pydantic.dev/concepts/models/#basic-model-usage)
- [X] [Data serialization](https://docs.pydantic.dev/concepts/serialization/) - `.model_dump()` and `.model_dump_json()`
- [ ] [JSON Schema](https://docs.pydantic.dev/concepts/json_schema/)
- [ ] [Dataclasses](https://docs.pydantic.dev/concepts/dataclasses/)
- [ ] [Model Config](https://docs.pydantic.dev/concepts/config/)
- [ ] [Field Types](https://docs.pydantic.dev/api/types/) - adding or changing a particular data type
- [ ] [Function validation decorator](https://docs.pydantic.dev/concepts/validation_decorator/)
- [ ] [Generic Models](https://docs.pydantic.dev/concepts/models/#generic-models)
- [ ] [Other Model behaviour](https://docs.pydantic.dev/concepts/models/) - `model_construct()`, pickling, private attributes, ORM mode
- [ ] [Plugins](https://docs.pydantic.dev/) and integration with other tools - mypy, FastAPI, python-devtools, Hypothesis, VS Code, PyCharm, etc. | closed | 2024-10-22T09:32:39Z | 2024-10-24T10:50:40Z | https://github.com/pydantic/pydantic/issues/10683 | [
"bug V1"
] | z-tux-mind | 2 |
iperov/DeepFaceLive | machine-learning | 142 | converting LIA model to onnx | Hi there, I try to convert LIA model from pytorch to onnx but failed with unsupported operator like 'aten::qr'. I notice that you have successfully converted it and your converted onnx works fine (with only minor difference compared to original pytorch version). Can you share some insight about the conversion ?
best wishes | closed | 2023-03-02T02:29:42Z | 2023-03-02T04:48:56Z | https://github.com/iperov/DeepFaceLive/issues/142 | [] | linfang010 | 1 |
PablocFonseca/streamlit-aggrid | streamlit | 63 | How to use custom cell renderers? | I am using the staggrid component and want to embed buttons which then open up a window to show some details about the row, something like [this](https://www.ag-grid.com/javascript-data-grid/component-cell-renderer/#example-simple)
For this to work I would need to inject a custom cell renderer (= js class) it into the existing staggrid component to be able to use it. Does anybody know how this could be done?
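For what it's worth, streamlit-aggrid exposes a `JsCode` wrapper for passing a JS class as a string (the `AgGrid(...)` call then needs `allow_unsafe_jscode=True`). A sketch of what the renderer could look like — the button label and click handler are made up, and I haven't run this end to end:

```python
# The JS cell renderer is kept as a plain string. With streamlit-aggrid it
# would be wired up roughly like:
#   gb.configure_column("details", cellRenderer=JsCode(BTN_RENDERER_JS))
#   AgGrid(df, gridOptions=gb.build(), allow_unsafe_jscode=True)
BTN_RENDERER_JS = """
class BtnCellRenderer {
    init(params) {
        this.params = params;
        this.eGui = document.createElement('button');
        this.eGui.innerText = 'Details';
        // hypothetical click handler: show the row's data
        this.eGui.addEventListener('click', () => {
            alert(JSON.stringify(this.params.data));
        });
    }
    getGui() {
        return this.eGui;
    }
}
"""
```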
I’m not exactly a frontend developer, maybe it could also be a general javascript file that’s sort of globally defined for the streamlit website. I just need AgGrid to be able to see it. | closed | 2022-02-01T09:21:03Z | 2024-04-04T17:53:17Z | https://github.com/PablocFonseca/streamlit-aggrid/issues/63 | [
"enhancement",
"question"
] | thunderbug1 | 2 |
aimhubio/aim | tensorflow | 2,664 | Better scaling w.r.t. to the number of runs | ## 🚀 Feature
Improve scalability of Aim w.r.t. the number of runs.
### Motivation
I wanted to try Aim instead of TensorBoard, but it did not scale to my setup.
Namely, I have a repository similar to [this one](https://github.com/Yura52/tabular-dl-num-embeddings). TensorBoard files are stored next to the "DONE" files (for example, see [this](https://github.com/Yura52/tabular-dl-num-embeddings/tree/main/exp/mlp/california/0_evaluation/0) directory; and there are _many_ directories like this).
UPDATE: the specific number of runs I have is almost 4000.
I faced two issues:
1. (minor issue) A slow conversion from tensorboard to aim.
2. (critical issue) Aim UI does not allow watching the "runs" page because of "Too many open files".
(Though I understand that my use case may be out of scope for Aim)
P.S. It seems that both issues are caused by the storage model of Aim: it has a "central" storage, while TensorBoard does not have any storage and avoids all the related problems. Is my understanding correct? If it is, then I am curious what is the motivation behind going for the central storage? | open | 2023-04-19T18:17:28Z | 2023-06-16T05:46:13Z | https://github.com/aimhubio/aim/issues/2664 | [
"type / enhancement"
] | Yura52 | 4 |
plotly/dash | dash | 3,016 | [BUG] Make a minor release updating plotly bundle to 2.35.2 or newer to fix maplibre | I got the pip package of dash, version 2.18.1.
Would it be possible to make a new release that updated plotly from 2.35.0 to 2.35.2? We have an offline application, and the bundled plotly (v2.35.0) is trying to get maplibre-gl.js from some CDN, instead of having it bundled, and they fixed that on plotly 2.35.2, but the latest stable dash release has not been updated accordingly.
Best regards,
Arturo | closed | 2024-09-24T23:57:28Z | 2024-09-25T19:37:44Z | https://github.com/plotly/dash/issues/3016 | [] | pupitetris | 2 |
s3rius/FastAPI-template | asyncio | 89 | Add gunicorn startup option. | Gunicorn with uvicorn workers is faster than raw uvicorn. This feature might be useful for folks who want to gain more speed to their projects. | closed | 2022-06-21T12:20:38Z | 2023-07-31T13:09:50Z | https://github.com/s3rius/FastAPI-template/issues/89 | [] | s3rius | 3 |
ultralytics/ultralytics | computer-vision | 19,194 | segmentation labeling question - closed curve, donut | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Does segmentation have to be labeled as a closed curve, like `cls x1 y1 x2 y2 x3 y3 x1 y1`?
And when you have a doughnut-like shape, with the outer contour `ox1 oy1 ox2 oy2 ox3 oy3 ox4 oy4` and the inner contour `ix1 iy1 ix2 iy2 ix3 iy3`, should you bridge from the nearest outer point into the inner contour, traverse the inner contour as a closed loop, and then return? Is that right?
For example, when `ix1 iy1` is closest to `ox2 oy2`:
ox1 oy1 **ox2 oy2 ix1 iy1** ix2 iy2 ix3 iy3 **ix1 iy1 ox2 oy2** ox3 oy3 ox4 oy4
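If that reading is right, the stitching can be sketched in Python — pick the closest outer/inner vertex pair, walk the outer ring to that vertex, traverse the whole inner ring closing it back at its entry vertex, then finish the outer ring. This is just my interpretation of the format, not an official Ultralytics utility:

```python
import math

def stitch_donut(outer, inner):
    """Merge an outer ring and an inner (hole) ring into one closed polygon.

    Points are (x, y) tuples; rings are given without a repeated end point.
    """
    # closest outer/inner vertex pair -> bridge location
    i, j = min(
        ((a, b) for a in range(len(outer)) for b in range(len(inner))),
        key=lambda ab: math.dist(outer[ab[0]], inner[ab[1]]),
    )
    # inner ring rotated to start at j and closed back onto inner[j]
    inner_loop = inner[j:] + inner[:j] + [inner[j]]
    # outer up to the bridge, the closed inner loop, then the rest of the outer
    return outer[: i + 1] + inner_loop + outer[i:]

outer = [(0, 0), (10, 0), (10, 10), (0, 10)]
inner = [(4, 4), (6, 4), (6, 6), (4, 6)]
poly = stitch_donut(outer, inner)
```

The resulting flat list of coordinates would then be written after the class id on one label line.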
### Additional
_No response_ | open | 2025-02-12T02:51:56Z | 2025-02-12T04:35:35Z | https://github.com/ultralytics/ultralytics/issues/19194 | [
"question",
"segment"
] | Leo-aetech | 2 |
allenai/allennlp | nlp | 5,090 | Use PyTorch data loading infrastructure? | Hello,
I love AllenNLP and I incredibly appreciate the work that AllenNLP group at AI2 has done on this :)
However, I feel like recently I've been constantly dealing with bugs due to weird behaviours in the data loading process. So I started wondering: is there any specific reason why we had to reimplement the entire data loading logic? I was thinking of a very simple adaptation of the AllenNLP infrastructure. This is my personal opinion and might be heavily biased by my use cases, so please let me know if I'm wrong.
Currently, we implement a `DatasetReader` that deals with transforming text to `Instance` objects. Most of the time, these instances are incredibly big and occupy a lot of memory, making training on large datasets pretty much impossible. My experience with lazy loading hasn't been very successful so far, with workers hanging waiting for others to actually generate the instances.
So I was thinking of the following solution, which can be summarised as follows:
1. Implement a PyTorch `Dataset` or `IterableDataset` that handles the logic of reading the raw data. The user will decide whether they want a lazy dataset or not. I believe that this logic can be somehow abstracted away later on. For simplicity let's assume that the user decides this and not AllenNLP;
2. Implement a `DatasetTransform` that has a method `text_to_instance` which, given a raw input datum, returns an `Instance` object;
3. The `__getitem__` of a `Dataset` calls in turn `text_to_instance` of the `DatasetTransform` specifically designed for that dataset. A `Dataset` will assume that the raw data have been loaded in memory as a list (or dictionary). An `IterableDataset` can easily be implemented following the default PyTorch logic;
4. The Trainer uses a PyTorch `DataLoader` to load the instances and uses `allennlp_collate_fn` to batch the data following the AllenNLP padding strategy.
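A stdlib-only sketch of that flow (names are illustrative; in a real implementation `TextDataset` would subclass `torch.utils.data.Dataset` and `collate` would be passed as `collate_fn` to a `DataLoader`):

```python
class DatasetTransform:
    """Turns one raw datum into an 'instance' (here: a list of token ids)."""

    def __init__(self, vocab):
        self.vocab = vocab

    def text_to_instance(self, text):
        # ids start at 1 so that 0 can serve as padding
        return [self.vocab.setdefault(tok, len(self.vocab) + 1) for tok in text.split()]


class TextDataset:
    """Map-style dataset: raw data held in memory, instances built per item."""

    def __init__(self, raw_data, transform):
        self.raw_data = raw_data
        self.transform = transform

    def __len__(self):
        return len(self.raw_data)

    def __getitem__(self, idx):
        return self.transform.text_to_instance(self.raw_data[idx])


def collate(batch, pad_id=0):
    """Pad all instances in a batch to the longest one (AllenNLP-style padding)."""
    width = max(len(inst) for inst in batch)
    return [inst + [pad_id] * (width - len(inst)) for inst in batch]


dataset = TextDataset(["a b c", "a c"], DatasetTransform({}))
batch = collate([dataset[0], dataset[1]])  # -> [[1, 2, 3], [1, 3, 0]]
```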
For all these components we can create AllenNLP registrable wrappers so that they can be easily integrated in an AllenNLP experiment. In this way, we can literally reuse the already robust and reliable backbone infrastructure that PyTorch offers while still benefiting from everything that you guys have already beautifully implemented.
@epwalsh @dirkgr @matt-gardner Am I oversimplifying this? I'd love to hear your thoughts on this!
Thanks a lot for your help,
Alessandro | closed | 2021-04-02T09:08:20Z | 2021-04-16T16:10:26Z | https://github.com/allenai/allennlp/issues/5090 | [
"Feature request",
"stale"
] | aleSuglia | 6 |
jupyter-incubator/sparkmagic | jupyter | 636 | [BUG] Python version >= 3.8.0 cannot install pysparkkernel via Jupyter | **Describe the bug**
Hello! I'm building a VS Code extension tool. I just found that with Python >= 3.8.0 I cannot install the PySpark kernel via Jupyter; it works if Python < 3.8.0. Our tool uses 'jupyter install sparkmagic' and 'jupyter.exe kernelspec install pysparkkernel'. As the picture shows (the link is here: https://pypi.org/project/sparkmagic/0.15.0/#history), it doesn't apply to Python 3.8.0. Do we have any plan to include Python 3.8 in the near future, or is there any method to work around this? Thanks a lot!



**To Reproduce**
Steps to reproduce the behavior.
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Versions:**
- SparkMagic
- Livy (if you know it)
- Spark
**Additional context**
Add any other context about the problem here.
| closed | 2020-03-30T14:16:25Z | 2021-06-05T19:33:48Z | https://github.com/jupyter-incubator/sparkmagic/issues/636 | [] | zesluo | 12 |
Kitware/trame | data-visualization | 167 | Patch VTK's `serializeInstance` to change `print` statements to warnings | `serializeInstance` in `Web/Python/vtkmodules/web/render_window_serializer.py` of VTK uses `print` statements to issue warnings. We should patch this in trame to use warnings so as to have better outputs than:

Further, I'm wondering if we want to add a flag to suppress these warnings? | closed | 2022-12-14T18:08:22Z | 2023-01-09T22:59:52Z | https://github.com/Kitware/trame/issues/167 | [] | banesullivan | 2 |
explosion/spaCy | machine-learning | 13,147 | The en_core_web_trf model results in zero output | ### Discussed in https://github.com/explosion/spaCy/discussions/13145
<div type='discussions-op-text'>
<sup>Originally posted by **HarounAbdelsamad** November 22, 2023</sup>
I tried training the en_core_web_trf model based on datasets i have but after training and evaluation the fscore, recall and precision are all zero. I tried using the small model works fine. I changed the code so that the transformer component is added to the pipe and also use another config file for this. Here is my code for reference:
Could anybody help me or direct me towards the issue?
[code.txt](https://github.com/explosion/spaCy/files/13442430/code.txt)
</div> | closed | 2023-11-23T08:04:50Z | 2023-12-24T00:02:25Z | https://github.com/explosion/spaCy/issues/13147 | [
"training",
"feat / transformer"
] | HarounAbdelsamad | 2 |
litestar-org/polyfactory | pydantic | 110 | Bug: ParameterError is thrown for valid ranges in conint | Continued my experiments.
This code
```python
class Test(BaseModel):
a: conint(gt=10, lt=12) # type: ignore[valid-type]
class TestFactory(ModelFactory):
__model__ = Test
result = TestFactory.build()
```
produces
```python
Traceback (most recent call last):
File "/tmp/test_proj/test.py", line 24, in <module>
result = TestFactory.build()
File "/tmp/test_proj/venv/lib/python3.10/site-packages/pydantic_factories/factory.py", line 716, in build
kwargs[field_name] = cls.get_field_value(model_field, field_parameters=kwargs.get(field_name, {}))
File "/tmp/test_proj/venv/lib/python3.10/site-packages/pydantic_factories/factory.py", line 603, in get_field_value
return cls._handle_constrained_field(model_field=model_field)
File "/tmp/test_proj/venv/lib/python3.10/site-packages/pydantic_factories/factory.py", line 263, in _handle_constrained_field
return handle_constrained_int(field=cast("ConstrainedInt", outer_type))
File "/tmp/test_proj/venv/lib/python3.10/site-packages/pydantic_factories/constraints/integer.py", line 18, in handle_constrained_int
minimum, maximum = get_constrained_number_range(
File "/tmp/test_proj/venv/lib/python3.10/site-packages/pydantic_factories/value_generators/constrained_number.py", line 59, in get_constrained_number_range
raise ParameterError("maximum value must be greater than minimum value")
pydantic_factories.exceptions.ParameterError: maximum value must be greater than minimum value
```
Instead I expected it to produce `11` which matches the constraints `10 < a < 12` for integer `a`.
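The expected behaviour — collapsing exclusive bounds into an inclusive integer range — can be sketched like this (an illustrative helper, not the library's code):

```python
def int_bounds(gt=None, ge=None, lt=None, le=None):
    """Convert pydantic-style int constraints to an inclusive (lo, hi) range."""
    lo = ge if ge is not None else (gt + 1 if gt is not None else None)
    hi = le if le is not None else (lt - 1 if lt is not None else None)
    if lo is not None and hi is not None and lo > hi:
        raise ValueError("empty integer range")
    return lo, hi

assert int_bounds(gt=10, lt=12) == (11, 11)  # only valid value: 11
assert int_bounds(ge=10, le=10) == (10, 10)  # only valid value: 10
```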
It seems that `conint(ge=10, le=10)` will throw the same error while it should not because `10` is a valid value. | closed | 2022-11-05T13:45:04Z | 2022-11-09T04:12:34Z | https://github.com/litestar-org/polyfactory/issues/110 | [] | jtraub | 1 |
errbotio/errbot | automation | 827 | Inject HTML directly to HipChat backend | From what I can see, it is not possible to insert HTML directly into a response to HipChat, but only through a Markdown template. This disallows the use of table formatting. Is there a way to do so which I'm missing, or is this not implemented? Is there any reason to not allow the use of HTML directly?
| closed | 2016-08-01T17:12:05Z | 2016-08-02T07:14:26Z | https://github.com/errbotio/errbot/issues/827 | [] | dtroberts | 1 |
babysor/MockingBird | pytorch | 612 | Synthesizer loss increases/diverges under training with GPU | **Summary [one-sentence description]**
If I use CPU to train the synthesizer, under the fine-tuning methodology, I get good results and the loss has been decreasing over time. However, when I moved the models over to an Ubuntu container, running ROCm for GPU acceleration using the AMD graphics cards, the loss actually diverges.
Has anyone else experienced this, and if so, how did you solve it?
**Env & To Reproduce [environment and reproduction]**
Describe the environment, code version, and model you used:
Ubuntu 20.04
ROCm 5.1, using RX580
Pytorch 1.11
aidatatang_200zh
**Screenshots [if any]**
If applicable, add screenshots to help explain your problem.
| open | 2022-06-11T04:51:22Z | 2022-06-13T15:29:47Z | https://github.com/babysor/MockingBird/issues/612 | [] | tcchau | 1 |
waditu/tushare | pandas | 871 | Could the CSI 500 index be added to the index list? | The current index codes are (sh = SSE Composite Index, sz = SZSE Component Index, hs300 = CSI 300 Index, sz50 = SSE 50, zxb = SME Board, cyb = ChiNext Board). Could the two codes CSI 500: 000905 and CSI 800: 000906 be added?
mitmproxy/pdoc | api | 678 | Error importing module: no signature found for builtin type `<class 'type I wrote'>` | #### Problem Description
When attempting to use `pdoc` on [my package](https://github.com/JesseTG/libretro.py), I get a stack trace similar to the following when trying to view the doc page for any module:
```
Traceback (most recent call last):
File "/home/jesse/.virtualenvs/libretro.py/lib/python3.12/site-packages/pdoc/web.py", line 82, in handle_request
out = render.html_module(
^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/home/jesse/.virtualenvs/libretro.py/lib/python3.12/site-packages/pdoc/render.py", line 106, in html_module
return env.get_template("module.html.jinja2").render(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jesse/.virtualenvs/libretro.py/lib/python3.12/site-packages/jinja2/environment.py", line 1301, in render
self.environment.handle_exception()
File "/home/jesse/.virtualenvs/libretro.py/lib/python3.12/site-packages/jinja2/environment.py", line 936, in handle_exception
raise rewrite_traceback_stack(source=source)
File "/home/jesse/.virtualenvs/libretro.py/lib/python3.12/site-packages/pdoc/templates/default/module.html.jinja2", line 311, in top-level template code
{%- if loop.nextitem -%}
File "/home/jesse/.virtualenvs/libretro.py/lib/python3.12/site-packages/pdoc/templates/default/frame.html.jinja2", line 36, in top-level template code
{% block body %}
File "/home/jesse/.virtualenvs/libretro.py/lib/python3.12/site-packages/pdoc/templates/default/frame.html.jinja2", line 42, in block 'body'
{% block content %}{% endblock %}
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jesse/.virtualenvs/libretro.py/lib/python3.12/site-packages/pdoc/templates/default/module.html.jinja2", line 101, in block 'content'
{% block module_contents %}
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jesse/.virtualenvs/libretro.py/lib/python3.12/site-packages/pdoc/templates/default/module.html.jinja2", line 108, in block 'module_contents'
{{ member(m) }}
File "/home/jesse/.virtualenvs/libretro.py/lib/python3.12/site-packages/jinja2/runtime.py", line 777, in _invoke
rv = self._func(*arguments)
^^^^^^^^^^^^^^^^^^^^^^
File "/home/jesse/.virtualenvs/libretro.py/lib/python3.12/site-packages/pdoc/templates/default/module.html.jinja2", line 198, in template
{{ function(doc) }}
File "/home/jesse/.virtualenvs/libretro.py/lib/python3.12/site-packages/jinja2/runtime.py", line 777, in _invoke
rv = self._func(*arguments)
^^^^^^^^^^^^^^^^^^^^^^
File "/home/jesse/.virtualenvs/libretro.py/lib/python3.12/site-packages/pdoc/templates/default/module.html.jinja2", line 176, in template
{{- fn.signature | format_signature(colon=True) | linkify }}
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jesse/.virtualenvs/libretro.py/lib/python3.12/site-packages/pdoc/render_helpers.py", line 358, in linkify
re.sub(
File "/usr/lib/python3.12/re/__init__.py", line 186, in sub
return _compile(pattern, flags).sub(repl, string, count)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jesse/.virtualenvs/libretro.py/lib/python3.12/site-packages/pdoc/render_helpers.py", line 343, in linkify_repl
doc is not None and context["is_public"](doc).strip()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jesse/.virtualenvs/libretro.py/lib/python3.12/site-packages/jinja2/runtime.py", line 777, in _invoke
rv = self._func(*arguments)
^^^^^^^^^^^^^^^^^^^^^^
File "/home/jesse/.virtualenvs/libretro.py/lib/python3.12/site-packages/pdoc/templates/default/module.html.jinja2", line 242, in template
{% if "@private" in doc.docstring %}
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jesse/.virtualenvs/libretro.py/lib/python3.12/site-packages/jinja2/environment.py", line 485, in getattr
return getattr(obj, attribute)
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/functools.py", line 995, in __get__
val = self.func(instance)
^^^^^^^^^^^^^^^^^^^
File "/home/jesse/.virtualenvs/libretro.py/lib/python3.12/site-packages/pdoc/doc.py", line 594, in docstring
+ str(inspect.signature(self.obj)).replace(" -> None", "")
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/inspect.py", line 3327, in signature
return Signature.from_callable(obj, follow_wrapped=follow_wrapped,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/inspect.py", line 3071, in from_callable
return _signature_from_callable(obj, sigcls=cls,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/inspect.py", line 2633, in _signature_from_callable
raise ValueError(
ValueError: no signature found for builtin type <class 'libretro.api.content.retro_system_info'>
```
#### Steps to reproduce the behavior:
1. On Windows or Linux, clone [this repository](https://github.com/JesseTG/libretro.py).
2. `cd` to the cloned repo.
3. Run `pdoc src/libretro`.
4. Go to the hosted doc site.
5. Select any module.
6. You will see the above stack trace (or something similar).
#### System Information
```
pdoc: 14.4.0
Python: 3.12.2
Platform: Linux-5.15.133.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
```
Additionally:
```
pdoc: 14.4.0
Python: 3.12.2
Platform: Windows-10-10.0.19045-SP0
```
| closed | 2024-04-10T01:51:03Z | 2024-07-10T13:31:24Z | https://github.com/mitmproxy/pdoc/issues/678 | [
"bug"
] | JesseTG | 3 |
Evil0ctal/Douyin_TikTok_Download_API | fastapi | 113 | API returns an error | https://api.douyin.wtf/api?url=https://www.tiktok.com/@/video/7167614344241499418
```json
{
    "url": "https://www.tiktok.com/@/video/7167614344241499418",
    "endpoint": "/api/",
    "total_time": 0.2007,
    "status": "failed",
    "message": "返回数据为空,无法处理!/Return data is empty and cannot be processed!"
}
```
| closed | 2022-12-02T10:54:04Z | 2022-12-02T11:59:08Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/113 | [] | jiujiude | 2 |
plotly/dash | plotly | 2,818 | [BUG] Dash Testing: `wait_for_text_to_equal` may incorrectly succeed when used with text `"None"` | **Describe your context**
- replace the result of `pip list | grep dash` below
```
dash 2.16.1
dash-core-components 2.0.0
dash-dangerously-set-inner-html 0.0.2
dash-flow-example 0.0.5
dash-html-components 2.0.0
dash-table 5.0.0
dash-testing-stub 0.0.2
```
**Describe the bug**
When `wait_for_text_to_equal` is used to wait for the text `"None"`, the function will often succeed even when you would reasonably expect it to fail.
I think this is part of the reason why the regression in #2733 wasn't caught by the tests.
This behavior is demonstrated by the following test case:
```python
import dash
from dash import html
def test_wait_for_text_to_equal_none(dash_duo):
app = dash.Dash(__name__)
app.layout = html.Div(id="my-div", children="Hello world")
dash_duo.start_server(app)
dash_duo.wait_for_text_to_equal("#my-div", "None", timeout=4)
```
**Expected behavior**
The test should fail because the contents of the `#my-div` div are never equal to `None` or `"None"`.
**Actual behavior**
The test passes.
**Explanation**
This happens because `wait_for_text_to_equal` checks not only the text content of the element, but also the value of the `value` attribute. ([see here](https://github.com/plotly/dash/blob/f7f8fb4c5893506e35cdeaec141310a95fe1486a/dash/testing/wait.py#L110C13-L113C14)).
If `value` is not defined we get a value of `None`, which is then converted to a string and therefore matches the string `"None"`.
So `dash_duo.wait_for_text_to_equal("#my-div", "None")` _always_ succeeds unless the target element has a defined `value`.
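The comparison can be sketched in plain Python (illustrative helpers, not the actual `dash.testing` internals):

```python
def text_matches(el_text, el_value, expected):
    # current behaviour: str(el_value) is compared even when the element
    # has no value attribute, so None becomes the string "None"
    return expected in (str(el_text), str(el_value))

def text_matches_fixed(el_text, el_value, expected):
    # proposed behaviour: only consider `value` when it is actually defined
    candidates = [str(el_text)]
    if el_value is not None:
        candidates.append(str(el_value))
    return expected in candidates

# div shows "Hello world" and has no `value`, yet the wait for "None" succeeds
assert text_matches("Hello world", None, "None")
assert not text_matches_fixed("Hello world", None, "None")
```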
**Proposed solutions**
IMO the cleanest solution would be to modify `wait_for_text_to_equal` to check _only_ the element's text, and add a new function `wait_for_value_to_equal` which checks the value (or a generalized `wait_for_attr_to_equal` function). This would break backwards compatibility.
Alternatively we could have `wait_for_text_to_equal` ignore `value` if value is not defined, or issue a warning when used with the text `"None"`. | closed | 2024-03-27T19:22:12Z | 2024-04-19T16:32:10Z | https://github.com/plotly/dash/issues/2818 | [] | emilykl | 1 |
sqlalchemy/sqlalchemy | sqlalchemy | 10,679 | Oracle async support | Oracle has a new branch at https://github.com/oracle/python-oracledb/tree/asyncio-support using real non-blocking in their thin client. we need to prototype against this to make sure we can work with what they are doing, and then prepare a real patch.
we can of course build off of the new `connectors/asyncio.py` and while we can start in the 2.1 branch, it should be backportable to 2.0 as we have the asyncio connector in 2.0 as well.
with this dialect we will then have asyncio support for 100% of our backends.
Issue with some more information: https://github.com/oracle/python-oracledb/issues/258 | closed | 2023-11-23T01:36:08Z | 2024-01-03T21:35:33Z | https://github.com/sqlalchemy/sqlalchemy/issues/10679 | [
"oracle",
"use case",
"asyncio"
] | zzzeek | 20 |
holoviz/panel | plotly | 7,179 | Broken API docstring format for pn.extension() | The docstring formatting here is broken:
https://panel.holoviz.org/api/panel.config.html#panel.config.panel_extension
I think this `: Example` format in many docstrings isn't processed correctly?
Should this be fixed in nbsphinx or should the docstrings be reformatted?
<img width="1319" alt="Screenshot 2024-08-23 at 21 19 16" src="https://github.com/user-attachments/assets/71b19f1a-46fd-4058-8a07-3509defed28f">
| open | 2024-08-23T19:22:57Z | 2024-08-23T19:23:58Z | https://github.com/holoviz/panel/issues/7179 | [] | cdeil | 1 |
qubvel-org/segmentation_models.pytorch | computer-vision | 155 | Not able to import deeplabv3 | I'm getting an error `AttributeError: module has no attribute 'DeepLabV3'`.
I've tried both the PyPI release and the latest source code available from GitHub. Can someone please help me?
JoeanAmier/TikTokDownloader | api | 113 | Is this the wrong way to fill it in? The downloaded work is always from the first link | 
| closed | 2023-12-26T11:17:49Z | 2023-12-26T11:46:48Z | https://github.com/JoeanAmier/TikTokDownloader/issues/113 | [] | ywj861 | 1 |
idealo/imagededup | computer-vision | 207 | HEIC support? | Hi,
Would it be hard and/or time consuming to enable HEIC format support?
Meanwhile, as a workaround, I'm doing HEIC->jpeg conversion, running the tool and then - mapping file names to original (HEIC) ones.
Thank you! | open | 2023-10-25T03:03:40Z | 2023-11-21T19:22:29Z | https://github.com/idealo/imagededup/issues/207 | [] | ink-splatters | 1 |
Miserlou/Zappa | flask | 1,770 | Broken unicode query parameters in django | https://github.com/Miserlou/Zappa/pull/1311 for https://github.com/Miserlou/Zappa/issues/1199 has broke passing unicode string to django, since django rest framework which does the url decoding does not expect an iso-8859-1 string.
## Possible Fix
https://github.com/GeoThings/Zappa/commit/cba59878d97be10a9e70257d8ce34658ca1e03e2
## Steps to Reproduce
1. Make a request with query parameters containing unicode.(`空氣盒子`) `/some_apis?filter=%E7%A9%BA%E6%B0%A3%E7%9B%92%E5%AD%90`
2. write a handler matching `/some_api`
3. log inside your handler `request.query_params.get('filter', None)` to see `空氣çå`
## Your Environment
* Zappa version used: Zappa 0.47.1 (django 1.11.16)
* Operating System and Python version: Amazon Linux: 4.14.77-70.59.amzn1.x86_64, Python 2.7.15
If possible fix is acceptable, will create a pull request. | open | 2019-01-28T07:26:01Z | 2019-01-28T07:26:37Z | https://github.com/Miserlou/Zappa/issues/1770 | [] | ambientlight | 0 |
huggingface/peft | pytorch | 2,014 | QLora with DeepSpeed support | ### System Info
peft: 0.12.1.dev0
accelerate: 0.33.0.dev0
transformers: 4.45.0.dev0
platform: ubuntu22.04 LTS
python 3.10.12
hardward: NVIDIA RTX2080TI * 4
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
https://github.com/pacman100/LLM-Workshop/tree/main/personal_copilot/training
I've been following this [article](https://huggingface.co/blog/personal-copilot) to finetune a model.
[run_peft.sh](https://github.com/pacman100/LLM-Workshop/blob/main/personal_copilot/training/run_peft.sh) works on my machine but only uses a single GPU, so I want to use accelerate + DeepSpeed to split the model across multiple GPUs and train a larger model.
DeepSpeed with no quantization also works on my machine. But as soon as I enable quantization, it raises an error:
ValueError: Model was not initialized with `Zero-3` despite being configured for DeepSpeed Zero-3. Please re-initialize your model via `Model.from_pretrained(...)` or `Model.from_config(...)` after creating your `TrainingArguments`!
So my question is: does QLoRA support DeepSpeed now, and if so, what is the correct way to run it?
### Expected behavior
Expect QLoRA + DeepSpeed to run on multiple GPUs without error. | closed | 2024-08-18T01:06:30Z | 2024-08-19T10:44:10Z | https://github.com/huggingface/peft/issues/2014 | [] | ysj1173886760 | 5 |
docarray/docarray | pydantic | 1,726 | feat: implement "update" for Milvus | Milvus does not directly support the functionality to update existing data.
The workaround is to delete+index data that you want to update. | open | 2023-07-24T11:28:29Z | 2023-07-24T12:29:22Z | https://github.com/docarray/docarray/issues/1726 | [
"good-first-issue",
"area/document-index"
] | jupyterjazz | 1 |
electricitymaps/electricitymaps-contrib | data-visualization | 7,320 | "Installed capacity" label is shown for aggregated data | **Describe the bug**
"Installed capacity" label should not be shown for aggregated data
**To Reproduce**
Steps to reproduce the behavior:
1. Pick any zone
2. Select Yearly view
**Expected behavior**
"Installed capacity" label should not be shown for aggregated data.
**Screenshots**
<img width="446" alt="image" src="https://github.com/user-attachments/assets/b0ee6e71-7205-423b-9c6a-1b2e6b85c735">
| closed | 2024-10-14T16:22:48Z | 2024-11-22T14:36:36Z | https://github.com/electricitymaps/electricitymaps-contrib/issues/7320 | [
"bug 🐞",
"help wanted",
"frontend 🎨",
"good first issue"
] | corradio | 4 |
home-assistant/core | python | 140,754 | Implement Image Generation with Gemini | ### The problem
Google has launched free image generation via the Gemini API. https://developers.googleblog.com/en/experiment-with-gemini-20-flash-native-image-generation/
I'd like to implement this in the existing `google_generative_ai_conversation.generate_content` action.
However, unlike OpenAI, the API returns image data as inline bytes only, without any fetchable URL. Specifically, the `parts` array will include both text parts and `inline_data` parts, which contain `mime_type` and `data` as raw `bytes`.
How should I implement this?
Options include:
- Add a new parameter to `generate_content` to specify a folder to save all `inline_data` response parts to. However, there are no filenames.
- Add a new `generate_image` action with the same parameters as `generate_content`, but also accepting a filename to save the image as. However, this would make it impossible to generate multiple images in a single call (which is fully supported by the API)
- Store returned images in memory and add a new action to save them as a followup
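For reference, the first option could be sketched roughly like this in plain Python. The part structure mirrors the `inline_data`/`mime_type`/`data` fields described above, and the filename scheme is my own assumption since the API provides no filenames:

```python
import mimetypes
import uuid
from pathlib import Path

def save_inline_parts(parts, folder):
    """Write every inline_data part to `folder` and return the saved paths."""
    folder = Path(folder)
    folder.mkdir(parents=True, exist_ok=True)
    saved = []
    for part in parts:
        inline = getattr(part, "inline_data", None)
        if inline is None:
            continue  # text part: nothing to save
        # No filename comes back from the API, so derive one from the MIME type
        ext = mimetypes.guess_extension(inline.mime_type) or ".bin"
        path = folder / f"{uuid.uuid4().hex}{ext}"
        path.write_bytes(inline.data)
        saved.append(path)
    return saved
```

This saves every image from a multi-image response, which the single-filename `generate_image` variant could not do.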
### What version of Home Assistant Core has the issue?
core-2025.3.3
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
google_generative_ai_conversation
### Link to integration documentation on our website
_No response_
### Diagnostics information
_No response_
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
_No response_ | closed | 2025-03-16T19:59:42Z | 2025-03-24T05:30:03Z | https://github.com/home-assistant/core/issues/140754 | [
"integration: google_generative_ai_conversation"
] | SLaks | 3 |
opengeos/leafmap | streamlit | 751 | leafmap on deepnote causes exception: "field() got an unexpected keyword argument 'alias'" | <!-- Please search existing issues to avoid creating duplicates. -->
### Environment Information
- [deepnote notebook ](https://deepnote.com/app/uclab_potsdam/leafmap-test-b970f216-68cd-4a93-932c-c61747b4a580)
- Python 3.9
### Description
I am trying to work with leafmap in Jupyter Notebooks on Deepnote.com and I am having trouble getting leafmap to be properly imported.
### What I Did
```
!pip install leafmap
import leafmap
```
Without getting to any mapping fun, the output is the following:
`Exception: field() got an unexpected keyword argument 'alias'`
Any pointers or suggestions are appreciated. Thank you! | closed | 2024-06-10T20:54:37Z | 2024-06-16T04:06:29Z | https://github.com/opengeos/leafmap/issues/751 | [
"bug"
] | nrchtct | 3 |
donnemartin/data-science-ipython-notebooks | numpy | 2 | Add instructions to configure IPython/PySpark for python 3, now supported with Spark 1.4 | Reported by [core_dumpd](http://www.reddit.com/user/core_dumpd) on [Reddit /r/DataScience](http://www.reddit.com/r/datascience/comments/3ar1bd/continually_updated_data_science_python_notebooks/).
Solution seems to be discussed in Stack Overflow [here](http://stackoverflow.com/questions/30279783/apache-spark-how-to-use-pyspark-with-python-3).
core_dumpd reports the following works, need to confirm and update repo:
I end up running this:
`PYSPARK_DRIVER_PYTHON_OPTS="notebook --profile=pyspark" /usr/local/spark/bin/pyspark`
With:
`PYSPARK_PYTHON=/opt/anaconda/bin/ipython PYSPARK_DRIVER_PYTHON=/opt/anaconda/bin/ipython`
I'm running on docker based on sequenceiq/hadoop-docker:latest with Spark/MiniConda added on top. The only real config options in the profile are for the ip = '*' and open_browser = False.
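Putting the reported pieces together, the full launch would presumably look something like the following (the paths are the reporter's own and are unverified here):

```shell
# Point both the worker and driver Python at the Anaconda install
export PYSPARK_PYTHON=/opt/anaconda/bin/ipython
export PYSPARK_DRIVER_PYTHON=/opt/anaconda/bin/ipython

# Launch pyspark with the notebook profile as the driver frontend
PYSPARK_DRIVER_PYTHON_OPTS="notebook --profile=pyspark" /usr/local/spark/bin/pyspark
```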
| closed | 2015-06-24T01:54:29Z | 2015-07-04T13:02:51Z | https://github.com/donnemartin/data-science-ipython-notebooks/issues/2 | [
"enhancement"
] | donnemartin | 1 |
bendichter/brokenaxes | matplotlib | 87 | Strange Diagonal Lines in plot | Hello people, I'm trying to produce a double broken axes figure. I did research online but nothing seems to solve this problem, unless I made some unidentified dumb error. The problem consists of a "buggy" diagonal line in the plot.

```python
import matplotlib.pylab as plt
import numpy as np
from brokenaxes import brokenaxes
# DS1
t = np.array([0, 10, 20, 30, 40, 50, 60, 70, 220, 230, 240])
a2 = np.array([28, 61, 65, 67, 77, 78, 81, 80, 87, 87, 88])
fig = plt.figure(figsize=(8, 4))
baxes = brokenaxes(xlims=((-5,80), (200,250)), ylims=((0,5), (24,92)), hspace=.2)
baxes.plot(t, a2, 'r',label='DS1')
baxes.legend(loc=4)
baxes.set_xlabel('time (s)')
baxes.set_ylabel('Data plot')
plt.show()
```
Thank you. | closed | 2022-11-08T17:20:42Z | 2022-11-08T20:59:18Z | https://github.com/bendichter/brokenaxes/issues/87 | [] | murilosc | 6 |
NVIDIA/pix2pixHD | computer-vision | 327 | Regarding the inclusion of classification criteria during training. | I would like to ask whether it is possible to include classification criteria during training. For example, I am training a model for generating house layouts.

Given the image above, I want to generate layouts based on the input boundaries and on the categories that need to be generated within those boundaries. How can I achieve this?
| open | 2023-10-29T12:27:33Z | 2023-10-29T12:31:09Z | https://github.com/NVIDIA/pix2pixHD/issues/327 | [] | masonghao1 | 0 |
robinhood/faust | asyncio | 656 | Last message in kafka is not getting processed with faust take() method | There is already a ticket about a similar issue where the offset lag is always 1 even after processing the last record, but this is a different issue: here the last message is not getting processed at all.
I'm using Faust `1.10.4`.
## Steps to reproduce
Add some 10 messages with one partition in Kafka and try reading with the code below:

```python
import asyncio
import json

@app.agent(input_topic, concurrency=1)
async def my_task(tasks):
    async for my_task in tasks.take(record_per_partition, within=poll_interval):
        assert len(my_task) > 0
        asyncio.gather(*(process_payload(json.loads(args.decode('utf-8'))) for args in my_task))
```
The last message is not getting processed with Faust's take() method; it happens only when I use take() (it does not happen with stream.events() or any other method).
## Expected behavior
It should process all the records available in Kafka.
## Versions
* Python version : 3.6.9
* Faust version 1.10.4
* Operating system: Linux
| open | 2020-09-23T15:23:59Z | 2020-09-24T05:44:16Z | https://github.com/robinhood/faust/issues/656 | [] | sivasai-quartic | 1 |
pytorch/pytorch | numpy | 149,290 | as_subclass doesn't work under TorchDispatchMode | ### 🐛 Describe the bug
We have a torch.Tensor subclass which shares autograd history with a passed-in `data` torch.Tensor using the `as_subclass` method. This works well except in the case where we use `TorchDispatchMode`:
```
import torch
from torch.utils._python_dispatch import TorchDispatchMode
class Foo(TorchDispatchMode):
def __torch_dispatch__(self, func, types, args, kwargs=None):
return func(*args, **(kwargs or {}))
class MyTensor(torch.Tensor):
def __new__(cls, data: torch.Tensor):
return data.as_subclass(cls)
t1 = torch.rand(10, requires_grad=True)
t2 = t1 + t1
m1 = MyTensor(t2)
with Foo():
m2 = MyTensor(t2)
```
This fails, with the following error:
```
Traceback (most recent call last):
File "test.py", line 18, in <module>
m2 = MyTensor(t2)
^^^^^^^^^^^^
File "test.py", line 11, in __new__
return data.as_subclass(cls)
^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Creating a new Tensor subclass MyTensor but the raw Tensor object is already associated to a python object of type Tensor
```
We can't use `make_subclass` or `make_wrapper_subclass` since those lose the autograd history of the passed-in Tensor. Is there any way to achieve what we're looking for?
### Versions
2.4
cc @Chillee @ezyang @zou3519 @albanD @samdow | open | 2025-03-17T03:39:42Z | 2025-03-17T15:33:27Z | https://github.com/pytorch/pytorch/issues/149290 | [
"triaged",
"module: __torch_dispatch__"
] | pritamdamania87 | 0 |
pyro-ppl/numpyro | numpy | 1812 | How can I gibbs before HMC/NUTS? | I am currently working with `HMCGibbs`. I found that it always samples the `model` part several times for `NUTS` or `HMC` and only then runs into `gibbs_fn`. However, my program needs to apply `gibbs_fn` first, skipping the distribution definitions related to the `gibbs_sites` while having the `hmc_sites` variables already initialized.
Is this possible? It seems that HMCGibbs does not support such an ordering.
[https://github.com/pyro-ppl/numpyro/blob/401e364c323aed35ca3235b5c92971b7449dab85/numpyro/infer/hmc_gibbs.py#L166-L170](https://github.com/pyro-ppl/numpyro/blob/401e364c323aed35ca3235b5c92971b7449dab85/numpyro/infer/hmc_gibbs.py#L166-L170)
A minimal example could be like this:
```python
from jax import random
import jax.numpy as jnp
import numpyro
import numpyro.distributions as dist
from numpyro.infer import MCMC, NUTS, HMCGibbs
def model():
x = numpyro.sample("x", dist.Normal(0.0, 2.0))
y = numpyro.sample("y", dist.Normal(0.0, 2.0))
numpyro.sample("obs", dist.Normal(x + y, 1.0), obs=jnp.array([1.0]))
def gibbs_fn(rng_key, gibbs_sites, hmc_sites): # NEED run first
y = hmc_sites['y'] # NEED: initialized first not sample from model
x = gibbs_sites['x']
new_x = dist.Normal(0.8 * (1-y), jnp.sqrt(0.8)).sample(rng_key)
return {'x': x+new_x}
``` | closed | 2024-06-10T10:59:39Z | 2024-06-15T01:44:39Z | https://github.com/pyro-ppl/numpyro/issues/1812 | [
"enhancement"
] | disadone | 8 |
feder-cr/Jobs_Applier_AI_Agent_AIHawk | automation | 91 | New to Python | Hi there,
I am just looking to find a job and streamline the application process. I have never used Python before and am trying to figure out how this all works; what packages to get, what to paste where, and how to make sure everything is running properly. I know everything is listed out in the READ ME section, but there's a lot of lingo I don't know as I have never done this before.
I believe a video walkthrough where we can watch you set it up would be really helpful to follow along on setting it up for ourselves, I have seen some other people in this thread running into issues, so hopefully a video walkthrough would reduce the amount of issues and any confusion. Let me know if this is possible!
| closed | 2024-08-27T19:22:27Z | 2024-09-02T08:16:04Z | https://github.com/feder-cr/Jobs_Applier_AI_Agent_AIHawk/issues/91 | [] | pacman20011 | 8 |
ultrafunkamsterdam/undetected-chromedriver | automation | 1886 | [NoDriver] - get_position(), save_screenshot() may be incorrect | `get_position` returns an incorrect value when the **top of the page** is not displayed, since `get_position` returns the `y` value relative to the **top of the displayed view**.
If get_position is not correct,
- then `save_screenshot` will be incorrect
- and other functions that depend on `get_position` | open | 2024-05-15T21:35:47Z | 2024-05-15T21:36:00Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1886 | [] | gnori-zon | 0 |
strawberry-graphql/strawberry-django | graphql | 284 | Way to split up query across django apps | Rather a question than an issue:
Is there a way to split up the Query object across different django apps and create the central Query by inheriting from the app-specific ones in a similar fashion as it is possible in graphene-django? Something like:
```python
# schema.py
import strawberry
from fruits.types import Query as FruitQueries
from vegetables.types import Query as VegetableQueries
@strawberry.type
class Query(FruitQueries, VegetableQueries):
"""All available queries for this schema."""
...
# will include 'fruits' from FruitsQueries and 'vegetables' from VegetableQueries
schema = strawberry.Schema(
query=Query,
)
```
with
```python
# fruits.types.py
import strawberry
import strawberry_django
from . import models
@strawberry_django.type(models.Fruit)
class Fruit:
name: str
color: str
@strawberry.type
class Query:
fruits: list[Fruit] = strawberry.django.field()
```
and
```python
# vegetables.types.py
import strawberry
import strawberry_django
from . import models
@strawberry_django.type(models.Vegetable)
class Vegetable:
name: str
color: str
@strawberry.type
class Query:
vegetables: list[Vegetable] = strawberry.django.field()
``` | closed | 2023-07-07T12:15:48Z | 2025-03-20T15:57:12Z | https://github.com/strawberry-graphql/strawberry-django/issues/284 | [
"question"
] | TWeidi | 4 |
pydata/pandas-datareader | pandas | 594 | new stooq datareader no longer downloads indices | The stooq 0.7.0 datareader downloads individual symbols provided as either "AAPL" or "AAPL.US". But it will no longer download indices (e.g. "^SPX"), and returns an empty dataframe. Apparently, stooq.py was rewritten to automatically append ".US" in the absence of any other country indicator. But indices do not take a country indicator (it's "^SPX", not "^SPX.US"). For now, I have simply replaced the new stooq.py with the version 0.6.0 one. But a fix would be welcome.
| closed | 2018-11-02T00:45:41Z | 2019-09-18T08:10:32Z | https://github.com/pydata/pandas-datareader/issues/594 | [] | EcoFin | 5 |
amidaware/tacticalrmm | django | 876 | Initial screen after install support x32 and x64 mesh agents | Have 1st screen in rmm admin gui on new server install support x32 and x64 mesh agent uploads. | closed | 2021-12-19T18:34:34Z | 2022-02-03T01:14:34Z | https://github.com/amidaware/tacticalrmm/issues/876 | [
"enhancement"
] | silversword411 | 3 |
jupyter/nbgrader | jupyter | 1,711 | Fetched assignments can be seen by other users if home directory has `x` permission | Archlinux, nbgrader 0.8.1, jupyterhub 3.0.0, jupyter-notebook 6.5.2
On a server with many users I have a umask of 077 to protect users' directories from potential listing and reading of their files. The reason is that users can use `public_html` which requires `home` directory to be searchable (`x` permission) and `public_html` to be readable.
I noticed that the generated assignments get read permission as default as well as the fetched ones. This way the fetched directories are readable by other users if the user has made their `home` searchable. For example ` cd /home/user1/assignment; ls` issued by another `user2` of the same class is then successful. Even `user2` cannot list `home/user1`, they know that the student probably downloaded the assignment and can copy it.
I have the hunch that nbgrader does not respect `umask` due to the patch #688.
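For reference, the behavior I would expect from a umask-respecting write is the standard POSIX one, sketched here in plain Python independent of nbgrader:

```python
import os
import stat
import tempfile

# With a umask of 077, files created via plain open() get mode 0600:
# no group/other bits, which is what I'd expect fetch to produce as well.
old_umask = os.umask(0o077)
try:
    workdir = tempfile.mkdtemp()
    path = os.path.join(workdir, "assignment.ipynb")
    with open(path, "w") as f:
        f.write("{}")
    mode = stat.S_IMODE(os.stat(path).st_mode)
    print(oct(mode))  # 0o600
finally:
    os.umask(old_umask)
```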
### Expected behavior
I would expect that the fetched assignments are not readable.
### Actual behavior
Fetched assignment directory is searchable and the files are readable.
### Steps to reproduce the behavior
(Set umask to 077), generate assignment, fetch assignment, `ls -al assignment`
---
If someone can confirm this or has an alternative solution, I would be glad to hear that. | open | 2022-12-13T21:36:21Z | 2024-03-21T12:44:04Z | https://github.com/jupyter/nbgrader/issues/1711 | [
"bug"
] | goekce | 1 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 1123 | Any sound I record provides the same result | Hi, thanks for the tool,
For some reason, any sound I record produces the same result (it sounds as if Soundwave the Decepticon is talking to me), no matter how many times I run the program or how the samples are grouped.
Any idea how to solve it? | closed | 2022-10-02T08:12:31Z | 2023-01-08T08:55:13Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1123 | [] | ezrabest | 0 |
WZMIAOMIAO/deep-learning-for-image-processing | pytorch | 754 | Why is the UNet training and testing code so complex? | Compared with classification tasks, the UNet training, evaluation, and testing code looks very complex, with many code files written under utils. Is this necessary, or can the code be simplified?
Also, why is the loss for the segmentation task computed as Dice Loss plus cross-entropy loss? Can Dice Loss be used alone? | closed | 2023-10-16T08:49:34Z | 2023-11-20T14:49:49Z | https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/754 | [] | ghost | 1 |
gunthercox/ChatterBot | machine-learning | 1851 | CHATTERBOT INSTALLATION ERROR | I am trying to pip install chatterbot, but I always get this error:
C:\Users\User\TRIAL\Chatbot>pip install chatterbot
Collecting chatterbot
Using cached https://files.pythonhosted.org/packages/6c/0e/dac0d82f34f86bf509cf5ef3e2dfc5aa7d444bd843a2330ceb7d854f84f2/ChatterBot-1.0.5-py2.py3-none-any.whl
Collecting nltk<4.0,>=3.2
Using cached https://files.pythonhosted.org/packages/f6/1d/d925cfb4f324ede997f6d47bea4d9babba51b49e87a767c170b77005889d/nltk-3.4.5.zip
Collecting mathparse<0.2,>=0.1
Using cached https://files.pythonhosted.org/packages/c3/e5/4910fb85950cb960fcf3f5aabe1c8e55f5c9201788a1c1302b570a7e1f84/mathparse-0.1.2-py3-none-any.whl
Collecting spacy<2.2,>=2.1
Using cached https://files.pythonhosted.org/packages/1f/e2/46650d03c7ff2b57ed7af211d41c3f606540f7adea92b5af65fcf9f605c0/spacy-2.1.9.tar.gz
Installing build dependencies ... error
ERROR: Command errored out with exit status 1:
command: 'c:\users\user\appdata\local\programs\python\python38-32\python.exe' 'c:\users\user\appdata\local\programs\python\python38-32\lib\site-packages\pip' install --ignore-installed --no-user --prefix 'C:\Users\User\AppData\Local\Temp\pip-build-env-_m534apv\overlay' --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- setuptools 'wheel>0.32.0,<0.33.0' Cython 'cymem>=2.0.2,<2.1.0' 'preshed>=2.0.1,<2.1.0' 'murmurhash>=0.28.0,<1.1.0' 'thinc>=7.0.8,<7.1.0'
cwd: None
Complete output (199 lines):
Collecting setuptools
Using cached https://files.pythonhosted.org/packages/d9/de/554b6310ac87c5b921bc45634b07b11394fe63bc4cb5176f5240addf18ab/setuptools-41.6.0-py2.py3-none-any.whl
Collecting wheel<0.33.0,>0.32.0
Using cached https://files.pythonhosted.org/packages/ff/47/1dfa4795e24fd6f93d5d58602dd716c3f101cfd5a77cd9acbe519b44a0a9/wheel-0.32.3-py2.py3-none-any.whl
Collecting Cython
Using cached https://files.pythonhosted.org/packages/22/03/510503cfbf20f62810a9548c9be13ab86181f00cca9a3a56717c4595d952/Cython-0.29.14-cp38-cp38-win32.whl
Collecting cymem<2.1.0,>=2.0.2
Using cached https://files.pythonhosted.org/packages/8b/dc/0976e04cc46f86e0dd3ee3797ec68057eaafebf31daca9a076dc138b9920/cymem-2.0.2.tar.gz
Collecting preshed<2.1.0,>=2.0.1
Using cached https://files.pythonhosted.org/packages/0b/14/c9aa735cb9c131545fc9e23031baccb87041ac9215b3d75f99e3cf18f6a3/preshed-2.0.1.tar.gz
Collecting murmurhash<1.1.0,>=0.28.0
Using cached https://files.pythonhosted.org/packages/22/e9/411be1845f1ac07ae3bc40a4b19ba401819baed4fa63b4f5ef28b2300eb4/murmurhash-1.0.2.tar.gz
Collecting thinc<7.1.0,>=7.0.8
Using cached https://files.pythonhosted.org/packages/92/39/ea2a3d5b87fd52fc865fd1ceb7b91dca1f85e227d53e7a086d260f6bcb93/thinc-7.0.8.tar.gz
ERROR: Command errored out with exit status 1:
command: 'c:\users\user\appdata\local\programs\python\python38-32\python.exe' -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\User\\AppData\\Local\\Temp\\pip-install-hkpbpz6t\\thinc\\setup.py'"'"'; __file__='"'"'C:\\Users\\User\\AppData\\Local\\Temp\\pip-install-hkpbpz6t\\thinc\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base 'C:\Users\User\AppData\Local\Temp\pip-install-hkpbpz6t\thinc\pip-egg-info'
cwd: C:\Users\User\AppData\Local\Temp\pip-install-hkpbpz6t\thinc\
Complete output (179 lines):
Could not locate executable g77
Could not locate executable f77
Could not locate executable ifort
Could not locate executable ifl
Could not locate executable f90
Could not locate executable DF
Could not locate executable efl
Could not locate executable gfortran
Could not locate executable f95
Could not locate executable g95
Could not locate executable efort
Could not locate executable efc
Could not locate executable flang
don't know how to compile Fortran code on platform 'nt'
'svnversion' is not recognized as an internal or external command,
operable program or batch file.
non-existing path in 'numpy\\distutils': 'site.cfg'
Running from numpy source directory.
C:\Users\User\AppData\Local\Temp\easy_install-86j8a63z\numpy-1.17.3\setup.py:418: UserWarning: Unrecognized setuptools command, proceeding with generating Cython sources and expanding templates
run_build = parse_setuppy_commands()
C:\Users\User\AppData\Local\Temp\easy_install-86j8a63z\numpy-1.17.3\numpy\distutils\system_info.py:690: UserWarning:
Optimized (vendor) Blas libraries are not found.
Falls back to netlib Blas library which has worse performance.
A better performance should be easily gained by switching
Blas library.
self.calc_info()
C:\Users\User\AppData\Local\Temp\easy_install-86j8a63z\numpy-1.17.3\numpy\distutils\system_info.py:690: UserWarning:
Blas (http://www.netlib.org/blas/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [blas]) or by setting
the BLAS environment variable.
self.calc_info()
C:\Users\User\AppData\Local\Temp\easy_install-86j8a63z\numpy-1.17.3\numpy\distutils\system_info.py:690: UserWarning:
Blas (http://www.netlib.org/blas/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [blas_src]) or by setting
the BLAS_SRC environment variable.
self.calc_info()
C:\Users\User\AppData\Local\Temp\easy_install-86j8a63z\numpy-1.17.3\numpy\distutils\system_info.py:1712: UserWarning:
Lapack (http://www.netlib.org/lapack/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [lapack]) or by setting
the LAPACK environment variable.
if getattr(self, '_calc_info_{}'.format(lapack))():
C:\Users\User\AppData\Local\Temp\easy_install-86j8a63z\numpy-1.17.3\numpy\distutils\system_info.py:1712: UserWarning:
Lapack (http://www.netlib.org/lapack/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [lapack_src]) or by setting
the LAPACK_SRC environment variable.
if getattr(self, '_calc_info_{}'.format(lapack))():
c:\users\user\appdata\local\programs\python\python38-32\lib\distutils\dist.py:274: UserWarning: Unknown distribution option: 'define_macros'
warnings.warn(msg)
Traceback (most recent call last):
File "c:\users\user\appdata\local\programs\python\python38-32\lib\distutils\core.py", line 148, in setup
dist.run_commands()
File "c:\users\user\appdata\local\programs\python\python38-32\lib\distutils\dist.py", line 966, in run_commands
self.run_command(cmd)
File "c:\users\user\appdata\local\programs\python\python38-32\lib\distutils\dist.py", line 985, in run_command
cmd_obj.run()
File "c:\users\user\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\command\bdist_egg.py", line 163, in run
self.run_command("egg_info")
File "c:\users\user\appdata\local\programs\python\python38-32\lib\distutils\cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "c:\users\user\appdata\local\programs\python\python38-32\lib\distutils\dist.py", line 985, in run_command
cmd_obj.run()
File "C:\Users\User\AppData\Local\Temp\easy_install-86j8a63z\numpy-1.17.3\numpy\distutils\command\egg_info.py", line 26, in run
File "c:\users\user\appdata\local\programs\python\python38-32\lib\distutils\cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "c:\users\user\appdata\local\programs\python\python38-32\lib\distutils\dist.py", line 985, in run_command
cmd_obj.run()
File "C:\Users\User\AppData\Local\Temp\easy_install-86j8a63z\numpy-1.17.3\numpy\distutils\command\build_src.py", line 142, in run
File "C:\Users\User\AppData\Local\Temp\easy_install-86j8a63z\numpy-1.17.3\numpy\distutils\command\build_src.py", line 153, in build_sources
File "C:\Users\User\AppData\Local\Temp\easy_install-86j8a63z\numpy-1.17.3\numpy\distutils\command\build_src.py", line 286, in build_library_sources
File "C:\Users\User\AppData\Local\Temp\easy_install-86j8a63z\numpy-1.17.3\numpy\distutils\command\build_src.py", line 369, in generate_sources
File "numpy\core\setup.py", line 667, in get_mathlib_info
File "c:\users\user\appdata\local\programs\python\python38-32\lib\distutils\command\config.py", line 241, in try_link
self._check_compiler()
File "C:\Users\User\AppData\Local\Temp\easy_install-86j8a63z\numpy-1.17.3\numpy\distutils\command\config.py", line 54, in _check_compiler
File "c:\users\user\appdata\local\programs\python\python38-32\lib\distutils\_msvccompiler.py", line 253, in initialize
vc_env = _get_vc_env(plat_spec)
File "c:\users\user\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\msvc.py", line 171, in msvc14_get_vc_env
return EnvironmentInfo(plat_spec, vc_min_ver=14.0).return_env()
File "c:\users\user\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\msvc.py", line 1075, in __init__
self.si = SystemInfo(self.ri, vc_ver)
File "c:\users\user\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\msvc.py", line 547, in __init__
vc_ver or self._find_latest_available_vs_ver())
File "c:\users\user\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\msvc.py", line 561, in _find_latest_available_vs_ver
raise distutils.errors.DistutilsPlatformError(
distutils.errors.DistutilsPlatformError: Microsoft Visual C++ 14.0 is required. Get it with "Build Tools for Visual Studio": https://visualstudio.microsoft.com/downloads/
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\users\user\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\sandbox.py", line 154, in save_modules
yield saved
File "c:\users\user\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\sandbox.py", line 195, in setup_context
yield
File "c:\users\user\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\sandbox.py", line 250, in run_setup
_execfile(setup_script, ns)
File "c:\users\user\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\sandbox.py", line 45, in _execfile
exec(code, globals, locals)
File "C:\Users\User\AppData\Local\Temp\easy_install-86j8a63z\numpy-1.17.3\setup.py", line 443, in <module>
File "C:\Users\User\AppData\Local\Temp\easy_install-86j8a63z\numpy-1.17.3\setup.py", line 435, in setup_package
File "C:\Users\User\AppData\Local\Temp\easy_install-86j8a63z\numpy-1.17.3\numpy\distutils\core.py", line 171, in setup
File "c:\users\user\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\__init__.py", line 145, in setup
return distutils.core.setup(**attrs)
File "c:\users\user\appdata\local\programs\python\python38-32\lib\distutils\core.py", line 163, in setup
raise SystemExit("error: " + str(msg))
SystemExit: error: Microsoft Visual C++ 14.0 is required. Get it with "Build Tools for Visual Studio": https://visualstudio.microsoft.com/downloads/
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\users\user\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\command\easy_install.py", line 1144, in run_setup
run_setup(setup_script, args)
File "c:\users\user\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\sandbox.py", line 253, in run_setup
raise
File "c:\users\user\appdata\local\programs\python\python38-32\lib\contextlib.py", line 131, in __exit__
self.gen.throw(type, value, traceback)
File "c:\users\user\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\sandbox.py", line 195, in setup_context
yield
File "c:\users\user\appdata\local\programs\python\python38-32\lib\contextlib.py", line 131, in __exit__
self.gen.throw(type, value, traceback)
File "c:\users\user\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\sandbox.py", line 166, in save_modules
saved_exc.resume()
File "c:\users\user\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\sandbox.py", line 141, in resume
six.reraise(type, exc, self._tb)
File "c:\users\user\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\_vendor\six.py", line 685, in reraise
raise value.with_traceback(tb)
File "c:\users\user\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\sandbox.py", line 154, in save_modules
yield saved
File "c:\users\user\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\sandbox.py", line 195, in setup_context
yield
File "c:\users\user\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\sandbox.py", line 250, in run_setup
_execfile(setup_script, ns)
File "c:\users\user\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\sandbox.py", line 45, in _execfile
exec(code, globals, locals)
File "C:\Users\User\AppData\Local\Temp\easy_install-86j8a63z\numpy-1.17.3\setup.py", line 443, in <module>
File "C:\Users\User\AppData\Local\Temp\easy_install-86j8a63z\numpy-1.17.3\setup.py", line 435, in setup_package
File "C:\Users\User\AppData\Local\Temp\easy_install-86j8a63z\numpy-1.17.3\numpy\distutils\core.py", line 171, in setup
File "c:\users\user\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\__init__.py", line 145, in setup
return distutils.core.setup(**attrs)
File "c:\users\user\appdata\local\programs\python\python38-32\lib\distutils\core.py", line 163, in setup
raise SystemExit("error: " + str(msg))
SystemExit: error: Microsoft Visual C++ 14.0 is required. Get it with "Build Tools for Visual Studio": https://visualstudio.microsoft.com/downloads/
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\User\AppData\Local\Temp\pip-install-hkpbpz6t\thinc\setup.py", line 261, in <module>
setup_package()
File "C:\Users\User\AppData\Local\Temp\pip-install-hkpbpz6t\thinc\setup.py", line 201, in setup_package
setup(
File "c:\users\user\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\__init__.py", line 144, in setup
_install_setup_requires(attrs)
File "c:\users\user\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\__init__.py", line 139, in _install_setup_requires
dist.fetch_build_eggs(dist.setup_requires)
File "c:\users\user\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\dist.py", line 717, in fetch_build_eggs
resolved_dists = pkg_resources.working_set.resolve(
File "c:\users\user\appdata\local\programs\python\python38-32\lib\site-packages\pkg_resources\__init__.py", line 780, in resolve
dist = best[req.key] = env.best_match(
File "c:\users\user\appdata\local\programs\python\python38-32\lib\site-packages\pkg_resources\__init__.py", line 1065, in best_match
return self.obtain(req, installer)
File "c:\users\user\appdata\local\programs\python\python38-32\lib\site-packages\pkg_resources\__init__.py", line 1077, in obtain
return installer(requirement)
File "c:\users\user\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\dist.py", line 787, in fetch_build_egg
return cmd.easy_install(req)
File "c:\users\user\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\command\easy_install.py", line 679, in easy_install
return self.install_item(spec, dist.location, tmpdir, deps)
File "c:\users\user\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\command\easy_install.py", line 705, in install_item
dists = self.install_eggs(spec, download, tmpdir)
File "c:\users\user\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\command\easy_install.py", line 890, in install_eggs
return self.build_and_install(setup_script, setup_base)
File "c:\users\user\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\command\easy_install.py", line 1158, in build_and_install
self.run_setup(setup_script, setup_base, args)
File "c:\users\user\appdata\local\programs\python\python38-32\lib\site-packages\setuptools\command\easy_install.py", line 1146, in run_setup
raise DistutilsError("Setup script exited with %s" % (v.args[0],))
distutils.errors.DistutilsError: Setup script exited with error: Microsoft Visual C++ 14.0 is required. Get it with "Build Tools for Visual Studio": https://visualstudio.microsoft.com/downloads/
----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
----------------------------------------
ERROR: Command errored out with exit status 1: 'c:\users\user\appdata\local\programs\python\python38-32\python.exe' 'c:\users\user\appdata\local\programs\python\python38-32\lib\site-packages\pip' install --ignore-installed --no-user --prefix 'C:\Users\User\AppData\Local\Temp\pip-build-env-_m534apv\overlay' --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- setuptools 'wheel>0.32.0,<0.33.0' Cython 'cymem>=2.0.2,<2.1.0' 'preshed>=2.0.1,<2.1.0' 'murmurhash>=0.28.0,<1.1.0' 'thinc>=7.0.8,<7.1.0' Check the logs for full command output.
Please help me correct this issue. | closed | 2019-11-02T20:13:38Z | 2021-12-23T21:40:09Z | https://github.com/gunthercox/ChatterBot/issues/1851 | [] | Sowren25 | 3 |
inducer/pudb | pytest | 115 | has trouble tracing importlib._bootstrap in python3.3 | I'm doing an experiment with sys.path_hooks, and pudb is having trouble tracing through it.
pudb seems to fail to find the source for `/usr/lib/python3.3/importlib/_bootstrap.py`, which results in the variables windowlet being smashed to the left.
To reproduce:
``` sh
$ PYTHONPATH=dummypath python3.3 foo.py
```
`foo.py`:
``` python
from __future__ import print_function
class NoPathHook(object):
def __init__(self, syspath):
if syspath.endswith('/dummypath'):
pass
else:
raise ImportError
@staticmethod
def find_module(module):
if module == 'example_thingy_doesnt_exist':
import pudb.b
def register():
import sys
sys.path_hooks.insert(0, NoPathHook)
sys.path_importer_cache.clear()
def main():
register()
try:
import example_thingy_doesnt_exist
except ImportError as error:
if error.name == 'example_thingy_doesnt_exist':
pass
else:
raise
print('DONE')
if __name__ == '__main__':
exit(main())
```
| open | 2014-04-21T17:51:57Z | 2017-04-14T13:35:18Z | https://github.com/inducer/pudb/issues/115 | [] | bukzor | 9 |
quokkaproject/quokka | flask | 599 | handle extensions for static files and generate them based on rules | Static file extension handling
https://github.com/rochacbruno/quokka_ng/issues/73 | open | 2018-02-07T01:42:55Z | 2018-02-07T01:42:55Z | https://github.com/quokkaproject/quokka/issues/599 | [
"1.0.0",
"hacktoberfest"
] | rochacbruno | 0 |
recommenders-team/recommenders | data-science | 1,364 | [BUG] integration tests must not be executed if smoke tests fail | ### Description
<!--- Describe your issue/bug/request in detail -->
With the last change, integration tests are executed even if smoke tests fail; see this example: https://dev.azure.com/best-practices/recommenders/_build/results?buildId=46125&view=logs&j=5264e576-3c6f-51f6-f055-fab409685f20&t=b3018297-00ec-509d-8d8a-45865fb67b06
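In CI terms, the fix presumably amounts to an explicit job dependency so the integration stage never starts when smoke tests fail; an Azure Pipelines sketch along these lines (job names invented, not the repo's actual pipeline):

```yaml
jobs:
  - job: smoke_tests
    steps: []                  # run the smoke suite here
  - job: integration_tests
    dependsOn: smoke_tests
    condition: succeeded()     # do not start if smoke_tests failed
    steps: []                  # run the integration suite here
```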
### In which platform does it happen?
<!--- Describe the platform where the issue is happening (use a list if needed) -->
<!--- For example: -->
<!--- * Azure Data Science Virtual Machine. -->
<!--- * Azure Databricks. -->
<!--- * Other platforms. -->
### How do we replicate the issue?
<!--- Please be specific as possible (use a list if needed). -->
<!--- For example: -->
<!--- * Create a conda environment for pyspark -->
<!--- * Run unit test `test_sar_pyspark.py` with `pytest -m 'spark'` -->
<!--- * ... -->
### Expected behavior (i.e. solution)
<!--- For example: -->
<!--- * The tests for SAR PySpark should pass successfully. -->
### Other Comments
FYI @gramhagen | closed | 2021-04-02T06:39:48Z | 2021-04-08T17:01:50Z | https://github.com/recommenders-team/recommenders/issues/1364 | [
"bug"
] | miguelgfierro | 3 |
fbdesignpro/sweetviz | pandas | 85 | Possible error in the correlations plot header | <img width="850" alt="Screenshot 2021-03-22 at 12 11 44" src="https://user-images.githubusercontent.com/35999411/111966428-f700f880-8b07-11eb-8058-a70afb838f1b.png">
looks like you wanted to say 'column' here instead of 'row' | closed | 2021-03-22T09:13:35Z | 2022-04-16T20:30:28Z | https://github.com/fbdesignpro/sweetviz/issues/85 | [
"documentation"
] | DanilZherebtsov | 2 |
tflearn/tflearn | tensorflow | 650 | Tutorial for one shot learning | I think it would be a good idea to give a simple example of one-shot learning | open | 2017-03-06T09:33:12Z | 2017-05-10T17:22:07Z | https://github.com/tflearn/tflearn/issues/650 | [] | aryopg | 1 |
horovod/horovod | deep-learning | 3,292 | Gradient Checkpointing with Distributed Triplet Loss training and conditional torch.no_grad parts of the code | **Environment:**
1. Framework: PyTorch
2. Framework version: 1.9.0
3. Horovod version: 0.23.0
4. MPI version: 3.1.2
5. CUDA version: 10.1
6. NCCL version:
7. Python version: 3.6.2
8. Spark / PySpark version: -
9. Ray version:
10. OS and version: Linux
11. GCC version:
12. CMake version: 3.18.2
**Checklist:**
1. Did you search issues to find if somebody asked this question before? Yes
2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)? N/A
3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)? N/A
4. Did you check if your question is answered in the [troubleshooting guide](https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)? Yes
**Bug report:**
Hello, I'm training a triplet model for a semantic similarity task. I'm using gradient checkpointing to optimize memory consumption, and I experience a strange bug during training. Essentially, my problem has two sides: on one hand, I'm training the model on the available dataset of triplets; on the other, I keep a holdout reference dataset, which I use every n iterations to pick triplets, based on a reference embeddings matrix computed from the holdout dataset.
During training, the following error appears:
```python
AssertionError: Gradients were computed more than backward_passes_per_step times before call to step(). Increase backward_passes_per_step to accumulate gradients locally.
```
The bug appears in the following fashion:
1. If I turn on gradient checkpointing, don't recalculate the reference embeddings matrix, and train with triplets, it's OK
2. If I turn off gradient checkpointing, do recalculate the reference matrix, and train with triplets, it's OK
3. If I turn on gradient checkpointing, do recalculate the reference matrix, BUT train with the normal loss, not the triplet loss (the commented-out part of the training-loop code), it's OK
4. If I turn on gradient checkpointing, do recalculate the negative reference matrix, and train with triplets, the error appears
The fourth option is the one I'm interested in, but I cannot understand what the true reason for the bug might be: is it gradient checkpointing (if I turn it off there are no errors), or the additional recalculation of reference embeddings in the torch.no_grad part (without it gradient checkpointing works), or the triplet training fashion, with 3 forward passes in the training loop?
The only thing that helped me get option 4 running was to set the parameter `backward_passes_per_step` of `hvd.DistributedOptimizer` to some big number, i.e. 10, though as I understand it this shouldn't be done, as it forces the optimizer to store gradients for 10 steps before performing an actual step on each machine.
I'll be very thankful for any help with this issue. I'm not sure if it's a bug or my fault, and hopefully I'm not putting this issue into the wrong category.
**Reproduce Steps:**
Simplified version of this training scheme, which allows to reproduce the error (I launch the script with `horovodrun -np 2 python test_script_issue.py`):
```python
import os
import copy
import horovod.torch as hvd
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.multiprocessing as mp
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler
from tqdm import tqdm
from functools import partial
from transformers import AutoConfig, AutoModel
def dump_embeddings(dataloader: DataLoader, model: nn.Module, device: torch.device, disable_tqdm: bool) -> torch.Tensor:
model.eval()
vectors = []
for batch in tqdm(dataloader, leave=False, disable=disable_tqdm):
encoded = model(**batch)[0][:, 0, :]
vectors.append(encoded)
vectors = torch.cat(vectors)
model.train()
return vectors
def triplet_loss_fn(anchor, positive, negative, margin, distance_function):
positive_dist = distance_function(anchor, positive)
negative_dist = distance_function(anchor, negative)
output = torch.clamp(positive_dist - negative_dist + margin, min=0.0)
return output
def run_training():
# Initialize Horovod
hvd.init()
torch.set_num_threads(1)
torch.cuda.set_device(hvd.local_rank())
device = torch.device('cuda', hvd.local_rank())
# Define datasets...
train_dataset = [{'sent1': {'input_ids': torch.randint(0, 200, (16, 256)), 'attention_mask': torch.ones(16, 256)}, #for training with triplet loss
'sent2': {'input_ids': torch.randint(0, 200, (16, 256)), 'attention_mask': torch.ones(16, 256)},
'sent3': {'input_ids': torch.randint(0, 200, (16, 256)), 'attention_mask': torch.ones(16, 256)}
}]*8
# train_dataset = [{'input_ids': torch.randint(0, 200, (16, 256)), 'attention_mask': torch.ones(16, 256)}]*8 #for training with normal loss function, not triplet
train_sampler = DistributedSampler(train_dataset, num_replicas=hvd.size(), rank=hvd.rank())
train_loader = DataLoader(train_dataset, batch_size=1, sampler=train_sampler)#, collate_fn = lambda x: {k:v.squeeze(0).to(device) for k,v in x[0].items()}) #for training with normal loss function, not triplet
reference_dataset = [{'input_ids': torch.randint(0, 200, (16, 500)), 'attention_mask': torch.ones(16, 500)}]*8
train_sample_ref = DistributedSampler(reference_dataset, num_replicas=hvd.size(), rank=hvd.rank())
train_loader_ref = DataLoader(reference_dataset, batch_size=1, sampler=train_sample_ref, collate_fn = lambda x: {k:v.squeeze(0).to(device) for k,v in x[0].items()})
# Build model...
metric = lambda x,y: 1.0 - F.cosine_similarity(x, y)
criterion = partial(triplet_loss_fn, distance_function=metric, margin=0.1)
config = AutoConfig.from_pretrained('distilroberta-base')
model = AutoModel.from_pretrained('distilroberta-base', config=config, add_pooling_layer=False)
model.to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=3e-4)
optimizer = hvd.DistributedOptimizer(optimizer, named_parameters=model.named_parameters(), backward_passes_per_step=1)
hvd.broadcast_parameters(model.state_dict(), root_rank=0)
#calculate reference embeddings matrix part:
print('dumping embeddings...')
with torch.no_grad():
reference_embeddings = dump_embeddings(train_loader_ref, model, device, False)
reference_embeddings = hvd.mpi_ops.allgather(reference_embeddings, name='gather_embs')
#make sure all references are calculated before starting the training
hvd.mpi_ops.barrier()
#enabling gradient checkpointing:
model.gradient_checkpointing_enable()
for batch_idx, b in enumerate(train_loader):
print(batch_idx)
optimizer.zero_grad()
# out = model(**b)[0][:, 0, :] #for training with normal loss, not triplet
# loss = out.mean(dim=1).mean()
sent1 = {k:v.squeeze(0).to(device) for k,v in b['sent1'].items()} # for training with triplet loss
sent2 = {k:v.squeeze(0).to(device) for k,v in b['sent2'].items()}
sent3 = {k:v.squeeze(0).to(device) for k,v in b['sent3'].items()}
emb1 = model(**sent1)[0][:, 0, :]
emb2 = model(**sent2)[0][:, 0, :]
emb3 = model(**sent3)[0][:, 0, :]
losses = criterion(emb1, emb2, emb3)
loss = losses.mean()
loss.backward()
optimizer.step()
def main():
run_training()
if __name__ == "__main__":
main()
```
| open | 2021-11-26T14:44:44Z | 2022-04-26T13:20:26Z | https://github.com/horovod/horovod/issues/3292 | [
"bug"
] | rafaljanwojcik | 1 |
custom-components/pyscript | jupyter | 131 | 'TypeError: exceptions must derive from BaseException' when missing argument label | The following code fails with an unhelpful exception:
```python
persistent_notification.create("foo")
```
```
Exception in <jupyter_0> line 1:
persistent_notification.create("foo")
^
TypeError: exceptions must derive from BaseException
```
The correct code is `persistent_notification.create(message="foo")` but the error message doesn't hint at this at all.
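For context, that message is what CPython itself emits whenever something that isn't an exception class or instance gets raised, which suggests pyscript's service-call wrapper ends up raising a non-exception value when the keyword is missing (my assumption, not verified against the pyscript source):

```python
# Plain Python reproduces the same unhelpful message when a non-exception
# object (here a string) is raised:
try:
    raise "missing keyword"  # deliberately raising a non-exception value
except TypeError as err:
    msg = str(err)

print(msg)  # -> "exceptions must derive from BaseException" in CPython
```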
PyScript version: 1.0.0 (eb4dde9c72c5bb25533820082ad69660495389c7)
(Thanks for pyscript, it's a very nice way to write automations and data processing services.) | closed | 2021-01-01T03:43:46Z | 2021-01-01T05:59:41Z | https://github.com/custom-components/pyscript/issues/131 | [] | huonw | 2 |
scikit-optimize/scikit-optimize | scikit-learn | 413 | Do a speed profile | We should do a speed profile, to identify the time taken in each part so as to see if we can speed up some obvious parts. | open | 2017-06-23T04:21:25Z | 2017-07-28T07:11:26Z | https://github.com/scikit-optimize/scikit-optimize/issues/413 | [] | MechCoder | 5 |
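A concrete way to get the speed profile requested above with the standard library, sketched around a stand-in workload (for scikit-optimize one would wrap e.g. a `gp_minimize` call instead):

```python
import cProfile
import io
import pstats

def workload():
    # stand-in for the code under test, e.g. an optimizer run
    return sum(i * i for i in range(100_000))

profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(10)
report = out.getvalue()
print(report)  # top functions by cumulative time
```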
ray-project/ray | python | 50,656 | [Core] Pluggable storage backend besides Redis | ### Description
Redis as the metadata storage backend has its own limitations, e.g. it can only guarantee eventual consistency instead of strong consistency.
It would be nice to be able to extend the storage backend, for the following reasons, as far as I can see:
1. users may prefer availability or consistency over performance
2. better cache engine than Redis
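A pluggable backend would presumably mean a small storage interface that metadata operations go through; the sketch below is purely illustrative (not Ray's actual API), with an in-memory implementation standing in for Redis or an alternative engine:

```python
from abc import ABC, abstractmethod
from typing import Dict, Optional

class MetadataStore(ABC):
    """Hypothetical backend interface; a Redis-backed class and a
    strongly consistent (e.g. etcd-backed) class could both implement it."""

    @abstractmethod
    def put(self, key: str, value: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> Optional[bytes]: ...

    @abstractmethod
    def delete(self, key: str) -> None: ...

class InMemoryStore(MetadataStore):
    def __init__(self) -> None:
        self._data: Dict[str, bytes] = {}

    def put(self, key: str, value: bytes) -> None:
        self._data[key] = value

    def get(self, key: str) -> Optional[bytes]:
        return self._data.get(key)

    def delete(self, key: str) -> None:
        self._data.pop(key, None)
```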
### Use case
_No response_ | open | 2025-02-17T03:15:30Z | 2025-03-22T00:55:10Z | https://github.com/ray-project/ray/issues/50656 | [
"enhancement",
"P2",
"core"
] | zhengy001 | 1 |
numba/numba | numpy | 9,968 | Request for Thread-local Timing Functions to Support Parallel Load Balancing |
# Feature request
## Description:
I'm requesting the addition of timing functions (e.g., support for time.time()) to enable precise execution time measurement within prange parallel loops. This would help implement dynamic load balancing strategies for subsequent parallel executions.
## Use Case:
When using prange with large computational workloads, different threads/chunks may complete their tasks at different rates due to varying input complexity or system resource contention. Currently, numba.set_parallel_chunksize() provides static partitioning that doesn't work well in my case. | closed | 2025-03-10T08:21:19Z | 2025-03-11T18:18:48Z | https://github.com/numba/numba/issues/9968 | [
"duplicate",
"feature_request"
] | game-difficulty | 2 |
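The rebalancing loop the numba request above is aiming at could, once per-chunk timings were available, adjust chunk sizes between runs roughly like this (plain-Python sketch with made-up names, not a numba API):

```python
def rebalance(chunk_sizes, durations):
    """Resize chunks in proportion to each worker's observed speed
    (items per second), keeping the total amount of work constant."""
    total = sum(chunk_sizes)
    speeds = [size / dt for size, dt in zip(chunk_sizes, durations)]
    scale = total / sum(speeds)
    return [max(1, round(speed * scale)) for speed in speeds]

# a chunk that took twice as long gets roughly half the work next round:
print(rebalance([100, 100], [1.0, 2.0]))  # [133, 67]
```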
developmentseed/lonboard | data-visualization | 658 | Enable repeating map view | https://deck.gl/docs/whats-new#world-repeating-in-web-mercator-maps | open | 2024-09-30T13:35:19Z | 2024-09-30T13:35:19Z | https://github.com/developmentseed/lonboard/issues/658 | [] | kylebarron | 0 |
apify/crawlee-python | automation | 706 | Trigger docs build after updating the changelog | Currently the changelog is updated with a `[ci skip]` commits so it only gets incorporated in the docs after a delay. | closed | 2024-11-18T10:19:51Z | 2024-11-22T12:25:08Z | https://github.com/apify/crawlee-python/issues/706 | [
"bug",
"t-tooling",
"infrastructure"
] | janbuchar | 0 |
LAION-AI/Open-Assistant | machine-learning | 2,903 | Feature request: Nearly unlimited token length | I would like to ask if it would be somehow possible AND feasible to implement this paper into openassistant:
https://arxiv.org/pdf/2304.11062.pdf
Even though it shows examples for a BERT-like model, it should somehow be possible to adapt it to a decoder-based method (GPT-like). | open | 2023-04-25T15:05:12Z | 2023-04-25T15:05:50Z | https://github.com/LAION-AI/Open-Assistant/issues/2903 | [
"feature",
"ml"
] | snapo | 0 |
marimo-team/marimo | data-science | 3,332 | persistent_cache raises "AssertionError: Unexpected block" | ### Describe the bug
using `mo.persistent_cache` in what looks to be a pretty straightforward way (that I believe was working fine earlier):
```
with mo.persistent_cache(name="nutrient_estimates"):
nutrient_estimates = [
dispatch_estimate(row) for _, row in df.iterrows()
]
```
is now failing:
```
marimo._save.cache.CacheException: Failure during save.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/gabriel/.cache/uv/archive-v0/l5uPLiRitB8IhPMJat0pz/lib/python3.12/site-packages/marimo/_runtime/executor.py", line 141, in execute_cell
exec(cell.body, glbls)
Cell marimo://app/api/queue/notebooks/nutrient_estimation_tests_marimo.py#cell=cell-14, line 1, in <module>
with mo.persistent_cache(name="nutrient_estimates"):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/gabriel/.cache/uv/archive-v0/l5uPLiRitB8IhPMJat0pz/lib/python3.12/site-packages/marimo/_save/save.py", line 500, in __exit__
raise instance from CacheException("Failure during save.")
Cell marimo://app/api/queue/notebooks/nutrient_estimation_tests_marimo.py#cell=cell-14, line 3, in <module>
dispatch_estimate(row) for _, row in df.iterrows()
^^
File "/home/gabriel/.cache/uv/archive-v0/l5uPLiRitB8IhPMJat0pz/lib/python3.12/site-packages/marimo/_save/save.py", line 438, in _trace
pre_module, save_module = ExtractWithBlock(lineno - 1).visit(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/gabriel/.pyenv/versions/3.12.8/lib/python3.12/ast.py", line 407, in visit
return visitor(node)
^^^^^^^^^^^^^
File "/home/gabriel/.cache/uv/archive-v0/l5uPLiRitB8IhPMJat0pz/lib/python3.12/site-packages/marimo/_save/ast.py", line 113, in generic_visit
return ExtractWithBlock(self.target_line).generic_visit(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/gabriel/.cache/uv/archive-v0/l5uPLiRitB8IhPMJat0pz/lib/python3.12/site-packages/marimo/_save/ast.py", line 125, in generic_visit
assert isinstance(on_line[0], ast.With), "Unexpected block."
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: Unexpected block.
```
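Judging from the `ExtractWithBlock` frames, the extractor seems to trip over a `with` body whose single statement spans multiple lines (my reading, not confirmed); a workaround worth trying is moving the comprehension into a helper so the cached block is one simple line. Runnable sketch with a stand-in context manager in place of `mo.persistent_cache`:

```python
from contextlib import contextmanager

@contextmanager
def persistent_cache(name):
    # stand-in for mo.persistent_cache, only so the sketch runs without marimo
    yield

def compute_estimates(rows):
    # the multi-line list comprehension moves out of the with-body
    return [row * 2 for row in rows]

with persistent_cache(name="nutrient_estimates"):
    nutrient_estimates = compute_estimates([1, 2, 3])

print(nutrient_estimates)  # [2, 4, 6]
```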
### Environment
<details>
```
$ marimo env
{
"marimo": "0.10.9",
"OS": "Linux",
"OS Version": "5.10.102.1-microsoft-standard-WSL2",
"Processor": "x86_64",
"Python Version": "3.12.8",
"Binaries": {
"Browser": "--",
"Node": "v18.18.0"
},
"Dependencies": {
"click": "8.1.8",
"docutils": "0.21.2",
"itsdangerous": "2.2.0",
"jedi": "0.19.2",
"markdown": "3.7",
"narwhals": "1.20.1",
"packaging": "24.2",
"psutil": "6.1.1",
"pygments": "2.18.0",
"pymdown-extensions": "10.13",
"pyyaml": "6.0.2",
"ruff": "0.8.4",
"starlette": "0.45.1",
"tomlkit": "0.13.2",
"typing-extensions": "4.12.2",
"uvicorn": "0.34.0",
"websockets": "14.1"
},
"Optional Dependencies": {}
}
```
</details>
### Code to reproduce
_No response_ | closed | 2025-01-03T06:58:13Z | 2025-01-03T10:12:57Z | https://github.com/marimo-team/marimo/issues/3332 | [
"bug"
] | gabrielgrant | 1 |
quantumlib/Cirq | api | 6,543 | New scipy release breaks the CI through quimb | scipy 1.13.0 was released half an hour ago and is breaking our CI

| closed | 2024-04-02T22:26:43Z | 2024-09-03T19:52:16Z | https://github.com/quantumlib/Cirq/issues/6543 | [
"kind/health",
"triage/accepted"
] | NoureldinYosri | 1 |
tensorflow/tensor2tensor | deep-learning | 1,736 | module 'tensorflow' has no attribute 'to_float' | ### Description
TensorFlow 2.0 has no attribute `to_float`.
### Environment information
```
OS: Windows 10 Home Edition
$ pip freeze | grep tensor
mesh-tensorflow==0.1.4
tensor2tensor==1.14.1
tensorboard==2.0.1
tensorflow==2.0.0
tensorflow-datasets==1.3.0
tensorflow-estimator==2.0.1
tensorflow-gan==2.0.0
tensorflow-hub==0.7.0
tensorflow-metadata==0.15.0
tensorflow-probability==0.7.0
$ python -V
Python 3.7.5
```
### For bugs: reproduction and error logs
```
# Steps to reproduce:
$ conda create -n test python=3.7
$ conda activate test
$ pip install tensor2tensor[tensorflow]
$ python -c "from tensor2tensor.models.transformer import Transformer"
# Error logs:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\me\Anaconda3\envs\nlp\lib\site-packages\tensor2tensor\models\__init__.py", line 25, in <module>
from tensor2tensor.layers import modalities # pylint: disable=g-import-not-at-top
File "C:\Users\me\Anaconda3\envs\nlp\lib\site-packages\tensor2tensor\layers\modalities.py", line 28, in <module>
from tensor2tensor.layers import common_attention
File "C:\Users\me\Anaconda3\envs\nlp\lib\site-packages\tensor2tensor\layers\common_attention.py", line 954, in <module>
def attention_bias_to_padding(attention_bias, cast_fn=tf.to_float):
AttributeError: module 'tensorflow' has no attribute 'to_float'
```
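For context, `tf.to_float(x)` was removed in TensorFlow 2.x in favor of `tf.cast(x, tf.float32)`. A monkeypatch shim applied before importing tensor2tensor is a common stopgap; the sketch below uses a stand-in namespace so it runs without TensorFlow, and whether the shim unblocks the whole tensor2tensor import is untested:

```python
import types

# stand-in for the real tensorflow module; with TF installed you would
# `import tensorflow as tf` and patch that module instead
tf = types.SimpleNamespace(cast=lambda x, dtype: dtype(x), float32=float)

if not hasattr(tf, "to_float"):
    tf.to_float = lambda x: tf.cast(x, tf.float32)  # restore the 1.x alias

print(tf.to_float(3))  # 3.0
```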
| open | 2019-11-05T01:13:53Z | 2022-05-28T04:38:23Z | https://github.com/tensorflow/tensor2tensor/issues/1736 | [] | shuiruge | 3 |
NullArray/AutoSploit | automation | 638 | Unhandled Exception (361856182) | Autosploit version: `3.0`
OS information: `Linux-4.19.0-kali1-amd64-x86_64-with-Kali-kali-rolling-kali-rolling`
Running context: `autosploit.py -c -q *** --proxy socks5://127.0.0.1:9050 --random-agent`
Error message: `SOCKSHTTPSConnectionPool(host='censys.io', port=443): Max retries exceeded with url: /api/v1/search/ipv4 (Caused by NewConnectionError('<urllib3.contrib.socks.SOCKSHTTPSConnection object at 0x7f922b46dad0>: Failed to establish a new connection: [Errno 111] Connection refused',))`
Error traceback:
```
Traceback (most recent call):
File "/root/Desktop/AutoSploit/autosploit/main.py", line 110, in main
AutoSploitParser().single_run_args(opts, loaded_tokens, loaded_exploits)
File "/root/Desktop/AutoSploit/lib/cmdline/cmd.py", line 189, in single_run_args
save_mode=search_save_mode
File "/root/Desktop/AutoSploit/api_calls/censys.py", line 45, in search
raise AutoSploitAPIConnectionError(str(e))
errors: SOCKSHTTPSConnectionPool(host='censys.io', port=443): Max retries exceeded with url: /api/v1/search/ipv4 (Caused by NewConnectionError('<urllib3.contrib.socks.SOCKSHTTPSConnection object at 0x7f922b46dad0>: Failed to establish a new connection: [Errno 111] Connection refused',))
```
Metasploit launched: `False`
| closed | 2019-04-07T15:38:35Z | 2019-04-17T18:33:00Z | https://github.com/NullArray/AutoSploit/issues/638 | [] | AutosploitReporter | 0 |
flairNLP/fundus | web-scraping | 64 | No meaningful value in Article source field | I would expect the `source` field of `Article` to contain information on the article source. I.e. if an article was crawled from welt.de, I would expect `source` to contain the value `DieWelt` or `WELT`. Similarly, if an article was crawled from FAZ I would expect this field to contain the string `FAZ`.
However, when I run this code:
```python
from src.library.collection import PublisherCollection
from src.scraping.pipeline import AutoPipeline
pipeline = AutoPipeline(PublisherCollection.de_de.FAZ)
for article in pipeline.run(max_articles=5):
print(article.source)
```
It just prints:
```console
<src.scraping.crawler.crawler.RSSCrawler object at 0x7f9f2699af10>
<src.scraping.crawler.crawler.RSSCrawler object at 0x7f9f2699af10>
<src.scraping.crawler.crawler.RSSCrawler object at 0x7f9f2699af10>
<src.scraping.crawler.crawler.RSSCrawler object at 0x7f9f2699af10>
<src.scraping.crawler.crawler.RSSCrawler object at 0x7f9f2699af10>
```
i.e. a reference to the crawler object.
Is this desired behavior? Is there any way for me to get from an `Article` the information which source it is from (aside from parsing the `url` field)?
| closed | 2023-03-07T20:15:13Z | 2023-03-09T22:51:20Z | https://github.com/flairNLP/fundus/issues/64 | [
"question"
] | alanakbik | 2 |
pydata/pandas-datareader | pandas | 3 | Decide on the package name | Currently, @hayd changed the package name to `pandas_datareader`. Is everybody satisfied with that? (to be clear: it is about the name that is `import`ed)
Personally, I find it a bit long, and I think I also don't really like the underscore in the name. But of course, it is just personal taste! (I just wanted to bring up the discussion to have this now, and not later, and deliberately decide on this to keep it as it is or to change it).
What about just `datareader`, or `pddatareader` (but that is a bit difficult with the two 'd's)
| closed | 2015-01-15T22:12:58Z | 2015-03-26T03:10:29Z | https://github.com/pydata/pandas-datareader/issues/3 | [] | jorisvandenbossche | 9 |
matplotlib/matplotlib | data-science | 29,229 | [Bug]: Icons do not work with GTK | ### Bug summary
When using GTK as backend, a bunch of warnings are shown in the terminal due to missing icons and the UI does not show icons in the toolbar.
### Code for reproduction
```Python
import matplotlib.pyplot as plt
plt.plot([1, 2, 3, 5])
plt.ylabel('some numbers')
plt.show()
```
### Actual outcome
```
(python:35385): Gtk-WARNING **: 22:07:42.455: Failed to load icon <path_to_project>/venv/lib/python3.13/site-packages/matplotlib/mpl-data/images/home-symbolic.svg: Failed to open file “<path_to_project>/venv/lib/python3.13/site-packages/matplotlib/mpl-data/images/home-symbolic.svg”: No such file or directory
(python:35385): Gtk-WARNING **: 22:07:42.455: Failed to load icon <path_to_project>/venv/lib/python3.13/site-packages/matplotlib/mpl-data/images/back-symbolic.svg: Failed to open file “<path_to_project>/venv/lib/python3.13/site-packages/matplotlib/mpl-data/images/back-symbolic.svg”: No such file or directory
(python:35385): Gtk-WARNING **: 22:07:42.455: Failed to load icon <path_to_project>/venv/lib/python3.13/site-packages/matplotlib/mpl-data/images/forward-symbolic.svg: Failed to open file “<path_to_project>/venv/lib/python3.13/site-packages/matplotlib/mpl-data/images/forward-symbolic.svg”: No such file or directory
(python:35385): Gtk-WARNING **: 22:07:42.455: Failed to load icon<path_to_project>/venv/lib/python3.13/site-packages/matplotlib/mpl-data/images/move-symbolic.svg: Failed to open file “<path_to_project>/venv/lib/python3.13/site-packages/matplotlib/mpl-data/images/move-symbolic.svg”: No such file or directory
(python:35385): Gtk-WARNING **: 22:07:42.455: Failed to load icon <path_to_project>/venv/lib/python3.13/site-packages/matplotlib/mpl-data/images/zoom_to_rect-symbolic.svg: Failed to open file “<path_to_project>/venv/lib/python3.13/site-packages/matplotlib/mpl-data/images/zoom_to_rect-symbolic.svg”: No such file or directory
(python:35385): Gtk-WARNING **: 22:07:42.455: Failed to load icon <path_to_project>/venv/lib/python3.13/site-packages/matplotlib/mpl-data/images/subplots-symbolic.svg: Failed to open file “<path_to_project>/venv/lib/python3.13/site-packages/matplotlib/mpl-data/images/subplots-symbolic.svg”: No such file or directory
(python:35385): Gtk-WARNING **: 22:07:42.455: Failed to load icon <path_to_project>/venv/lib/python3.13/site-packages/matplotlib/mpl-data/images/filesave-symbolic.svg: Failed to open file “<path_to_project>/venv/lib/python3.13/site-packages/matplotlib/mpl-data/images/filesave-symbolic.svg”: No such file or directory
```

### Expected outcome
Icons are shown properly
### Additional information
Steps:
```
python -m venv venv
. venv/bin/activate
pip install -U pip
pip install -U matplotlib
pip install -U PyGObject
# put code for reproduction in a file `main.py`
python main.py
```
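One quick way to confirm the diagnosis would be checking whether the `-symbolic.svg` files actually exist in the installed `mpl-data/images` directory. The sketch below runs against a temporary stand-in directory; with matplotlib installed you would point `img_dir` at `Path(matplotlib.get_data_path()) / "images"` instead:

```python
import tempfile
from pathlib import Path

icons = ["home-symbolic.svg", "back-symbolic.svg", "filesave-symbolic.svg"]

with tempfile.TemporaryDirectory() as tmp:
    img_dir = Path(tmp)                      # stand-in for mpl-data/images
    (img_dir / "home-symbolic.svg").touch()  # pretend only this one shipped
    missing = sorted(p for p in icons if not (img_dir / p).exists())

print(missing)  # ['back-symbolic.svg', 'filesave-symbolic.svg']
```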
### Operating system
Ubuntu
### Matplotlib Version
3.9.3
### Matplotlib Backend
gtk4agg
### Python version
3.13.0
### Jupyter version
_No response_
### Installation
pip | closed | 2024-12-04T21:13:20Z | 2024-12-13T06:09:28Z | https://github.com/matplotlib/matplotlib/issues/29229 | [] | bakku | 9 |