| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
supabase/supabase-py | flask | 464 | Handling the Password Reset for a user by Supabase itself. | **Is your feature request related to a problem? Please describe.**
The function `supabase.auth.reset_password_email(email)` only sends a reset mail to the given email address; it does not actually handle the password reset for that account, unlike Firebase. `reset_password_email()` should not only send a password reset mail but also handle the reset itself via a unique form link specific to that user, so that the user can change their password. I hope that clarifies the problem.
**Describe the solution you'd like**
The function `reset_password_email(email)` should generate a unique Supabase link containing a form to reset the password, and this link should be sent to the given address as the password reset mail. This way Supabase users wouldn't have to worry about handling password resets themselves: they just call this function and the rest is handled for them. ***I would be more than happy to take up this issue and contribute to the Supabase community!***
**Describe alternatives you've considered**
***Firebase Authentication*** is a clear, distinctive alternative for this. It has a function `send_password_reset_email()` that does the same thing described above.
**Additional context**
Do check out this Loom: [Firebase Password Reset Flow](https://www.loom.com/share/211759ff187e4a8f9dd5d14dd434c7e5)
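To make the request concrete, here is a rough sketch of the link-generation step being asked for. The function name and URL shape below are entirely hypothetical illustrations, not the real Supabase API:

```python
import secrets
from urllib.parse import urlencode

def build_reset_link(base_url, email):
    # Hypothetical sketch only; not the real Supabase API. It shows the kind
    # of unique, unguessable, per-user reset link described above.
    token = secrets.token_urlsafe(32)  # one-time reset token
    query = urlencode({"email": email, "token": token})
    return f"{base_url}/reset-password?{query}"
```

The point is that Supabase would generate and serve such a link (and the form behind it) itself, so callers only ever invoke one function.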
| closed | 2023-06-14T11:37:08Z | 2023-06-14T14:47:43Z | https://github.com/supabase/supabase-py/issues/464 | [] | MBSA-INFINITY | 2 |
ageitgey/face_recognition | machine-learning | 786 | I made a dataset with about 200 photos; 40% of them don't have faces in them (nature pictures) | I made a dataset with about 200 photos. 40% of them don't have faces in them (nature pictures).
I enter this command
face_recognition --show-distance true ./pictures_of_people_i_know/ ./unknown_pictures/
And the script gives me a lot of warnings from the "./pictures_of_people_i_know/" folder, like
WARNING: No faces found in ./pictures_of_people_i_know/******. Ignoring file.
WARNING: No faces found in ./pictures_of_people_i_know/******. Ignoring file.
WARNING: No faces found in ./pictures_of_people_i_know/******. Ignoring file.
So I need a function to delete these photos from my folder, not just a warning. I NEED TO DELETE THEM
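For reference, a small cleanup helper along these lines could be built on top of the library's Python API. This is only a sketch: the `face_recognition` wiring is shown in the comment (it assumes the library is installed), and deletion is irreversible, so verify before running it on real data:

```python
import os

def find_files_without_faces(folder, count_faces):
    """Return paths of images in `folder` for which count_faces(path) is 0."""
    return [
        os.path.join(folder, name)
        for name in sorted(os.listdir(folder))
        if count_faces(os.path.join(folder, name)) == 0
    ]

# Wiring it to the face_recognition library (assumes it is installed):
#
#   import face_recognition
#   def count_faces(path):
#       image = face_recognition.load_image_file(path)
#       return len(face_recognition.face_locations(image))
#
#   for path in find_files_without_faces("./pictures_of_people_i_know/", count_faces):
#       os.remove(path)  # irreversible; consider moving to a trash folder instead
```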
How can I do this? | open | 2019-03-27T01:34:56Z | 2019-03-27T02:55:07Z | https://github.com/ageitgey/face_recognition/issues/786 | [] | xSNYPSx | 1 |
streamlit/streamlit | deep-learning | 9,984 | `st.altair_chart` does not render at a proper size if the title is too long | ### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [X] I added a very descriptive title to this issue.
- [X] I have provided sufficient information below to help reproduce this issue.
### Summary
`st.altair_chart` fails to properly display an `alt.Chart` whose title exceeds the container width.
When using `st.altair_chart`, charts with titles that do not fit within the container width are rendered poorly.
In the example below, two `st.altair_chart` instances use the same `alt.Chart` object. The first chart has sufficient space to display the entire title, resulting in a clear presentation. In contrast, the second chart has limited space, causing the title to be truncated and the chart to appear distorted.
<img width="652" alt="image" src="https://github.com/user-attachments/assets/3f520e5e-7677-4d68-a6d0-c7c66ffb05cd">
If the title length is increased enough to affect the first chart as well, it also renders poorly, indicating that the issue is not related to the `use_container_width` parameter of `st.altair_chart`.
### Reproducible Code Example
[](https://issues.streamlitapp.com/?issue=gh-9984)
```Python
# create a very basic alt.chart
import streamlit as st
import altair as alt
import pandas as pd
# Create a simple dataframe
df = pd.DataFrame({"x": [1, 2, 3, 4, 5], "y": [10, 20, 30, 40, 50]})
# Create a simple chart
chart = (
alt.Chart(
data=df,
title="Lorem ipsum dolor sit amet, consectetur adipiscing elit. Sed nec purus euismod, ultricies nunc nec, ultricies nunc.",
)
.mark_line()
.encode(x="x", y="y")
)
# Render the chart
st.altair_chart(chart, use_container_width=True)
st.altair_chart(chart, use_container_width=False)
```
### Steps To Reproduce
Run the previous code
### Expected Behavior
The chart should always be displayed at the appropriate width.
If the title exceeds the available space, it should be truncated at the container's width limit without affecting the chart.
### Current Behavior
If the `alt.Chart` title is too long, the title is truncated (which is good), but the chart is rendered with an incorrect width, leading to a poor visual presentation.
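Until the sizing is fixed, a crude user-side stopgap is to clamp the title string in Python before building the chart. The 80-character cut-off below is an arbitrary illustration, not a Streamlit or Altair setting:

```python
def clamp_title(title, max_chars=80):
    # Workaround sketch: truncate long titles ourselves so the chart keeps
    # its proper width; max_chars is an arbitrary choice, tune per layout.
    return title if len(title) <= max_chars else title[: max_chars - 1] + "…"
```

For example, `alt.Chart(data=df, title=clamp_title(long_title))` would keep the rendered chart at its normal width at the cost of losing the tail of the title.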
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.40.2
- Python version: 3.10.11
- Operating System: Windows 11
- Browser: Chrome
### Additional Information | open | 2024-12-09T16:37:58Z | 2024-12-17T22:55:03Z | https://github.com/streamlit/streamlit/issues/9984 | [
"type:bug",
"status:confirmed",
"priority:P3",
"feature:st.altair_chart"
] | RubenCata | 2 |
plotly/dash-core-components | dash | 828 | [BUG] dcc.Location search loses quoting | Thank you so much for helping improve the quality of Dash!
We do our best to catch bugs during the release process, but we rely on your help to find the ones that slip through.
**Describe your context**
Please provide us your environment so we can easily reproduce the issue.
- replace the result of `pip list | grep dash` below
```
dash 1.13.2
dash-bootstrap-components 0.7.2
dash-core-components 1.10.1
dash-daq 0.2.1
dash-google-auth 0.1.2
dash-html-components 1.0.3
dash-renderer 1.5.0
dash-table 4.8.0
```
- if frontend related, tell us your Browser, Version and OS
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
**Describe the bug**
If I set the `search` property of a `dcc.Location` element via a callback, the argument loses proper URL quoting.
**Expected behavior**
I set `'?arg=value%20with%20spaces'`
But then see `?arg=value with spaces` in the navigation bar of my browser
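For reference, the value I set is exactly what `urllib.parse.quote` produces, and unquoting it yields the string the browser ends up showing, which suggests the search value is being decoded somewhere along the way:

```python
from urllib.parse import quote, unquote

# What I set in the callback: a properly %-encoded query string.
encoded = "?arg=" + quote("value with spaces")
assert encoded == "?arg=value%20with%20spaces"
# What the browser navigation bar ends up showing: the decoded form.
assert unquote(encoded) == "?arg=value with spaces"
```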
**Screenshots**
If applicable, add screenshots or screen recording to help explain your problem.
| open | 2020-06-19T15:17:03Z | 2020-06-19T15:42:25Z | https://github.com/plotly/dash-core-components/issues/828 | [] | jauerb | 1 |
python-restx/flask-restx | api | 23 | Using GitHub Actions instead of Travis CI | I would like to propose using GitHub Actions instead of Travis CI.
In my opinion and from my experience, GitHub Actions works better and a bit faster than Travis CI.
The most important enhancement is that everything is stored on GitHub.
GitHub workflows allow testing on Linux, Mac and Windows VMs for free for open source projects.
All developers can test their code in their forks just by pushing code to a branch, opening the Actions tab (in their fork, not here) and seeing the results.
Examples of the Push and Pull Request checks are here: https://github.com/SVilgelm/flask-restx/pull/3/checks
It's a PR to the master branch of my fork.
Example of the release workflow: https://github.com/SVilgelm/flask-restx/runs/410117918?check_suite_focus=true
and the pypi: https://pypi.org/project/flask-restx-svilgelm-test/99.99.99/
So, let's discuss. | closed | 2020-01-29T21:24:55Z | 2020-01-31T16:23:11Z | https://github.com/python-restx/flask-restx/issues/23 | [] | SVilgelm | 2 |
kymatio/kymatio | numpy | 852 | Unnecessary arguments in `core/scattering1d/scattering1d` | Not a big priority, but an easy simplification. The current prototype of `scattering1d` is (phew!):
```python
scattering1d(x, pad, unpad, backend, log2_T, psi1, psi2, phi, pad_left=0,
pad_right=0, ind_start=None, ind_end=None, oversampling=0,
max_order=2, average=True, vectorize=False, out_type='array')
```
This didn't come to my attention while reviewing #673, but the 5th input parameter, `log2_T`, is unnecessary: it is already available as `phi["j"]`. So we could just remove it and put `log2_T = phi["j"]` at the top of the function.
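Concretely, the proposed simplification could look like the following sketch (the signature copied from above minus `log2_T`; the body is elided):

```python
def scattering1d(x, pad, unpad, backend, psi1, psi2, phi, pad_left=0,
                 pad_right=0, ind_start=None, ind_end=None, oversampling=0,
                 max_order=2, average=True, vectorize=False, out_type='array'):
    # log2_T dropped from the signature: recover it from the phi filter dict.
    log2_T = phi["j"]
    ...
```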
Thoughts?
| closed | 2022-06-07T16:20:09Z | 2022-06-12T22:16:23Z | https://github.com/kymatio/kymatio/issues/852 | [
"good first issue",
"1D"
] | lostanlen | 3 |
Escape-Technologies/graphinder | graphql | 12 | [feat] Python 3.7 typing compliance | Migrate to a 3.7 typing compliance version of graphinder. | closed | 2022-07-19T21:41:10Z | 2022-07-26T10:27:35Z | https://github.com/Escape-Technologies/graphinder/issues/12 | [
"enhancement"
] | nullswan | 0 |
widgetti/solara | jupyter | 222 | API docs missing for Details | see https://github.com/widgetti/solara/issues/221
https://github.com/widgetti/solara/pull/185 is a good template of what it should look like | open | 2023-07-28T17:36:36Z | 2024-04-01T15:31:45Z | https://github.com/widgetti/solara/issues/222 | [
"documentation",
"good first issue",
"help wanted"
] | maartenbreddels | 1 |
polarsource/polar | fastapi | 5,202 | Customers: Delete customer from dashboard | Ability to delete a customer from within the dashboard.
Wait on: [https://github.com/polarsource/polar/issues/5142](https://github.com/polarsource/polar/issues/5142)
So we can fix that bug and disallow deletion of customers who have purchased something, i.e. have an order/benefit. | closed | 2025-03-07T13:19:09Z | 2025-03-12T13:42:46Z | https://github.com/polarsource/polar/issues/5202 | [] | birkjernstrom | 0 |
tfranzel/drf-spectacular | rest-api | 752 | Question: How to deal with a generated API ? | Hi! Thanks for your project, it works well out of the box.
This is not an issue but more a request for help to properly integrate drf-spectacular into my project.
I have this special endpoint that asks for a query parameter. Depending on these parameters, which are stored in the database (they are model names), the endpoint will have very different validation criteria for the chosen model.
I know I can define parameters with `@extend_schema`; however, I don't know them in advance since they are user-defined models in a multi-tenant environment.
Is it possible to automatically generate documentation with drf-spectacular based on data in the database?
plotly/dash | data-visualization | 2,650 | Multipage App + Background Callback Changes the Page | Issue steps:
- Trigger a background callback on Page X
- Change page to Y
- The background callback on Page X finishes and changes the browser to Page X
A background callback finishing shouldn't change the page unless a `dcc.Location` is in the output.
| open | 2023-10-02T14:54:17Z | 2024-08-13T19:38:22Z | https://github.com/plotly/dash/issues/2650 | [
"bug",
"P3"
] | IstvanM | 0 |
polarsource/polar | fastapi | 4,778 | Subscriptions (Customer Portal): Prorate immediately in case of annual subscription changes | Prerequisite: https://github.com/polarsource/polar/issues/4777
Better default to ensure all yearly upgrades are prorated immediately. Not an issue in case of an upgrade from monthly to yearly, but from year to year it can be a much higher risk.
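To illustrate why year-to-year changes carry the higher risk, here is the usual proration arithmetic in a toy sketch. This is illustrative only, not Polar's actual billing code:

```python
def prorated_upgrade_charge(old_price, new_price, days_left, period_days=365):
    # Credit the unused fraction of the old annual plan against the new price.
    credit = old_price * days_left / period_days
    return round(new_price - credit, 2)

# Early in the old year, almost the full old price is credited; late in the
# year, almost nothing is. Without immediate proration, an annual change can
# therefore misbill by up to close to a full annual price.
```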
| open | 2025-01-03T14:01:35Z | 2025-02-11T09:07:23Z | https://github.com/polarsource/polar/issues/4778 | [
"feature",
"changelog"
] | birkjernstrom | 1 |
supabase/supabase-py | fastapi | 522 | supabase-py through an outbound HTTP proxy | I'd like to use supabase-py in an app that is deployed behind an HTTP proxy. It seems supabase-py does not use the system proxy settings to route requests. Is there a way to specify an HTTP proxy to be used by the client? | closed | 2023-08-21T09:40:19Z | 2023-08-21T11:59:21Z | https://github.com/supabase/supabase-py/issues/522 | [] | theveloped | 2 |
plotly/plotly.py | plotly | 5,009 | connection style for arrows | It would be nice to have orthogonal and other types of connection style arrows when adding annotations. Matplotlib has something like:
```
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.plot([1, 2, 3], [4, 5, 2])
# Annotate with a connection line
ax.annotate("Important Point", xy=(2, 5), xytext=(1.5, 5.5),
arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=0.2"))
plt.show()
```
This would be useful for linking bars in a timeline, for example. Right now, if you annotate a bar chart you can only add straight arrows. | open | 2025-02-03T15:27:27Z | 2025-02-03T15:46:08Z | https://github.com/plotly/plotly.py/issues/5009 | [
"feature",
"P3"
] | chaffra | 1 |
ymcui/Chinese-LLaMA-Alpaca-2 | nlp | 66 | Error during multi-GPU fine-tuning: ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -7) local_rank: 0 (pid: 1280) of binary: /root/miniconda3/envs/llama2/bin/python | ### Pre-submission checklist
- [X] Please make sure you are using the latest code from the repository (git pull); some issues have already been resolved and fixed.
- [X] I have read the [project documentation](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki) and the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki/常见问题) and searched the existing issues without finding a similar problem or solution.
- [X] Third-party plugin issues: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), etc.; it is also recommended to look for solutions in the corresponding projects.
### Issue type
Model training and fine-tuning
### Base model
Alpaca-2-7B
### Operating system
Linux
### Detailed description of the problem
An error occurs during multi-GPU training. Runtime environment: Docker, CUDA 11.6, 4x 24 GB A6000 GPUs, Python 3.10.
```
######## Parameters ########
lr=1e-4
lora_rank=64
lora_alpha=128
lora_trainable="q_proj,v_proj,k_proj,o_proj,gate_proj,down_proj,up_proj"
modules_to_save="embed_tokens,lm_head"
lora_dropout=0.05
pretrained_model=/root/.cache/huggingface/chinese-alpaca-2-7b-hf
chinese_tokenizer_path=/root/.cache/huggingface/chinese-alpaca-2-7b-hf
dataset_dir=/root/.cache/huggingface/data/merge.json
per_device_train_batch_size=1
per_device_eval_batch_size=1
gradient_accumulation_steps=1
output_dir=output_dir
validation_file=/root/.cache/huggingface/data/merge.json
max_seq_length=1024
deepspeed_config_file=ds_zero2_no_offload.json
######## Launch command ########
torchrun --nnodes 1 --nproc_per_node 4 run_clm_sft_with_peft.py \
--deepspeed ${deepspeed_config_file} \
--model_name_or_path ${pretrained_model} \
--tokenizer_name_or_path ${chinese_tokenizer_path} \
--dataset_dir ${dataset_dir} \
--validation_split_percentage 0.001 \
--per_device_train_batch_size ${per_device_train_batch_size} \
--per_device_eval_batch_size ${per_device_eval_batch_size} \
--do_train \
--do_eval \
--seed $RANDOM \
--fp16 \
--num_train_epochs 2 \
--lr_scheduler_type cosine \
--learning_rate ${lr} \
--warmup_ratio 0.03 \
--weight_decay 0 \
--logging_strategy steps \
--logging_steps 10 \
--save_strategy steps \
--save_total_limit 3 \
--evaluation_strategy steps \
--eval_steps 250 \
--save_steps 500 \
--gradient_accumulation_steps ${gradient_accumulation_steps} \
--preprocessing_num_workers 8 \
--max_seq_length ${max_seq_length} \
--output_dir ${output_dir} \
--overwrite_output_dir \
--ddp_timeout 30000 \
--logging_first_step True \
--lora_rank ${lora_rank} \
--lora_alpha ${lora_alpha} \
--trainable ${lora_trainable} \
--modules_to_save ${modules_to_save} \
--lora_dropout ${lora_dropout} \
--torch_dtype float16 \
--validation_file ${validation_file} \
--gradient_checkpointing \
--ddp_find_unused_parameters False
```
### Dependencies (required for code-related issues)
```
peft 0.3.0.dev0
torch 2.0.1
transformers 4.31.0
```
### Run log or screenshot
```
(llama2) root@eb03b13bd90d:~/Chinese-LLaMA-Alpaca-2/scripts/training# bash run_sft.sh
WARNING:torch.distributed.run:
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
[2023-08-03 08:13:01,087] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2023-08-03 08:13:01,106] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2023-08-03 08:13:01,202] [WARNING] [comm.py:152:init_deepspeed_backend] NCCL backend in DeepSpeed not yet implemented
[2023-08-03 08:13:01,202] [INFO] [comm.py:616:init_distributed] cdb=None
[2023-08-03 08:13:01,202] [INFO] [comm.py:643:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
[2023-08-03 08:13:01,219] [WARNING] [comm.py:152:init_deepspeed_backend] NCCL backend in DeepSpeed not yet implemented
[2023-08-03 08:13:01,219] [INFO] [comm.py:616:init_distributed] cdb=None
08/03/2023 08:13:01 - WARNING - __main__ - Process rank: 1, device: cuda:1, n_gpu: 1distributed training: True, 16-bits training: True
[WARNING|logging.py:295] 2023-08-03 08:13:01,415 >> You are using the legacy behaviour of the <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>. This means that tokens that come after special tokens will not be properly handled. We recommend you to read the related pull request available at https://github.com/huggingface/transformers/pull/24565
08/03/2023 08:13:01 - WARNING - __main__ - Process rank: 0, device: cuda:0, n_gpu: 1distributed training: True, 16-bits training: True
[INFO|configuration_utils.py:710] 2023-08-03 08:13:01,818 >> loading configuration file /root/.cache/huggingface/chinese-alpaca-2-7b-hf/config.json
[INFO|configuration_utils.py:768] 2023-08-03 08:13:01,818 >> Model config LlamaConfig {
"_name_or_path": "/root/.cache/huggingface/chinese-alpaca-2-7b-hf",
"architectures": [
"LlamaForCausalLM"
],
"bos_token_id": 1,
"eos_token_id": 2,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 11008,
"max_length": 4096,
"max_position_embeddings": 4096,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 32,
"pad_token_id": 0,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": null,
"tie_word_embeddings": false,
"torch_dtype": "float16",
"transformers_version": "4.31.0",
"use_cache": true,
"vocab_size": 55296
}
[INFO|tokenization_utils_base.py:1837] 2023-08-03 08:13:01,818 >> loading file tokenizer.model
[INFO|tokenization_utils_base.py:1837] 2023-08-03 08:13:01,818 >> loading file added_tokens.json
[INFO|tokenization_utils_base.py:1837] 2023-08-03 08:13:01,818 >> loading file special_tokens_map.json
[INFO|tokenization_utils_base.py:1837] 2023-08-03 08:13:01,818 >> loading file tokenizer_config.json
[WARNING|logging.py:295] 2023-08-03 08:13:01,819 >> You are using the legacy behaviour of the <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>. This means that tokens that come after special tokens will not be properly handled. We recommend you to read the related pull request available at https://github.com/huggingface/transformers/pull/24565
08/03/2023 08:13:01 - INFO - __main__ - Training files:
08/03/2023 08:13:01 - WARNING - root - building dataset...
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -7) local_rank: 0 (pid: 1280) of binary: /root/miniconda3/envs/llama2/bin/python
Traceback (most recent call last):
File "/root/miniconda3/envs/llama2/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/root/miniconda3/envs/llama2/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
return f(*args, **kwargs)
File "/root/miniconda3/envs/llama2/lib/python3.10/site-packages/torch/distributed/run.py", line 794, in main
run(args)
File "/root/miniconda3/envs/llama2/lib/python3.10/site-packages/torch/distributed/run.py", line 785, in run
elastic_launch(
File "/root/miniconda3/envs/llama2/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/root/miniconda3/envs/llama2/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 250, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
====================================================
run_clm_sft_with_peft.py FAILED
----------------------------------------------------
Failures:
[1]:
time : 2023-08-03_08:13:04
host : eb03b13bd90d
rank : 1 (local_rank: 1)
exitcode : -7 (pid: 1281)
error_file: <N/A>
traceback : Signal 7 (SIGBUS) received by PID 1281
----------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2023-08-03_08:13:04
host : eb03b13bd90d
rank : 0 (local_rank: 0)
exitcode : -7 (pid: 1280)
error_file: <N/A>
traceback : Signal 7 (SIGBUS) received by PID 1280
====================================================
``` | closed | 2023-08-03T08:17:23Z | 2023-12-04T01:01:44Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/issues/66 | [
"stale"
] | thugbobby | 6 |
saulpw/visidata | pandas | 2,359 | Aggregator with `+` not appearing in frequency table _unless_ selected after typing it out | **Small description**
When choosing an aggregator via `+`, the aggregator is not added to the frequency table if it is selected via `<Tab>` scrolling, _unless_ it is first selected by typing it out.
**Expected result**
The aggregator is added to frequency tables even if selected via `<Tab>` scrolling.
**Actual result with screenshot**
See the recording below. Explanation: consider the file below as `test.csv`
```
class,number
A,1
A,5
B,4
B,3
```
and let us assume we want to sum the values of the column `number` grouped by `class`. In the first attempt below, I open the aggregator menu via `+` and "scroll down" until I select the relevant function (in this case `sum`): moving to the key column, I then access the frequency table via `shift+F`, only to find that the aggregator is not there.
Subsequently, instead of scrolling down to the relevant aggregator, I type it out manually (say `s-u-m`) and _then_ select it: everything works as intended in this case and, as you can see, the aggregator appears in the frequency table as needed.
https://github.com/saulpw/visidata/assets/15387611/b78ddfcf-a5ed-4eee-aaa0-3b74d412206b
**Additional context**
```
python3 -V
Python 3.12.2
vd -v
saul.pw/VisiData v3.0
```
on a
```
OS macOS 12.7.4 Monterey
```
P.S. It seems as if scrolling does not really "select" anything; the selection only happens if the menu element is typed out.
Anyway, as always, thank you for the excellent work you people are putting into `visidata`: these little problems are nothing compared to how much work we are getting done using your software. | closed | 2024-03-24T21:12:03Z | 2024-03-27T17:06:10Z | https://github.com/saulpw/visidata/issues/2359 | [
"bug",
"fixed"
] | gennaro-tedesco | 2 |
deepspeedai/DeepSpeed | deep-learning | 7,132 | [BUG] AttributeError: 'FusedAdam' object has no attribute 'refresh_fp32_params' | I loaded a pretrained checkpoint using the following code:
```
model_engine.load_checkpoint(
checkpoint_dir,
ckpt_id,
load_optimizer_states=False,
load_lr_scheduler_states=False,
load_module_only=True
)
```
However, I encountered the following error:
```
...
File "/opt/conda/envs/ptca/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 2815, in load_checkpoint
load_path, client_states = self._load_checkpoint(load_dir,
File "/opt/conda/envs/ptca/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 2909, in _load_checkpoint
self.optimizer.refresh_fp32_params()
AttributeError: 'FusedAdam' object has no attribute 'refresh_fp32_params'
```
A few details about my setup:
- I did not enable ZeRO or FP16.
- This issue did not occur in DeepSpeed 0.9.
After checking the source code, I noticed a change that might be related to this issue. In [this commit](https://github.com/deepspeedai/DeepSpeed/commit/870ae041d42190be8139afc12bef51d6ed7719f3), the following code:
```
if self.optimizer is not None and self.fp16_enabled():
self.optimizer.refresh_fp32_params()
```
was changed to:
```
if self.optimizer is not None:
self.optimizer.refresh_fp32_params()
```
It seems that the FusedAdam optimizer does not have the method `refresh_fp32_params()`.
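One defensive shape for that call site (a sketch of a possible guard, not DeepSpeed's actual fix) would be to check for the attribute before calling, so plain optimizers without the fp16/ZeRO wrapper are skipped:

```python
class FusedAdamStub:
    """Stand-in for the real FusedAdam, which lacks refresh_fp32_params."""

def maybe_refresh_fp32_params(optimizer):
    # Only call refresh_fp32_params when the optimizer wrapper provides it,
    # instead of assuming every optimizer is an fp16/ZeRO wrapper.
    if optimizer is not None and hasattr(optimizer, "refresh_fp32_params"):
        optimizer.refresh_fp32_params()
        return True
    return False
```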
`ds_report` output:
```
--------------------------------------------------
DeepSpeed C++/CUDA extension op report
--------------------------------------------------
NOTE: Ops not installed will be just-in-time (JIT) compiled at
runtime if needed. Op compatibility means that your system
meet the required dependencies to JIT install the op.
--------------------------------------------------
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
async_io ............... [NO] ....... [OKAY]
fused_adam ............. [NO] ....... [OKAY]
cpu_adam ............... [NO] ....... [OKAY]
cpu_adagrad ............ [NO] ....... [OKAY]
cpu_lion ............... [NO] ....... [OKAY]
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
evoformer_attn ......... [NO] ....... [NO]
fp_quantizer ........... [NO] ....... [OKAY]
fused_lamb ............. [NO] ....... [OKAY]
fused_lion ............. [NO] ....... [OKAY]
2025-03-11 17:02:35,003 root [INFO] - gcc -pthread -B /opt/conda/envs/ptca/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /opt/conda/envs/ptca/include -fPIC -O2 -isystem /opt/conda/envs/ptca/include -fPIC -c /tmp/tmp45k1f5pw/test.c -o /tmp/tmp45k1f5pw/test.o
2025-03-11 17:02:35,032 root [INFO] - gcc -pthread -B /opt/conda/envs/ptca/compiler_compat /tmp/tmp45k1f5pw/test.o -L/usr/local/cuda -L/usr/local/cuda/lib64 -lcufile -o /tmp/tmp45k1f5pw/a.out
/usr/bin/ld: /usr/local/cuda/lib64/libcufile.so: undefined reference to `dlvsym'
/usr/bin/ld: /usr/local/cuda/lib64/libcufile.so: undefined reference to `dlopen'
/usr/bin/ld: /usr/local/cuda/lib64/libcufile.so: undefined reference to `dlclose'
/usr/bin/ld: /usr/local/cuda/lib64/libcufile.so: undefined reference to `dlerror'
/usr/bin/ld: /usr/local/cuda/lib64/libcufile.so: undefined reference to `dlsym'
collect2: error: ld returned 1 exit status
gds .................... [NO] ....... [NO]
inference_core_ops ..... [NO] ....... [OKAY]
cutlass_ops ............ [NO] ....... [OKAY]
transformer_inference .. [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
ragged_device_ops ...... [NO] ....... [OKAY]
ragged_ops ............. [NO] ....... [OKAY]
random_ltd ............. [NO] ....... [OKAY]
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.4
[WARNING] using untested triton version (3.0.0), only 1.0.0 is known to be compatible
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch install path ............... ['/opt/conda/envs/ptca/lib/python3.10/site-packages/torch']
torch version .................... 2.4.1
deepspeed install path ........... ['/opt/conda/envs/ptca/lib/python3.10/site-packages/deepspeed']
deepspeed info ................... 0.15.1, unknown, unknown
torch cuda version ............... 12.4
torch hip version ................ None
nvcc version ..................... 12.4
deepspeed wheel compiled w. ...... torch 2.4, cuda 12.4
shared memory (/dev/shm) size .... 209.00 GB
```
System info:
- OS: Ubuntu 20.04
- GPU count and types: one machine with x2 A100
- Python version: 3.10.15
Unfortunately it’s a bit difficult for me to provide a minimal reproducible script, but I hope the information above is sufficient. I think it might be an issue with how I’m using it, but I’m not quite familiar with DeepSpeed and maybe you can spot the problem. Thanks!
| closed | 2025-03-11T17:09:33Z | 2025-03-14T18:55:29Z | https://github.com/deepspeedai/DeepSpeed/issues/7132 | [
"bug",
"training"
] | jinwx | 0 |
lukas-blecher/LaTeX-OCR | pytorch | 256 | Can't run latexocr | I got to know about this recently and wanted to give it a try. `pix2tex` command works fine, but `latexocr` command gives this error
`qt.dbus.integration: Could not connect "org.freedesktop.IBus" to globalEngineChanged(QString)
Sandboxing disabled by user.
The Wayland connection experienced a fatal error: Protocol error`
Can someone help me with this issue?
Thanks! | open | 2023-04-14T17:57:50Z | 2023-06-07T01:34:20Z | https://github.com/lukas-blecher/LaTeX-OCR/issues/256 | [] | Anirudh-Srivastha-Nemmani | 1 |
hpcaitech/ColossalAI | deep-learning | 5,960 | [fp8] support low level zero | closed | 2024-08-02T03:14:13Z | 2024-08-07T03:18:28Z | https://github.com/hpcaitech/ColossalAI/issues/5960 | [] | ver217 | 2 | |
kizniche/Mycodo | automation | 631 | error 500 when selecting function tab | ## Mycodo Issue Report:
- Specific Mycodo Version:
Mycodo Version: 7.2.3
Python Version: 3.5.3 (default, Jan 19 2017, 14:11:04) [GCC 6.3.0 20170124]
Database Version: 2976b41930ad
Daemon Status: Running
Daemon Process ID: 10805
Daemon RAM Usage: 70.248 MB
Daemon Virtualenv: Yes
Frontend RAM Usage: 51.476 MB
Frontend Virtualenv: Yes
#### Problem Description
Error 500 when selecting function tab. Happens on 2 rpi 3's after upgrading.
### Error
Daemon log:
2019-02-20 13:57:44,542 - mycodo.linux_command_9c25ea72 - ERROR - The command returned a non-numerical value. Ensure only one numerical value is returned by the command.
2019-02-20 13:57:49,749 - mycodo.linux_command_9c25ea72 - ERROR - The command returned a non-numerical value. Ensure only one numerical value is returned by the command.
2019-02-20 13:57:49,770 - mycodo.input_9c25ea72 - ERROR - StopIteration raised. Possibly could not read input. Ensure it's connected properly and detected.
2019-02-20 13:57:54,545 - mycodo.linux_command_9c25ea72 - ERROR - The command returned a non-numerical value. Ensure only one numerical value is returned by the command.
2019-02-20 13:57:59,542 - mycodo.linux_command_9c25ea72 - ERROR - The command returned a non-numerical value. Ensure only one numerical value is returned by the command.
2019-02-20 13:58:04,578 - mycodo.linux_command_9c25ea72 - ERROR - The command returned a non-numerical value. Ensure only one numerical value is returned by the command.
2019-02-20 13:58:04,580 - mycodo.input_9c25ea72 - ERROR - StopIteration raised. Possibly could not read input. Ensure it's connected properly and detected.
2019-02-20 13:58:09,546 - mycodo.linux_command_9c25ea72 - ERROR - The command returned a non-numerical value. Ensure only one numerical value is returned by the command.
2019-02-20 13:58:14,543 - mycodo.linux_command_9c25ea72 - ERROR - The command returned a non-numerical value. Ensure only one numerical value is returned by the command.
2019-02-20 13:58:19,638 - mycodo.linux_command_9c25ea72 - ERROR - The command returned a non-numerical value. Ensure only one numerical value is returned by the command.
2019-02-20 13:58:19,640 - mycodo.input_9c25ea72 - ERROR - StopIteration raised. Possibly could not read input. Ensure it's connected properly and detected.
2019-02-20 13:58:24,545 - mycodo.linux_command_9c25ea72 - ERROR - The command returned a non-numerical value. Ensure only one numerical value is returned by the command.
2019-02-20 13:58:29,542 - mycodo.linux_command_9c25ea72 - ERROR - The command returned a non-numerical value. Ensure only one numerical value is returned by the command.
2019-02-20 13:58:34,571 - mycodo.linux_command_9c25ea72 - ERROR - The command returned a non-numerical value. Ensure only one numerical value is returned by the command.
2019-02-20 13:58:34,572 - mycodo.input_9c25ea72 - ERROR - StopIteration raised. Possibly could not read input. Ensure it's connected properly and detected.
2019-02-20 13:58:39,551 - mycodo.linux_command_9c25ea72 - ERROR - The command returned a non-numerical value. Ensure only one numerical value is returned by the command.
2019-02-20 13:58:44,540 - mycodo.linux_command_9c25ea72 - ERROR - The command returned a non-numerical value. Ensure only one numerical value is returned by the command.
2019-02-20 13:58:49,690 - mycodo.linux_command_9c25ea72 - ERROR - The command returned a non-numerical value. Ensure only one numerical value is returned by the command.
2019-02-20 13:58:49,692 - mycodo.input_9c25ea72 - ERROR - StopIteration raised. Possibly could not read input. Ensure it's connected properly and detected.
2019-02-20 13:58:54,542 - mycodo.linux_command_9c25ea72 - ERROR - The command returned a non-numerical value. Ensure only one numerical value is returned by the command.
2019-02-20 13:58:59,536 - mycodo.linux_command_9c25ea72 - ERROR - The command returned a non-numerical value. Ensure only one numerical value is returned by the command.
2019-02-20 13:59:04,541 - mycodo.linux_command_9c25ea72 - ERROR - The command returned a non-numerical value. Ensure only one numerical value is returned by the command.
2019-02-20 13:59:04,542 - mycodo.input_9c25ea72 - ERROR - StopIteration raised. Possibly could not read input. Ensure it's connected properly and detected.
2019-02-20 13:59:09,538 - mycodo.linux_command_9c25ea72 - ERROR - The command returned a non-numerical value. Ensure only one numerical value is returned by the command.
2019-02-20 13:59:14,536 - mycodo.linux_command_9c25ea72 - ERROR - The command returned a non-numerical value. Ensure only one numerical value is returned by the command.
2019-02-20 13:59:19,580 - mycodo.linux_command_9c25ea72 - ERROR - The command returned a non-numerical value. Ensure only one numerical value is returned by the command.
2019-02-20 13:59:19,585 - mycodo.input_9c25ea72 - ERROR - StopIteration raised. Possibly could not read input. Ensure it's connected properly and detected.
2019-02-20 13:59:24,544 - mycodo.linux_command_9c25ea72 - ERROR - The command returned a non-numerical value. Ensure only one numerical value is returned by the command.
2019-02-20 13:59:29,549 - mycodo.linux_command_9c25ea72 - ERROR - The command returned a non-numerical value. Ensure only one numerical value is returned by the command.
2019-02-20 13:59:34,711 - mycodo.linux_command_9c25ea72 - ERROR - The command returned a non-numerical value. Ensure only one numerical value is returned by the command.
2019-02-20 13:59:34,712 - mycodo.input_9c25ea72 - ERROR - StopIteration raised. Possibly could not read input. Ensure it's connected properly and detected.
2019-02-20 13:59:39,545 - mycodo.linux_command_9c25ea72 - ERROR - The command returned a non-numerical value. Ensure only one numerical value is returned by the command.
2019-02-20 13:59:44,546 - mycodo.linux_command_9c25ea72 - ERROR - The command returned a non-numerical value. Ensure only one numerical value is returned by the command.
2019-02-20 13:59:49,637 - mycodo.linux_command_9c25ea72 - ERROR - The command returned a non-numerical value. Ensure only one numerical value is returned by the command.
2019-02-20 13:59:49,638 - mycodo.input_9c25ea72 - ERROR - StopIteration raised. Possibly could not read input. Ensure it's connected properly and detected.
2019-02-20 13:59:54,550 - mycodo.linux_command_9c25ea72 - ERROR - The command returned a non-numerical value. Ensure only one numerical value is returned by the command.
2019-02-20 13:59:59,544 - mycodo.linux_command_9c25ea72 - ERROR - The command returned a non-numerical value. Ensure only one numerical value is returned by the command.
2019-02-20 14:00:04,563 - mycodo.linux_command_9c25ea72 - ERROR - The command returned a non-numerical value. Ensure only one numerical value is returned by the command.
2019-02-20 14:00:04,565 - mycodo.input_9c25ea72 - ERROR - StopIteration raised. Possibly could not read input. Ensure it's connected properly and detected.
2019-02-20 14:00:09,541 - mycodo.linux_command_9c25ea72 - ERROR - The command returned a non-numerical value. Ensure only one numerical value is returned by the command.
2019-02-20 14:00:14,539 - mycodo.linux_command_9c25ea72 - ERROR - The command returned a non-numerical value. Ensure only one numerical value is returned by the command. | closed | 2019-02-20T22:01:55Z | 2019-02-21T10:17:16Z | https://github.com/kizniche/Mycodo/issues/631 | [] | SAM26K | 6 |
coqui-ai/TTS | python | 3,050 | [Bug] formatting_your_dataset doc doesn't match the LJSpeech formatter | ### Describe the bug
The `formatting_your_dataset` doc recommends writing your dataset in this format with the note that it'll be compatible with the LJSpeech formatter:
```
# metadata.txt
audio1|This is my sentence.
audio2|This is maybe my sentence.
audio3|This is certainly my sentence.
audio4|Let this be your sentence.
...
```
If you create a dataset in that format and try to follow the instructions in `tutorial_for_nervous_beginners`, you'll get an error:
```
root@937a34667dbe:~# CUDA_VISIBLE_DEVICES="0" python3 TTS/bin/train_tts.py --config_path config.json
Traceback (most recent call last):
File "/root/TTS/bin/train_tts.py", line 71, in <module>
main()
File "/root/TTS/bin/train_tts.py", line 47, in main
train_samples, eval_samples = load_tts_samples(
File "/root/TTS/tts/datasets/__init__.py", line 120, in load_tts_samples
meta_data_train = formatter(root_path, meta_file_train, ignored_speakers=ignored_speakers)
File "/root/TTS/tts/datasets/formatters.py", line 201, in ljspeech
text = cols[2]
IndexError: list index out of range
```
Looking at the code for that formatter it's expecting 3 columns (looking up `cols[2]`)
https://github.com/coqui-ai/TTS/blob/99635193f508092c746febb087dc6634fa5f59d8/TTS/tts/datasets/formatters.py#L198-L202
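For illustration, here is a minimal sketch of the column mismatch (my own reduction of the split/index logic quoted above, not the actual TTS code): a two-column line leaves nothing at `cols[2]`, while a three-column LJSpeech-style line (`id|raw text|normalized text`) parses fine.

```python
# Sketch of the mismatch: the "ljspeech" formatter reads cols[2] (the
# normalized transcription), so the two-column layout from the docs fails
# while a three-column line works.
def parse_ljspeech_line(line):
    cols = line.split("|")
    wav_id, text = cols[0], cols[2]  # cols[2] missing when there are only 2 columns
    return wav_id, text

try:
    parse_ljspeech_line("audio1|This is my sentence.")
except IndexError as err:
    print("two columns:", err)  # -> two columns: list index out of range

print(parse_ljspeech_line("audio1|This is my sentence.|This is my sentence."))
```

So, presumably, either the doc should show a three-column `metadata` file, or the formatter should tolerate two columns.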
### To Reproduce
1. Create `transcript.txt`
```
audio1|This is my sentence.
audio2|This is maybe my sentence.
audio3|This is certainly my sentence.
audio4|Let this be your sentence.
```
2. Create `config.json`:
```
{
"run_name": "my_run",
"model": "glow_tts",
"batch_size": 32,
"eval_batch_size": 16,
"num_loader_workers": 4,
"num_eval_loader_workers": 4,
"run_eval": true,
"test_delay_epochs": -1,
"epochs": 1000,
"text_cleaner": "english_cleaners",
"use_phonemes": false,
"phoneme_language": "en-us",
"phoneme_cache_path": "phoneme_cache",
"print_step": 25,
"print_eval": true,
"mixed_precision": false,
"output_path": "recipes/ljspeech/glow_tts/",
"datasets":[{"formatter": "ljspeech", "meta_file_train":"transcript.csv", "path": "/dataset"}]
}
```
3. Run `CUDA_VISIBLE_DEVICES="0" python3 TTS/bin/train_tts.py --config_path config.json`
### Expected behavior
_No response_
### Logs
_No response_
### Environment
```shell
{
"CUDA": {
"GPU": [],
"available": false,
"version": "11.8"
},
"Packages": {
"PyTorch_debug": false,
"PyTorch_version": "2.1.0+cu118",
"TTS": "0.17.8",
"numpy": "1.22.0"
},
"System": {
"OS": "Linux",
"architecture": [
"64bit",
""
],
"processor": "x86_64",
"python": "3.10.12",
"version": "#34~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Sep 7 13:12:03 UTC 2"
}
}
```
This is a docker image based on the `coqui-ai/tts` docker image:
```Dockerfile
FROM ghcr.io/coqui-ai/tts
EXPOSE 5002
RUN mkdir -p /dataset
COPY wavs /dataset
RUN apt-get update && \
apt-get install --yes \
nano
COPY config.json config.json
ENTRYPOINT bash
```
### Additional context
_No response_ | closed | 2023-10-09T14:44:22Z | 2023-10-16T10:25:50Z | https://github.com/coqui-ai/TTS/issues/3050 | [
"bug"
] | sqrt10pi | 2 |
coqui-ai/TTS | pytorch | 4,135 | [Bug] When I generate a TTS model and play it, I only hear noise. | ### Describe the bug
Hello,
I wanted to create a TTS model using my voice with Coqui TTS, so I followed the tutorial to implement it.
I wrote a train.py file to train the voice model, but when I try to play TTS using the model I created, I only hear noise.
I thought the issue might be with my audio files, so I tried modeling with 100 samples from the LJSpeech Dataset instead, but I still only hear noise.
### To Reproduce
Here is my train.py source code:
```python
import os
from trainer import Trainer, TrainerArgs
from TTS.tts.configs.glow_tts_config import GlowTTSConfig
from TTS.tts.configs.shared_configs import BaseDatasetConfig
from TTS.tts.datasets import load_tts_samples
from TTS.tts.models.glow_tts import GlowTTS
from TTS.tts.utils.text.tokenizer import TTSTokenizer
from TTS.utils.audio import AudioProcessor
output_path = os.path.dirname(os.path.abspath(__file__))
dataset_config = BaseDatasetConfig(
formatter="ljspeech", meta_file_train="metadata.csv", path=os.path.join(output_path, "files/LJSpeech-1.1")
)
config = GlowTTSConfig(
batch_size=32,
eval_batch_size=16,
num_loader_workers=4,
num_eval_loader_workers=4,
run_eval=True,
test_delay_epochs=-1,
epochs=10,
text_cleaner="phoneme_cleaners",
use_phonemes=True,
phoneme_language="en-us",
phoneme_cache_path=os.path.join(output_path, "phoneme_cache"),
print_step=25,
print_eval=False,
mixed_precision=True,
output_path=output_path,
datasets=[dataset_config],
)
ap = AudioProcessor.init_from_config(config)
tokenizer, config = TTSTokenizer.init_from_config(config)
train_samples, eval_samples = load_tts_samples(
dataset_config,
eval_split=True,
eval_split_max_size=config.eval_split_max_size,
eval_split_size=config.eval_split_size,
)
model = GlowTTS(config, ap, tokenizer, speaker_manager=None)
trainer = Trainer(
TrainerArgs(), config, output_path, model=model, train_samples=train_samples, eval_samples=eval_samples
)
if __name__ == '__main__':
trainer.fit()
```
And here is the command I used to play the TTS:
```shell
tts --text "Text for TTS" --model_path ./files/best_model.pth --config_path ./files/config.json --out_path output.wav
```
### Expected behavior
_No response_
### Logs
```shell
> Training Environment:
| > Backend: Torch
| > Mixed precision: True
| > Precision: fp16
| > Num. of CPUs: 16
| > Num. of Torch Threads: 8
| > Torch seed: 54321
| > Torch CUDNN: True
| > Torch CUDNN deterministic: False
| > Torch CUDNN benchmark: False
| > Torch TF32 MatMul: False
> Start Tensorboard: tensorboard --logdir=./output
> Model has 28610257 parameters
 > EPOCH: 0/10
 --> ./output
 | > avg_loader_time: 0.0013079643249511719 (+0)
 | > avg_loss: 3.632810592651367 (+0)
 | > avg_log_mle: 0.8002523481845856 (+0)
 | > avg_loss_dur: 2.8325581550598145 (+0)
 | > avg_loader_time: 0.00635981559753418 (+0.005051851272583008)
 | > avg_loss: 3.632810592651367 (+0.0)
 | > avg_log_mle: 0.8002523481845856 (+0.0)
 | > avg_loss_dur: 2.8325581550598145 (+0.0)
 | > avg_loader_time: 0.0036824941635131836 (-0.002677321434020996)
 | > avg_loss: 3.632810592651367 (+0.0)
 | > avg_log_mle: 0.8002523481845856 (+0.0)
 | > avg_loss_dur: 2.8325581550598145 (+0.0)
 | > avg_loader_time: 0.007590651512145996 (+0.0039081573486328125)
 | > avg_loss: 3.6250685453414917 (-0.007742047309875488)
 | > avg_log_mle: 0.798242598772049 (-0.002009749412536621)
 | > avg_loss_dur: 2.826825976371765 (-0.005732178688049316)
 | > avg_loader_time: 0.0026175975799560547 (-0.004973053932189941)
 | > avg_loss: 3.6232705116271973 (-0.0017980337142944336)
 | > avg_log_mle: 0.7982403934001923 (-2.205371856689453e-06)
 | > avg_loss_dur: 2.8250300884246826 (-0.0017958879470825195)
 | > avg_loader_time: 0.009380459785461426 (+0.006762862205505371)
 | > avg_loss: 3.6205027103424072 (-0.002767801284790039)
 | > avg_log_mle: 0.7982217967510223 (-1.8596649169921875e-05)
 | > avg_loss_dur: 2.822281002998352 (-0.0027490854263305664)
 | > avg_loader_time: 0.01347208023071289 (+0.004091620445251465)
 | > avg_loss: 3.6190003156661987 (-0.001502394676208496)
 | > avg_log_mle: 0.798188179731369 (-3.361701965332031e-05)
 | > avg_loss_dur: 2.820812225341797 (-0.0014687776565551758)
 | > avg_loader_time: 0.003623485565185547 (-0.009848594665527344)
 | > avg_loss: 3.6169681549072266 (-0.002032160758972168)
 | > avg_log_mle: 0.7981387376785278 (-4.9442052841186523e-05)
 | > avg_loss_dur: 2.8188294172286987 (-0.0019828081130981445)
 | > avg_loader_time: 0.005441427230834961 (+0.001817941665649414)
 | > avg_loss: 3.6120744943618774 (-0.004893660545349121)
 | > avg_log_mle: 0.7980725467205048 (-6.619095802307129e-05)
 | > avg_loss_dur: 2.8140019178390503 (-0.0048274993896484375)
 | > avg_loader_time: 0.012539029121398926 (+0.007097601890563965)
 | > avg_loss: 3.6240179538726807 (+0.011943459510803223)
 | > avg_log_mle: 0.7979885637760162 (-8.398294448852539e-05)
 | > avg_loss_dur: 2.8260293006896973 (+0.012027382850646973)
```
### Environment
```shell
- 🐸TTS Version: 0.22
- PyTorch Version: 2.2.2
- Python version: 3.11.6
- OS: macOS Sequoia 15.0
- CUDA/cuDNN version: N/A
- GPU models and configuration: Radeon pro 575X
- How you installed PyTorch: pip
- Any other relevant information: Intel Core i9 8 Core, 48GB Ram
```
### Additional context
_No response_ | closed | 2025-01-22T06:44:25Z | 2025-02-16T23:11:42Z | https://github.com/coqui-ai/TTS/issues/4135 | [
"bug"
] | chuyeonhak | 5 |
amdegroot/ssd.pytorch | computer-vision | 213 | Outputs are always the same. | I train it on my own data, and then I find that every picture gets the same result. Has anyone seen this problem before? | closed | 2018-07-31T12:09:26Z | 2019-09-26T08:46:22Z | https://github.com/amdegroot/ssd.pytorch/issues/213 | [] | chenxinyang123 | 3 |
Kanaries/pygwalker | matplotlib | 189 | PySpark DataFrame Support | Native Support for rendering visualizations for PySpark data frame in the Jupyter notebook.
It is OK to introduce some constraints if the sheer size of the data frame makes it difficult to load. | closed | 2023-08-03T07:20:51Z | 2023-11-06T02:02:08Z | https://github.com/Kanaries/pygwalker/issues/189 | [
"enhancement",
"P2"
] | rishabmps | 6 |
Johnserf-Seed/TikTokDownload | api | 677 | TikTokTool V1.5 版本 通过命令行启动并提供必要的参数, 输入 TikTokTool -h 查看不同平台帮助。[BUG] | **描述出现的错误**
请通过命令行启动并提供必要的参数, 输入 TikTokTool -h 查看不同平台帮助。
F2 Version:0.0.1.4
**bug复现**
复现这次行为的步骤:
1.打开终端运行python3 TikTokTool.py
**截图**
<img width="530" alt="image" src="https://github.com/Johnserf-Seed/TikTokDownload/assets/18301809/2f7d757d-49d8-4731-8510-717e4487f9d4">
**桌面(请填写以下信息):**
-操作系统:Mac
-vpn代理:开启
-项目版本:1.5.0.0
-py版本:3.11.6 | open | 2024-03-12T09:08:31Z | 2024-03-12T15:19:55Z | https://github.com/Johnserf-Seed/TikTokDownload/issues/677 | [] | yjxfox | 6 |
jupyterhub/repo2docker | jupyter | 702 | Implement a GUI that builds repo2docker config files | ### Proposed change
It would be useful if we had a lightweight GUI that let people build repo2docker configuration files. It would have sections for each type of configuration file, and UI elements that helped populate the most common relevant fields. e.g.:

There would also be a "download" button that would create a bunch of config files according to what was placed inside this form, and download a ZIP file with all of them inside.
It could be a good way to show off the different configuration options, and to let people quickly create their own.
This could either be a page on something like readthedocs, or a jupyterlab extension. Something very similar to https://github.com/jupyterlab/jupyterlab-celltags but on a whole page and for the purposes of generating repo2docker config files
[Here's a Discourse thread that prototypes this with the ipywidgets ecosystem](https://discourse.jupyter.org/t/an-interactive-binder-config-file-builder-gui/1510?u=choldgraf)
### Alternative options
I think the only alternative here is to have people manually create these files, which can be a burden for some researchers who are just learning about reproducibility and coding. I suppose another alternative is to use a different service for reproducibility that does provide these kinds of UIs.
edit 2021: A nice implementation of something like this is here in the CodeOcean docs: https://help.codeocean.com/en/articles/1197548-the-package-management-system

### Who would use this feature?
I think primarily people who are new-ish to Binder or to coding, or who in general are more comfortable working from a form than from hand-coding things in a text editor.
### How much effort will adding it take?
I think it wouldn't be too hard for somebody that was familiar with building UI components in a javascript framework. It's largely a question of figuring out the right kind of UI/UX to provide, but in terms of the code itself I bet there are many libraries for building simple forms.
### Who can do this work?
Ideally, somebody with experience in React or some other kind of framework that can build simple-ish forms. Whoever did the work on https://github.com/jupyterlab/jupyterlab-celltags might be able to provide some guidance! (e.g. @zsailer do you know if one of the calpoly interns worked on this?) | open | 2019-06-12T23:43:42Z | 2024-02-13T23:37:07Z | https://github.com/jupyterhub/repo2docker/issues/702 | [
"enhancement",
"help wanted",
"needs: discussion"
] | choldgraf | 4 |
tflearn/tflearn | data-science | 1,075 | Calling regression() with parameter loss='weighted_crossentropy' | Hello, I am currently also trying to implement a weighted CE loss function. I'd really appreciate some guidance on how to call this function from the `loss=` parameter of the `tflearn.regression()` function.
The following attempt to use the above method in my code yields:
```
net_2 = net = tflearn.input_data(shape=[None, n_features])
net_2 = tflearn.fully_connected(net_2, 16, activation='relu')
net_2 = tflearn.dropout(net_2, 0.8)
net_2 = tflearn.fully_connected(net_2, 32, activation='relu')
net_2 = tflearn.dropout(net_2, 0.8)
net_2 = tflearn.fully_connected(net_2, 64, activation='relu')
net_2 = tflearn.dropout(net_2, 0.8)
net_2 = tflearn.fully_connected(net_2, 64, activation='relu')
net_2 = tflearn.dropout(net_2, 0.8)
net_2 = tflearn.fully_connected(net_2, 2, activation='softmax')
from tflearn.objectives import weighted_crossentropy
net_2 = tflearn.regression(net, optimizer='adam', loss=lambda data, target: weighted_crossentropy(data, target, weight=.5))
```
<img width="704" alt="screen shot 2018-07-19 at 3 19 07 pm" src="https://user-images.githubusercontent.com/17347282/42965146-250f5ab2-8b67-11e8-8f15-06ad377677af.png">
| closed | 2018-07-19T22:15:44Z | 2018-07-20T00:26:44Z | https://github.com/tflearn/tflearn/issues/1075 | [] | tnightengale | 0 |
nikitastupin/clairvoyance | graphql | 80 | The endpoint requires authorization keys | What should I do if my GraphQL endpoint requires cookies and authorization keys, how can I add this for analysis?
| closed | 2024-02-07T18:04:18Z | 2024-02-21T05:14:03Z | https://github.com/nikitastupin/clairvoyance/issues/80 | [] | simbadmorehod | 0 |
graphql-python/graphene-django | graphql | 1,484 | Automatic object type field for reverse one-to-one model fields cannot resolve when model field has `related_query_name` and object type has custom `get_queryset` | ## **What is the current behavior?**
Given the following models:
```python
from django.db import models
class Example(models.Model):
pass
class Related(models.Model):
example = models.OneToOneField(
Example,
on_delete=models.CASCADE,
related_name="related",
# Important!
related_query_name="related_item",
)
```
... and the following ObjectTypes:
```python
import graphene
from graphene_django import DjangoObjectType
from graphene_django.fields import DjangoConnectionField
class ExampleType(DjangoObjectType):
class Meta:
model = Example
fields = [
"id",
"related",
]
connection_class = graphene.Connection
interfaces = (graphene.relay.Node,)
class RelatedType(DjangoObjectType):
class Meta:
model = Related
fields = [
"id",
"example",
]
connection_class = graphene.Connection
interfaces = (graphene.relay.Node,)
# Important!
@classmethod
def get_queryset(cls, queryset, info):
return queryset
```
... and the following query definitions:
```python
import graphene
from graphene_django.fields import DjangoConnectionField
class Query(graphene.ObjectType):
examples = DjangoConnectionField(ExampleType)
related = DjangoConnectionField(RelatedType)
```
Now, trying to query like this:
```graphql
query {
examples {
edges {
node {
id
related {
id
}
}
}
}
}
```
... will result in an error like this:
```json
[
{
"locations": [
{
"column": 9,
"line": 3,
},
],
"message": "Example has no field named 'related'",
"path": [
"examples",
"edges",
0,
"node",
"related",
],
}
]
```
---
## **What causes the current behavior?**
This happens because of two things:
1. The `Related` model field `example` is a `OneToOneField`, which has defined a `related_query_name` different from the field's `related_name`.
2. The `RelatedType` ObjectType defined the `get_queryset` class method.
When `graphene_django.converter.convert_onetoone_field_to_djangomodel` creates the ObjectType for the ExampleType field `related`, it uses the `RelatedType` ObjectType's queryset for this check: [graphene_django.converter.py:283](https://github.com/graphql-python/graphene-django/blob/62126dd46753ecce4f2b95bf63c1a7d08b1a91a2/graphene_django/converter.py#L283)
Since the `RelatedType`'s `get_queryset` was modified, this check does not early return, thus we use the `custom_resolver` below that.
When `custom_resolver` resolves, it tries to fetch `reversed_field_name` from the class fields (uses [django.db.models.options.py:649](https://github.com/django/django/blob/b287af5dc954628d4b336aefc5027b2edceee64b/django/db/models/options.py#L649)). Since `related` is a reverse relation, it does not exist in `_forward_fields_map`. However, _it's not found from `fields_map` either_, since the keys in `fields_map` use fields' `related_query_name`s instead of `related_name`s.
Therefore, automatic field object type creation fails in this case (reverse one-to-one fields with `get_queryset` defined).
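As a tiny standalone sketch of that name mismatch (a hypothetical mapping for illustration, not real Django or graphene-django code): the reverse relation is registered under its `related_query_name`, but the resolver asks for the `related_name`.

```python
# Hypothetical illustration of the lookup described above. The reverse
# relation ends up keyed by related_query_name ("related_item"), while the
# custom resolver looks up the accessor name ("related"), so the lookup fails.
fields_map = {"related_item": "<reverse OneToOneRel to Related>"}

def get_field(model_name, field_name):
    try:
        return fields_map[field_name]
    except KeyError:
        raise LookupError(f"{model_name} has no field named '{field_name}'")

print(get_field("Example", "related_item"))  # found under the query name
try:
    get_field("Example", "related")  # what the resolver actually asks for
except LookupError as err:
    print(err)  # -> Example has no field named 'related'
```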
---
## **What is the expected behavior?**
I should be able to make the query described above with the given object type configuration.
---
## **What is the motivation / use case for changing the behavior?**
I should be able to rely on automatic field generaton for reverse one-to-one relations with `related_query_name` to object types that have defined `get_queryset`. This reduces boilerplate code, since otherwise I would need to define the field and resolver manually.
---
## **Please tell us about your environment:**
- Version: v3.1.5
- Platform: Windows 11
--- | open | 2023-12-09T19:17:15Z | 2023-12-09T19:17:15Z | https://github.com/graphql-python/graphene-django/issues/1484 | [
"🐛bug"
] | MrThearMan | 0 |
flaskbb/flaskbb | flask | 380 | Post count includes deleted posts | If I delete a post on a thread, the count for the forum does not change (i.e., the "Posts" column on the "Forum" page). The "Total number of posts", however, is accurate. | closed | 2017-12-17T19:20:05Z | 2018-04-15T07:47:49Z | https://github.com/flaskbb/flaskbb/issues/380 | [
"bug"
] | haliphax | 3 |
tqdm/tqdm | jupyter | 1,120 | Saved ASCII output from tqdm.notebook shows zero progress | - [x] I have marked all applicable categories:
+ [ ] exception-raising bug
+ [x] visual output bug
+ [ ] documentation request (i.e. "X is missing from the documentation." If instead I want to ask "how to use X?" I understand [StackOverflow#tqdm] is more appropriate)
+ [ ] new feature request
- [x] I have visited the [source website], and in particular
read the [known issues]
- [x] I have searched through the [issue tracker] for duplicates
- [x] I have mentioned version numbers, operating system and
environment, where applicable:
```python
import tqdm, sys
print(tqdm.__version__, sys.version, sys.platform)
```
[source website]: https://github.com/tqdm/tqdm/
[known issues]: https://github.com/tqdm/tqdm/#faq-and-known-issues
[issue tracker]: https://github.com/tqdm/tqdm/issues?q=
[StackOverflow#tqdm]: https://stackoverflow.com/questions/tagged/tqdm
I've been trying out the new feature for exporting progress bar state as ASCII when saving a Jupyter notebook in 4.56.0 (see https://github.com/tqdm/tqdm/issues/937). It's an awesome feature, but I'm seeing the progress always getting saved as 0. For example, running a notebook with:
```python
from tqdm.notebook import tqdm
from time import sleep
for _ in tqdm(range(10)):
sleep(0.1)
```
shows a pretty progress bar in the notebook, but in the actual `.ipynb` file the output is:
```
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "967a3bd433ee4ed1be7b6f9ecfb7e4be",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
" 0%| | 0/10 [00:00<?, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
}
```
You can also see this on Colab here: https://colab.research.google.com/drive/1Ovt1-3jtnMPLdvIuYixvb01HmDJpe_ZA#scrollTo=wzNm4XFXr3X4 (and Tools -> Diff Notebooks to see the output).
The effect of this is that, when exporting to PDF (or when viewing on some online viewers, for example ReviewNB), an empty ASCII bar is shown. | open | 2021-02-03T02:52:59Z | 2021-02-17T02:49:17Z | https://github.com/tqdm/tqdm/issues/1120 | [
"duplicate 🗐",
"help wanted 🙏",
"invalid ⛔",
"p2-bug-warning ⚠",
"submodule-notebook 📓"
] | charmasaur | 5 |
babysor/MockingBird | deep-learning | 760 | Has anyone got this working on a MacBook? My setup finishes with no error messages, but the output is always just a hissing noise, and I don't know why | **Summary [Brief description of the issue (one sentence)]**
A clear and concise description of what the issue is.
**Env & To Reproduce [Reproduction & environment]**
Describe the environment, code version, and model you used
macOS Monterey 12.5
Python 3.9.2
**Screenshots [Screenshots (if any)]**
If applicable, add screenshots to help
<img width="1300" alt="Screenshot 2022-10-07 00 26 04" src="https://user-images.githubusercontent.com/16750537/194367418-49c0c63b-b7d5-4d76-9738-4340db7c0325.png">
<img width="1669" alt="Screenshot 2022-10-07 00 27 14" src="https://user-images.githubusercontent.com/16750537/194367712-1d2b7bd2-5704-4aa8-9d25-dcb1b41d9866.png">
| closed | 2022-10-06T16:26:15Z | 2022-11-16T09:21:19Z | https://github.com/babysor/MockingBird/issues/760 | [] | zhuzhihui | 1 |
vitalik/django-ninja | pydantic | 1,067 | embedding api docs into custom page | Hi.
I am using the autom.-generated docs function, from ninja API, to create a web-based, user friendly interface for the users visiting my page, as well as test facility.
I would like to embed that wonderful feature into my already existing custom web layout (header, footer, navbar, etc.). Otherwise, I would have to reinvent the wheel, or the user would have to use that standalone facility. **Basically, I am missing a HOME button...**
I am currently using Django.
Is that possible? Any examples?
Thanks.
Marco | open | 2024-01-27T18:02:39Z | 2024-01-31T08:36:29Z | https://github.com/vitalik/django-ninja/issues/1067 | [] | MM-cyi | 3 |
graphdeco-inria/gaussian-splatting | computer-vision | 536 | Question about the variable | First thank you for your excellent work.
The method "create_from_pcd" of class "GaussianModel" in "./scene
/gaussian_model.py" defines the properties called "self._features_dc" and "self._features_rest" in line 142 and 143. I don't know what they correspond to in the article and how to use them. Could you please explain it? | closed | 2023-12-09T05:02:59Z | 2023-12-10T01:57:58Z | https://github.com/graphdeco-inria/gaussian-splatting/issues/536 | [] | tuning12 | 3 |
iperov/DeepFaceLab | machine-learning | 5,267 | Using 2 GPUs for 2 different models at the same time I get this error | Hi... When I try to train 2 different models using 2 different GPUs, the first one starts, and then when it starts loading the 2nd one I get errors and it won't train at the same time. This is what the error is on the computer that was already training....
Traceback (most recent call last):
File "multiprocessing\queues.py", line 236, in _feed
File "multiprocessing\connection.py", line 200, in send_bytes
File "multiprocessing\connection.py", line 280, in _send_bytes
OSError: [WinError 1450] Insufficient system resources exist to complete the requested service
The computer has good specs with 64 GB RAM, and the GPUs are both NVIDIA A6000s with 48 GB VRAM. They run the models fine one at a time. It's weird because I have tried this with 2 Titans (24 GB VRAM) on models that work on those cards, and they would train 2 at a time. Any idea what would cause this? On the computer where I start the 2nd training, I get a lot more errors. I'll attach a screenshot of those here at the end. Thank you!!!

| open | 2021-01-28T04:14:14Z | 2023-06-08T22:21:33Z | https://github.com/iperov/DeepFaceLab/issues/5267 | [] | kilerb | 2 |
pywinauto/pywinauto | automation | 515 | How can I get all ListItem in ListBox? | Good day, everyone! I use pywinauto to automation desktop application. And I need to receive all ListItems from ListBox. Then I execute this code:
```python
def common_list(list_control):
state = list_control.element_info.enabled
if state:
automation_id = list_control.element_info.automation_id
if 'ListBox' in automation_id:
# list_of_item = list_control.children(control_type="ListItem")
list_of_item = list_control.items()
else:
list_of_item = list_control.children()[1:]
time.sleep(pause)
return list_of_item
```
I receive only the visible elements of the list (in my case, 14 elements, but there are 53 of them). How can I receive ALL of them? | open | 2018-07-02T14:31:52Z | 2018-07-11T17:36:44Z | https://github.com/pywinauto/pywinauto/issues/515 | [
"enhancement",
"UIA-related"
] | Nebyt | 0 |
huggingface/transformers | machine-learning | 36,051 | [generate] generate not working gradient_checkpointing=True | ### System Info
```
transformers==4.37.2
peft==0.13.1
accelerate==0.21.0
deepspeed==0.12.6
torch==2.1.2+cu118
```
### Who can help?
@zucchini-nlp @ArthurZucker
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
## Minimal reproducible example on colab
```bash
!pip install -q uv
!uv pip install -q --system transformers==4.37.2 peft==0.13.1 accelerate==0.21.0 deepspeed==0.12.6
!pip install -q torch==2.1.2 --index-url https://download.pytorch.org/whl/cu118
```
```python
from transformers import (
    AutoTokenizer,
    AutoModelForCausalLM,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
    AutoProcessor
)
from torch.utils.data import Dataset
import torch
import os

class DummyDataset(Dataset):
    def __init__(self):
        self.samples = [{"text": "Hello world"} for _ in range(10)]

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        return self.samples[idx]

model = AutoModelForCausalLM.from_pretrained(
    "HuggingFaceM4/tiny-random-LlamaForCausalLM",
    torch_dtype=torch.float16
)
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceM4/tiny-random-LlamaForCausalLM")

def collate_fn(batch):
    texts = [item["text"] for item in batch]
    return tokenizer(texts, return_tensors="pt", padding=True, truncation=True)

# A dummy trainer in which we need to generate during the training loop
class MyTrainer(Seq2SeqTrainer):
    def compute_loss(self, model, inputs, return_outputs=False, **kwargs):
        with torch.no_grad():
            # simulating generation with inputs-embeds (like in Llava)
            # https://github.com/haotian-liu/LLaVA/blob/c121f0432da27facab705978f83c4ada465e46fd/llava/model/language_model/llava_llama.py#L137-L141
            generated = model.generate(
                inputs_embeds=torch.randn((4, 100, 16), dtype=torch.float16, device=self.args.device),
                attention_mask=torch.where(torch.arange(100).unsqueeze(0).repeat_interleave(4, dim=0) < 50, torch.zeros((1,)), torch.ones((1,))).to(dtype=torch.bool, device=self.args.device),
                do_sample=True,
                temperature=0.6,
                output_scores=False,
                num_return_sequences=2,
                max_new_tokens=30,
            )
        print("Generated:", tokenizer.decode(generated[0]))
        return torch.tensor(0.5, requires_grad=True, device=self.args.device)

training_args = Seq2SeqTrainingArguments(
    output_dir="./output",
    per_device_train_batch_size=2,
    learning_rate=1e-5,
    num_train_epochs=1,
    logging_steps=1,
    gradient_checkpointing=True,  # <-- Generation works with grad. ckpt. disabled, doesn't work with grad. ckpt. enabled
    remove_unused_columns=False,
    fp16=True,
    optim="adamw_torch",
    report_to="none"
)

trainer = MyTrainer(
    model=model,
    args=training_args,
    train_dataset=DummyDataset(),
    data_collator=collate_fn,
    tokenizer=tokenizer,
)
trainer.train()
```
## Bug description
`generate()` in `transformers==4.37.2` does not work properly when `gradient_checkpointing` is enabled in `Seq2SeqTrainer`.
If `gradient_checkpointing=False`, `generate()` works fine and returns the generated sequence:
```
Generated: <unk> radical助 reflected pipapplejnapom workaroundчик moment coresкра información Lisaвали résocoAnim radical ore Gene after convergence allem FIFAtembre davon
```
With `gradient_checkpointing=True` generation raises the error `RuntimeError: The size of tensor a (101) must match the size of tensor b (100) at non-singleton dimension 3`
## Full traceback
```
warnings.warn(
/usr/local/lib/python3.11/dist-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
return self.fget.__get__(instance, owner)()
`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`...
/usr/local/lib/python3.11/dist-packages/torch/utils/checkpoint.py:429: UserWarning: torch.utils.checkpoint: please pass in use_reentrant=True or use_reentrant=False explicitly. The default value of use_reentrant will be updated to be False in the future. To maintain current behavior, pass use_reentrant=True. It is recommended that you use use_reentrant=False. Refer to docs for more details on the differences between the two variants.
warnings.warn(
/usr/local/lib/python3.11/dist-packages/torch/utils/checkpoint.py:61: UserWarning: None of the inputs have requires_grad=True. Gradients will be None
warnings.warn(
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
[<ipython-input-4-ab8901164bef>](https://localhost:8080/#) in <cell line: 0>()
72 )
73
---> 74 trainer.train()
17 frames
[/usr/local/lib/python3.11/dist-packages/transformers/trainer.py](https://localhost:8080/#) in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1537 hf_hub_utils.enable_progress_bars()
1538 else:
-> 1539 return inner_training_loop(
1540 args=args,
1541 resume_from_checkpoint=resume_from_checkpoint,
[/usr/local/lib/python3.11/dist-packages/transformers/trainer.py](https://localhost:8080/#) in _inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
1867
1868 with self.accelerator.accumulate(model):
-> 1869 tr_loss_step = self.training_step(model, inputs)
1870
1871 if (
[/usr/local/lib/python3.11/dist-packages/transformers/trainer.py](https://localhost:8080/#) in training_step(self, model, inputs)
2770
2771 with self.compute_loss_context_manager():
-> 2772 loss = self.compute_loss(model, inputs)
2773
2774 if self.args.n_gpu > 1:
[<ipython-input-4-ab8901164bef>](https://localhost:8080/#) in compute_loss(self, model, inputs, return_outputs, **kwargs)
38 # simulating generation with inputs-embeds (like in Llava)
39 # https://github.com/haotian-liu/LLaVA/blob/c121f0432da27facab705978f83c4ada465e46fd/llava/model/language_model/llava_llama.py#L137-L141
---> 40 generated = model.generate(
41 inputs_embeds=torch.randn((4, 100, 16), dtype=torch.float16, device=self.args.device),
42 attention_mask=torch.where(torch.arange(100).unsqueeze(0).repeat_interleave(4, dim=0) < 50, torch.zeros((1,)), torch.ones((1,))).to(dtype=torch.bool, device=self.args.device),
[/usr/local/lib/python3.11/dist-packages/torch/utils/_contextlib.py](https://localhost:8080/#) in decorate_context(*args, **kwargs)
113 def decorate_context(*args, **kwargs):
114 with ctx_factory():
--> 115 return func(*args, **kwargs)
116
117 return decorate_context
[/usr/local/lib/python3.11/dist-packages/transformers/generation/utils.py](https://localhost:8080/#) in generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, assistant_model, streamer, negative_prompt_ids, negative_prompt_attention_mask, **kwargs)
1523
1524 # 13. run sample
-> 1525 return self.sample(
1526 input_ids,
1527 logits_processor=prepared_logits_processor,
[/usr/local/lib/python3.11/dist-packages/transformers/generation/utils.py](https://localhost:8080/#) in sample(self, input_ids, logits_processor, stopping_criteria, logits_warper, max_length, pad_token_id, eos_token_id, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, synced_gpus, streamer, **model_kwargs)
2620
2621 # forward pass to get next token
-> 2622 outputs = self(
2623 **model_inputs,
2624 return_dict=True,
[/usr/local/lib/python3.11/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _wrapped_call_impl(self, *args, **kwargs)
1516 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1517 else:
-> 1518 return self._call_impl(*args, **kwargs)
1519
1520 def _call_impl(self, *args, **kwargs):
[/usr/local/lib/python3.11/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *args, **kwargs)
1525 or _global_backward_pre_hooks or _global_backward_hooks
1526 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1527 return forward_call(*args, **kwargs)
1528
1529 try:
[/usr/local/lib/python3.11/dist-packages/accelerate/utils/operations.py](https://localhost:8080/#) in forward(*args, **kwargs)
579
580 def forward(*args, **kwargs):
--> 581 return model_forward(*args, **kwargs)
582
583 # To act like a decorator so that it can be popped when doing `extract_model_from_parallel`
[/usr/local/lib/python3.11/dist-packages/accelerate/utils/operations.py](https://localhost:8080/#) in __call__(self, *args, **kwargs)
567
568 def __call__(self, *args, **kwargs):
--> 569 return convert_to_fp32(self.model_forward(*args, **kwargs))
570
571 def __getstate__(self):
[/usr/local/lib/python3.11/dist-packages/torch/amp/autocast_mode.py](https://localhost:8080/#) in decorate_autocast(*args, **kwargs)
14 def decorate_autocast(*args, **kwargs):
15 with autocast_instance:
---> 16 return func(*args, **kwargs)
17
18 decorate_autocast.__script_unsupported = "@autocast() decorator is not supported in script mode" # type: ignore[attr-defined]
[/usr/local/lib/python3.11/dist-packages/transformers/models/llama/modeling_llama.py](https://localhost:8080/#) in forward(self, input_ids, attention_mask, position_ids, past_key_values, inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict)
1181
1182 # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
-> 1183 outputs = self.model(
1184 input_ids=input_ids,
1185 attention_mask=attention_mask,
[/usr/local/lib/python3.11/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _wrapped_call_impl(self, *args, **kwargs)
1516 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1517 else:
-> 1518 return self._call_impl(*args, **kwargs)
1519
1520 def _call_impl(self, *args, **kwargs):
[/usr/local/lib/python3.11/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *args, **kwargs)
1525 or _global_backward_pre_hooks or _global_backward_hooks
1526 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1527 return forward_call(*args, **kwargs)
1528
1529 try:
[/usr/local/lib/python3.11/dist-packages/transformers/models/llama/modeling_llama.py](https://localhost:8080/#) in forward(self, input_ids, attention_mask, position_ids, past_key_values, inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict)
1033 # output_attentions=True can not be supported when using SDPA, and we fall back on
1034 # the manual implementation that requires a 4D causal mask in all cases.
-> 1035 attention_mask = _prepare_4d_causal_attention_mask_for_sdpa(
1036 attention_mask,
1037 (batch_size, seq_length),
[/usr/local/lib/python3.11/dist-packages/transformers/modeling_attn_mask_utils.py](https://localhost:8080/#) in _prepare_4d_causal_attention_mask_for_sdpa(attention_mask, input_shape, inputs_embeds, past_key_values_length, sliding_window)
396 )
397 else:
--> 398 expanded_4d_mask = attn_mask_converter.to_4d(
399 attention_mask,
400 input_shape[-1],
[/usr/local/lib/python3.11/dist-packages/transformers/modeling_attn_mask_utils.py](https://localhost:8080/#) in to_4d(self, attention_mask_2d, query_length, dtype, key_value_length)
135
136 if causal_4d_mask is not None:
--> 137 expanded_attn_mask = causal_4d_mask.masked_fill(expanded_attn_mask.bool(), torch.finfo(dtype).min)
138
139 # expanded_attn_mask + causal_4d_mask can cause some overflow
RuntimeError: The size of tensor a (101) must match the size of tensor b (100) at non-singleton dimension 3
```
### Expected behavior
I think this error is caused by the fact that after the first model forward, `_update_model_kwargs_for_generation` extends the attention mask for the newly generated token
https://github.com/huggingface/transformers/blob/345b9b1a6a308a1fa6559251eb33ead2211240ac/src/transformers/generation/utils.py#L623
but `LlamaForCausalLM.prepare_inputs_for_generation` always relies on `inputs_embeds` to generate the next token when `gradient_checkpointing=True --> use_cache=False --> past_key_values=None`:
https://github.com/huggingface/transformers/blob/v4.37.2/src/transformers/models/llama/modeling_llama.py#L1270-L1273
That does not happen when `gradient_checkpointing=False`, because `past_key_values` is not `None` after the first forward pass, so `input_ids` are chosen as model inputs for the subsequent steps.
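The length drift can be sketched without any model at all (hypothetical bookkeeping only, mirroring the shapes from the snippet above):

```python
# With use_cache=False, past_key_values stays None, so
# prepare_inputs_for_generation re-feeds the full inputs_embeds (query
# length 100) at every step, while _update_model_kwargs_for_generation
# appends one attention-mask column per generated token.
embeds_len = 100   # inputs_embeds length: constant, nothing is cached
mask_len = 100     # attention_mask length: grows by one per step

mask_len += 1      # mask extended after the first generated token

# the 4D causal mask is then built for (query_len, key_value_len) = (100, 101),
# which is exactly "tensor a (101) must match tensor b (100)"
print(embeds_len, mask_len)  # 100 101
```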
I can't tell whether this also affects recent versions of transformers, because I am not familiar with the new Cache API.
Is it enough to find an alternative way to check whether we are in the first generation step or a subsequent one (instead of relying on `past_key_values`) to fix the issue, or are there other functions that rely on this check which I am not aware of?
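As a possible workaround (assuming transformers' public `gradient_checkpointing_disable()` / `gradient_checkpointing_enable()` methods behave this way on your version — verify before relying on it), checkpointing can be toggled off around `generate()` so `use_cache` can stay on. The dummy class below only demonstrates the toggle pattern:

```python
# Stand-in model: real code would call the same two methods on the unwrapped
# transformers model inside compute_loss, around the generate() call.
class DummyModel:
    def __init__(self):
        self.checkpointing = True
    def gradient_checkpointing_disable(self):
        self.checkpointing = False
    def gradient_checkpointing_enable(self):
        self.checkpointing = True
    def generate(self):
        # in this scenario generation only works while checkpointing is off
        assert not self.checkpointing
        return "generated"

model = DummyModel()
model.gradient_checkpointing_disable()
try:
    out = model.generate()
finally:
    model.gradient_checkpointing_enable()   # restore for the backward pass
print(out, model.checkpointing)  # generated True
```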
Thanks in advance for the support :) | closed | 2025-02-05T15:00:39Z | 2025-02-06T14:16:32Z | https://github.com/huggingface/transformers/issues/36051 | [
"bug",
"Generation"
] | alcompa | 7 |
davidsandberg/facenet | computer-vision | 433 | What is the role of facenet.prewhiten | I saw the code of compare.py and found the function prewhiten.
What is the role of facenet.prewhiten? | closed | 2017-08-22T10:41:41Z | 2018-06-28T23:50:17Z | https://github.com/davidsandberg/facenet/issues/433 | [] | tonybaigang | 2 |
horovod/horovod | pytorch | 3,620 | Wrapping keras optimizer with hvd optimizer breaks the code | Using 2 VMs with 1 GPU each to do distributed training using Horovod.
This is what I have for building and compiling my model:

When I pass in True for horovod, my code starts Epoch 1, then hangs for a while (no output), and then crashes with the following error.


When I pass in False for horovod, my code works on 2 VMs.
My hvd.init() and model.fit() are both in a script called horovod_training.py.
And my build_and_compile_model() function is in another script called model.py.
In model.py I have the following imports:

Is this okay to do? Any help is appreciated! | closed | 2022-07-26T15:37:39Z | 2022-07-28T14:07:13Z | https://github.com/horovod/horovod/issues/3620 | [] | bluepra | 3 |
jacobgil/pytorch-grad-cam | computer-vision | 553 | About the inference speed of using FullGrad for cam visualization | I was using FullGrad for cam visualization and found that it was much slower compared with ScoreCAM or GradCAM, and when using FullGrad for visualizations, there were warnings like `Warning: target_layers is ignored in FullGrad. All bias layers will be used instead`. Was it normal that using FullGrad was slow because it considered all layers? Or did I wrongly use the FullGrad? | open | 2025-02-05T07:06:10Z | 2025-02-18T20:44:24Z | https://github.com/jacobgil/pytorch-grad-cam/issues/553 | [] | kasteric | 1 |
HumanSignal/labelImg | deep-learning | 362 | Could you please add the support for CJK | Thanks for sharing this useful tool.
I'm using labelImg to label vehicles in traffic images, and the labels are Chinese names, which are difficult to translate to English. Even if we translate the names to English, the people who do the labeling job may not understand the exact meaning of the label; there is information loss in the translation and re-translation processes.
So, could you please add support for CJK labels?
BTW, I'm using labelImg V1.7.0 for Windows. | closed | 2018-09-07T03:07:27Z | 2018-10-23T05:09:15Z | https://github.com/HumanSignal/labelImg/issues/362 | [] | yangulei | 1 |
assafelovic/gpt-researcher | automation | 407 | ModuleNotFoundError: No module named 'gpt_researcher.retrievers.serpapi' | Hi!
I've the following setup:
- Venv: _Python 3.10.12_
- PIP: _gpt-researcher 0.1.0_
And i obtain this error:
"ModuleNotFoundError: No module named 'gpt_researcher.retrievers.serpapi'"
Neither https://pypi.org/project/gpt-researcher/ nor https://github.com/assafelovic/gpt-researcher/blob/master/requirements.txt addresses any Google SERP package dependency.
I have tried installing: _pip install google-search-results_, but no good!
I can install gpt-researcher in a different project, but it's not the goal here.
Can you help?
PS - Congrats! Amazing product you built here!
Best,
João Moreira | closed | 2024-03-22T19:07:15Z | 2024-03-25T18:23:55Z | https://github.com/assafelovic/gpt-researcher/issues/407 | [] | joaomnmoreira | 3 |
flairNLP/flair | nlp | 3,036 | Support for Vietnamese | Hi, I am looking through Flair and wondering if it support Vietnamese or not. If not, will it in the future? Thank you!
_Originally posted by @longsc2603 in https://github.com/flairNLP/flair/issues/2#issuecomment-1354413764_
| closed | 2022-12-21T01:37:57Z | 2023-06-11T11:25:47Z | https://github.com/flairNLP/flair/issues/3036 | [
"wontfix"
] | longsc2603 | 1 |
mitmproxy/pdoc | api | 110 | .. | closed | 2016-07-23T05:51:14Z | 2018-06-03T01:42:18Z | https://github.com/mitmproxy/pdoc/issues/110 | [] | krisvandermerwe | 1 | |
davidsandberg/facenet | tensorflow | 700 | How to remove the identities which are overlapped between Ms-Celeb-1M and LFW/Facescrub? | Hello everyone, is there a good way to remove the identities which are overlapped between Ms-Celeb-1M and LFW/Facescrub?
Or could anyone share your overlapping list?
Thank you very much! | open | 2018-04-15T22:44:22Z | 2018-04-15T22:44:22Z | https://github.com/davidsandberg/facenet/issues/700 | [] | Landwind-Xin | 0 |
igorbenav/fastcrud | pydantic | 115 | Data repetition in get_multi_joined and issue in join_on | ```
data = crud_user.get_multi_joined(
    db,
    nest_joins=True,
    joins_config=[
        JoinConfig(
            model=Portions,
            join_on=User.portion_id == Portions.id,
            join_prefix='portions',
            join_type="left",
            relationship_type='one-to-many'
        ),
        JoinConfig(
            model=Category,
            join_on=User.created_id == Category.id,
        ),
    ],
)
```
The above code is what I ran. When it completes, it gives data like this, but the problem is that in `portions` the data is repeated, as shown below:
```
{
    "id": 1,
    "created_user_id": 1,
    "name": "Test",
    "email": "test@gmail.com",
    "portions": [
        {"id": 1, "user_id": 1, "quantity": "Half"},
        {"id": 2, "user_id": 1, "quantity": "Full"},
        {"id": 1, "user_id": 1, "quantity": "Half"},
        {"id": 2, "user_id": 1, "quantity": "Full"},
    ],
    "category": [{"id": 1, "category": "Food"}, {"id": 2, "category": "Real-Estate"}],
}
```
When I debugged the code, I found the problem here:
```
def _nest_multi_join_data(
    base_primary_key: str,
    data: list[Union[dict, BaseModel]],
    joins_config: Sequence[JoinConfig],
    return_as_model: bool = False,
    schema_to_select: Optional[type[BaseModel]] = None,
    nested_schema_to_select: Optional[dict[str, type[BaseModel]]] = None,
) -> Sequence[Union[dict, BaseModel]]:
    pre_nested_data = {}
    for join_config in joins_config:
        join_primary_key = _get_primary_key(join_config.model)
        for row in data:
            if isinstance(row, BaseModel):
                new_row = {key: (value[:] if isinstance(value, list) else value) for key, value in row.model_dump().items()}
            else:
                new_row = {key: (value[:] if isinstance(value, list) else value) for key, value in row.items()}
            primary_key_value = new_row[base_primary_key]
            if primary_key_value not in pre_nested_data:
                for key, value in new_row.items():
                    if isinstance(value, list) and any(item[join_primary_key] is None for item in value):  # pragma: no cover
                        new_row[key] = []
                pre_nested_data[primary_key_value] = new_row
            else:
                existing_row = pre_nested_data[primary_key_value]
                for key, value in new_row.items():
                    if isinstance(value, list):
                        if any(item[join_primary_key] is None for item in value):  # pragma: no cover
                            existing_row[key] = []
                        else:
                            existing_row[key].extend(value)
    nested_data: list = list(pre_nested_data.values())
```
Specifically, in these loops:
```
for join_config in joins_config:
    join_primary_key = _get_primary_key(join_config.model)
    for row in data:
```
When the first iteration of the outer loop completes, the data is correct, but because there are two join configs, the outer loop executes again and extends the same lists. I think that's why the data is repeating.
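One possible guard (a hypothetical sketch, not the library's actual fix) is to extend the nested list only with join rows whose primary key hasn't been seen yet for that parent record:

```python
def extend_unique(existing, new_items, join_primary_key):
    # skip rows already present for this parent, keyed on the join PK
    seen = {item[join_primary_key] for item in existing}
    for item in new_items:
        if item[join_primary_key] not in seen:
            existing.append(item)
            seen.add(item[join_primary_key])
    return existing

portions = [{"id": 1, "quantity": "Half"}, {"id": 2, "quantity": "Full"}]
# the second pass of the outer loop re-delivers the same rows:
extend_unique(portions, [{"id": 1, "quantity": "Half"}, {"id": 2, "quantity": "Full"}], "id")
print(len(portions))  # 2 -- no repetition
```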
____________________________________________________________________________________________________________________________________
There is also another issue: if I add `join_on`, it shows the error message `Could not automatically determine join condition. Please provide join_on.` The reason is this line:
` join_on=join_on or _auto_detect_join_condition(self.model, join_model),`
To solve it, I changed it like this:
```
if join_on is None and joins_config is None:
    join_on = _auto_detect_join_condition(self.model, join_model)
```
| closed | 2024-07-01T02:50:34Z | 2024-12-23T03:40:51Z | https://github.com/igorbenav/fastcrud/issues/115 | [] | mithun2003 | 3 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 434 | I would like to sing in polish language instead of text | Is this tool able to "render" singing voice based on my voice? (not by typing text)
I would like to sound like some other person: basically, train the model with someone else's speech -> record my song with my voice -> generate a singing voice like someone else's. It doesn't need to be "perfect"; I just want it to sound like someone else.
Is that possible? How? | closed | 2020-07-20T19:13:22Z | 2021-01-16T22:35:48Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/434 | [] | annaskarzynska | 3 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 883 | TTS outputing different words than the ones typed in | Hi,
I am putting my hands on your fun project! Actually I am trying to clone a voice in French. I edited a short recording and made 16 extracts (22kHz mono 32 pcm Microsoft wav ranging from 1 to 5 seconds) out of it that I manually transcripted following the file hierarchy @blue-fish [shows](https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/437#issuecomment-666099538).
I also added some characters to [utils/symbols](https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/257#issuecomment-665428894). Then I launched the training with the command you gave : `python3 synthesizer_preprocess_audio.py datasets_root --datasets_name LibriTTS --subfolders train-clean-100 --no_alignments`
I let it go up to 22k steps because I did not know when to stop it ([this guy ](https://www.youtube.com/watch?v=b1fzyM0VhhI)suggests to stop it when the loss is less than 0.15) and because the SV2TTS folder in my datasets_root was not inflating. Indeed it seems that the command output the model in _synthesizer/saved_models/my_new_run_.
I looked at the gene

rated mel-spectrograms (see attached) and it looked promising since ground truth and predicted looked quite alike (at least to my uninstructed eyes).
But when I tried the model out in the toolbox I got pity results. Indeed I input "Bonjour le monde" and it output another phrase (eg : "on va en profiter") that directly comes from the extracts.
So I know you advised for a 12 minute recording for single speaker training but the other guy I mentioned earlier had good results with 20 or 30 extracts (in English), consequently I took my chances with even less extracts just to have a quick starting point to later compare with.
Yet I feel disappointed because the results I got were unexpected to me. I would have expected bad quality audio but not completely different words ! Unless the word in input in TTS field and the one it output as wav have the closest embedding ?
It also looks like extracts of a duration less than around 1.53s (reported duration from audacity) are discarded. Is it expected and is it linked to the 1.6 utterance duration written in Corentin's thesis page 16 ?
Finally I could not train the vocoder because of missing mels_gta directory. I know you wrote that it was not necessary to train the vocider, but if I miss mels_gta directory maybe something went wrong during the training. Or is everything OK ?
Is it worth it to continue with editing 12 minutes or more from this voice or I made something wrong in the process ?
Can you help me out ?
| closed | 2021-10-31T06:36:37Z | 2022-08-19T11:47:34Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/883 | [] | Ca-ressemble-a-du-fake | 29 |
Avaiga/taipy | data-visualization | 1,496 | [🐛 BUG] Callbacks not called in Python API | ### What went wrong? 🤔
Callbacks are not called by Taipy when they are referenced directly.
### Expected Behavior
They should still be called like before. This is a regression.
### Steps to Reproduce Issue
Run this code. The first slider doesn't call the callback, but the second does.
```python
from taipy.gui import Gui, notify
import taipy.gui.builder as tgb

value: int = 10

def on_slider(state):
    print("Not called with 'on_slider'")
    notify(state, "success", f"Value: {state.value}")

with tgb.Page() as page:
    # Not working
    tgb.slider(value="{value}", on_change=on_slider)
    # Works
    tgb.slider(value="{value}", on_change="{on_slider}")

Gui(page=page).run(title="Frontend Demo")
```
### Browsers
Chrome
### OS
Windows
### Version of Taipy
Develop - 7/10/24
### Acceptance Criteria
- [ ] Ensure new code is unit tested, and check code coverage is at least 90%.
- [ ] Create related issue in taipy-doc for documentation and Release Notes.
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional) | closed | 2024-07-10T09:48:17Z | 2024-07-10T13:59:40Z | https://github.com/Avaiga/taipy/issues/1496 | [
"🟥 Priority: Critical",
"🖰 GUI",
"💥Malfunction"
] | FlorianJacta | 0 |
sammchardy/python-binance | api | 1,155 | TESTNET: binance.exceptions.BinanceAPIException: APIError(code=-2008): Invalid Api-Key ID. | **Describe the bug**
I tried to connect to the binance testnet and to get the account status, but got the following error:
File "/home/h/anaconda3/envs/pairTrading/lib/python3.9/site-packages/binance/client.py", line 2065, in get_account_status
return self._request_margin_api('get', 'account/status', True, data=params)
File "/home/h/anaconda3/envs/pairTrading/lib/python3.9/site-packages/binance/client.py", line 365, in _request_margin_api
return self._request(method, uri, signed, **kwargs)
File "/home/h/anaconda3/envs/pairTrading/lib/python3.9/site-packages/binance/client.py", line 316, in _request
return self._handle_response(self.response)
File "/home/h/anaconda3/envs/pairTrading/lib/python3.9/site-packages/binance/client.py", line 325, in _handle_response
raise BinanceAPIException(response, response.status_code, response.text)
binance.exceptions.BinanceAPIException: APIError(code=-2008): Invalid Api-Key ID.
I found out that when I call `client.get_account_status()`, the client connects to the real Binance API and not to the testnet API, because the margin URI builder doesn't switch from the real URL to the testnet URL:
```
def _create_margin_api_uri(self, path: str, version: str = MARGIN_API_VERSION) -> str:
    return self.MARGIN_API_URL + '/' + version + '/' + path
```
Compare with other request builders that work:
```
def _create_api_uri(self, path: str, signed: bool = True, version: str = PUBLIC_API_VERSION) -> str:
    url = self.API_URL
    if self.testnet:
        url = self.API_TESTNET_URL
    v = self.PRIVATE_API_VERSION if signed else version
    return url + '/' + v + '/' + path
```
**To Reproduce**
client = Client(config.api_key, config.secret_key, testnet=True)
client.get_account_status()
**Expected behavior**
Response from the Testnet.
**Environment (please complete the following information):**
- Python version: 3.9
- Virtual Env: conda
- OS: RHEL 8.4
- python-binance version: 1.0.15
| open | 2022-03-02T16:11:38Z | 2024-04-01T22:14:17Z | https://github.com/sammchardy/python-binance/issues/1155 | [] | zenthara | 24 |
MycroftAI/mycroft-core | nlp | 2,712 | Use a static custom settings file to save user informationn | I'm in trouble with this problem: I need to create a user id card where put some information about it, like name, surname, age, and so on.
I tried to add my own custom file a the mycroft main level called userinfo.conf, i added a class in configuration.py, locations.py and __init__.py to load this conf file with the other mycroft configurations file.
The loading work very well, I just used the existent classes in configuration.py to create my class to load the file.
But I'm in trouble with the consistency of the information saved on my own configuration file.
Let me explain: if i have a skill to set the name of the user, the skill sets the name of the user, using the dict given by MycroftSkill (config_core) and then calling a mine own method in configuration.py class to save this info on my own conf file.
But the problem is that cause the dict where MycroftSkill save the configuration information is not static, if a change the value in that dict, all the others skills can't see the new value, because they load the configuration information at creation time so where the MycroftSkill constructor is called.
I need something that all skills can be seen and that is consistent over all the skills, so that if a skill changes the same value, then that value is visible by all other skills.
I know that all skills have a configuration file to save some value, but if i have a skill for the name, one for the age, and so on and i save each info on proper skill configuration file, then the access to the info is very useless because I have the info fragmented in different skills configuration files.
What is the right way to solve this problem? I also want to avoid access to this hypothetic file every time I need to read some info, For example, if all my skills use the name value to call the user by name during the interaction, i don't want that each skill every time open a stream to the file to read the value.
For this reason I tried to exploit the mycroft configuration pattern. | closed | 2020-10-01T12:02:05Z | 2024-09-08T08:33:18Z | https://github.com/MycroftAI/mycroft-core/issues/2712 | [
"Type: Enhancement - proposed",
"Status: For discussion"
] | damorosodaragona | 6 |
microsoft/nni | data-science | 5,660 | nni norm_pruning example error | Hello!
I am trying to locally run a pytorch version of one of the pruning examples (nni/examples/compression/pruning/norm_pruning.py).
However, it seems that the config_list that is generated by the function `auto_set_denpendency_group_ids` has some problem.
**The config_list is:**
[{'op_names': ['layer3.1.conv1'],
'sparse_ratio': 0.5,
'dependency_group_id': '0152545ff8de4d14a8cfe727bf9769d1',
'internal_metric_block': 1},
{'op_names': ['layer1.0.conv1'],
'sparse_ratio': 0.5,
'dependency_group_id': '5c913fb3076441e2af16c32c03758329',
'internal_metric_block': 1},
{'op_names': ['layer2.0.conv1'],
'sparse_ratio': 0.5,
'dependency_group_id': '497a228f19e047d8a26fa94cc97fbabf',
'internal_metric_block': 1},
{'op_names': ['layer4.0.downsample.0'],
'sparse_ratio': 0.5,
'dependency_group_id': 'ea22b181139c4199b090c4e702d85083',
'internal_metric_block': 1},
{'op_names': ['layer3.1.conv2'],
'sparse_ratio': 0.5,
'dependency_group_id': '60ccd2e1a186412e89d256682007b2f7',
'internal_metric_block': 1},
{'op_names': ['layer4.1.conv1'],
'sparse_ratio': 0.5,
'dependency_group_id': '70578023ad6e48c1b14ef44d5e6a0c3f',
'internal_metric_block': 1},
{'op_names': ['conv1'],
'sparse_ratio': 0.5,
'dependency_group_id': '01d9e39d16e94df4838ea98275f5d445',
'internal_metric_block': 1},
{'op_names': ['layer1.0.conv2'],
'sparse_ratio': 0.5,
'dependency_group_id': '01d9e39d16e94df4838ea98275f5d445',
'internal_metric_block': 1},
{'op_names': ['layer3.0.conv1'],
'sparse_ratio': 0.5,
'dependency_group_id': '0cbb55f71a484d64b775d7d82380d0dd',
'internal_metric_block': 1},
{'op_names': ['layer4.0.conv2'],
'sparse_ratio': 0.5,
'dependency_group_id': 'ea22b181139c4199b090c4e702d85083',
'internal_metric_block': 1},
{'op_names': ['layer1.1.conv2'],
'sparse_ratio': 0.5,
'dependency_group_id': '01d9e39d16e94df4838ea98275f5d445',
'internal_metric_block': 1},
{'op_names': ['layer2.0.downsample.0'],
'sparse_ratio': 0.5,
'dependency_group_id': '3a2f40dea91340f296b0e40049ee1b57',
'internal_metric_block': 1},
{'op_names': ['layer2.1.conv1'],
'sparse_ratio': 0.5,
'dependency_group_id': 'aa5de9115c5141aeb1736ed8d9f479fd',
'internal_metric_block': 1},
{'op_names': ['layer4.1.conv2'],
'sparse_ratio': 0.5,
'dependency_group_id': 'ea22b181139c4199b090c4e702d85083',
'internal_metric_block': 1},
{'op_names': ['layer2.1.conv2'],
'sparse_ratio': 0.5,
'dependency_group_id': '3a2f40dea91340f296b0e40049ee1b57',
'internal_metric_block': 1},
{'op_names': ['layer3.0.conv2'],
'sparse_ratio': 0.5,
'dependency_group_id': '60ccd2e1a186412e89d256682007b2f7',
'internal_metric_block': 1},
{'op_names': ['layer4.0.conv1'],
'sparse_ratio': 0.5,
'dependency_group_id': '526922a4a69d4a46b2fdbf937f8283dc',
'internal_metric_block': 1},
{'op_names': ['layer1.1.conv1'],
'sparse_ratio': 0.5,
'dependency_group_id': 'f1dbbaba5cce46698e3efbcd84d48e4c',
'internal_metric_block': 1},
{'op_names': ['layer3.0.downsample.0'],
'sparse_ratio': 0.5,
'dependency_group_id': '60ccd2e1a186412e89d256682007b2f7',
'internal_metric_block': 1},
{'op_names': ['layer2.0.conv2'],
'sparse_ratio': 0.5,
'dependency_group_id': '3a2f40dea91340f296b0e40049ee1b57',
'internal_metric_block': 1}]
**The error I get:**
Or(And({Or('sparsity', 'sparsity_per_layer'): And(<class 'float'>, <function <lambda> at 0x7d11a9a048b0>), Optional('op_types'): And(['Conv2d', 'Linear'], <function CompressorSchema._modify_schema.<locals>.<lambda> at 0x7d11a98b8ca0>), Optional('op_names'): And([<class 'str'>], <function CompressorSchema._modify_schema.<locals>.<lambda> at 0x7d11a74cb520>), Optional('op_partial_names'): [<class 'str'>]}, <function CompressorSchema._modify_schema.<locals>.<lambda> at 0x7d11a74cbeb0>), And({'exclude': <class 'bool'>, Optional('op_types'): And(['Conv2d', 'Linear'], <function CompressorSchema._modify_schema.<locals>.<lambda> at 0x7d11a74cbe20>), Optional('op_names'): And([<class 'str'>], <function CompressorSchema._modify_schema.<locals>.<lambda> at 0x7d11a74cbd90>), Optional('op_partial_names'): [<class 'str'>]}, <function CompressorSchema._modify_schema.<locals>.<lambda> at 0x7d11a74cb490>), And({'total_sparsity': And(<class 'float'>, <function <lambda> at 0x7d11a9a07be0>), Optional('max_sparsity_per_layer'): {<class 'str'>: <class 'float'>}, Optional('op_types'): And(['Conv2d', 'Linear'], <function CompressorSchema._modify_schema.<locals>.<lambda> at 0x7d11a7429ea0>), Optional('op_names'): And([<class 'str'>], <function CompressorSchema._modify_schema.<locals>.<lambda> at 0x7d11a7428790>)}, <function CompressorSchema._modify_schema.<locals>.<lambda> at 0x7d11a7428430>)) did not validate {'op_names': ['layer3.1.conv1'], 'sparse_ratio': 0.5, 'dependenc...
Missing key: Or('sparsity', 'sparsity_per_layer')
Missing key: 'exclude'
Missing key: 'total_sparsity'
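Judging purely from the alternatives listed in the schema error (not from the nni docs), the validator accepts a `sparsity`/`sparsity_per_layer` key, an `exclude` entry, or a `total_sparsity` key — but not `sparse_ratio`. A hypothetical re-keyed entry would look like:

```python
# Key names inferred from the schema error above; values are placeholders.
config_list = [
    {"op_names": ["layer1.1.conv1"], "sparsity": 0.5},
    # or, using the third accepted shape from the error message:
    # {"op_names": ["layer1.1.conv1"], "total_sparsity": 0.5},
]
```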
Would be happy to understand what I am missing.
Thanks a lot!
Noy
| closed | 2023-08-13T13:32:48Z | 2023-08-17T07:30:54Z | https://github.com/microsoft/nni/issues/5660 | [] | NoyLalzary | 0 |
QuivrHQ/quivr | api | 2,741 | Notion Synchronization | Synchronise with Notion | closed | 2024-06-25T16:49:37Z | 2024-09-28T20:06:11Z | https://github.com/QuivrHQ/quivr/issues/2741 | [
"Stale"
] | StanGirard | 2 |
frappe/frappe | rest-api | 31,525 | Reports: Total Row rendered in the midst of a report |
## Description of the issue
In reports with the total row enabled, the total row is rendered as a regular report row in the midst of the data instead of at the end of the report.
## Context information (for bug reports)
<img width="637" alt="Image" src="https://github.com/user-attachments/assets/22fdfc85-6dae-473c-9d31-edc80c634e65" />
**Output of `bench version`**
```
erpnext 15.53.4
frappe 15.57.0
```
## Steps to reproduce the issue
1. Create a Query Report that returns about 30 lines and enable "Add total row"
2. Show report and scroll down to the end of the report
### Observed result
If you scroll down in the report, the total row moves up like a data line instead of staying at the bottom.
### Expected result
Total row stays at the bottom of the report.
### Stacktrace / full error message
```
Does not occur
```
## Additional information
OS version / distribution, `Frappe` install method, etc.
Debian bullseye | closed | 2025-03-05T06:56:44Z | 2025-03-21T00:16:00Z | https://github.com/frappe/frappe/issues/31525 | [
"bug"
] | zongo811 | 2 |
oegedijk/explainerdashboard | plotly | 292 | Update component plots when selecting data | Hello, I'm making a custom dashboard with ExplainerDashboard components and a map. The idea is to be able to select a region on the map to filter the data and recalculate the SHAP values, in order to understand that particular area's predictions by looking at its feature importances. However, since I'm not an expert in Dash I haven't been able to update the components. After being initialized correctly, once I select an area of the map and trigger the callback, the component plots end up empty. This is my (shortened) code:
dash.py (omitting initial setup)
```
app = Dash(__name__)
server = app.server
map_tab = RegressionDashboard(consolidated, eb_explainer, model, model_type, name="Regression Dashboard", app=app)
app.layout = html.Div([
map_tab.layout()
])
map_tab.register_callbacks(app)
if __name__ == "__main__":
log.info('Starting dashboard server ...')
app.run(port=6660, host='0.0.0.0')
```
regression_dashboard.py
```
class RegressionDashboard(ExplainerComponent):
def __init__(self, consolidated, explainer, model, model_type, app, source_crs='EPSG:32719',name=None,**kwargs):
super().__init__(explainer, title="Map")
# a lot of self.(something) lines
self.contrib = ShapContributionsGraphComponent(explainer,
hide_selector=True, hide_cats=True,
hide_depth=True, hide_sort=True,
**kwargs)
self.shap_summary = ShapSummaryComponent(explainer, hide_selector=True, hide_cats=True,
hide_depth=True, hide_sort=True, hide_type=True,
**kwargs) #Feature importances basically, edit title
self.shap_dependance = ShapDependenceComponent(explainer, hide_selector=True, hide_cats=True,
hide_depth=True, hide_sort=True, plot_sample=100000,
**kwargs)
self.shap_dependance_connector = ShapSummaryDependenceConnector(self.shap_summary, self.shap_dependance)
#terrible layout, just for testing purposes
def layout(self):
self.map_fig = self.create_map()
return html.Div(
html.Div([
html.Div(
dcc.Graph(figure=self.map_fig, id="preds_map", style={'height': '45vh'}),
style={
'width': '50%',
'display': 'inline-block',
'border': 'thin lightgrey solid',
'boxSizing': 'border-box',
'height': '50vh'
}
),
html.Div([
self.contrib.layout(),
self.shap_summary.layout(),
self.shap_dependance.layout(),
],
)
],
style={
'width': '100%',
'height': '60vh'
}),
id='layout-container')
def update_layout_components(self):
return html.Div([
html.Div(
dcc.Graph(figure=self.map_fig, id="preds_map", style={'height': '45vh'}),
style={
'width': '50%',
'display': 'inline-block',
'border': 'thin lightgrey solid',
'boxSizing': 'border-box',
'height': '50vh'
}
),
html.Div([
self.contrib.layout(),
self.shap_summary.layout(),
self.shap_dependance.layout(),
]),
],
style={
'width': '100%',
'height': '60vh'
})
def create_map(self, filtered_data = None, max_points = None):
#map code, irrelevant
return fig
def transform_coordinates(self, df, x_col, y_col, source_crs):
# transform coordinates from one system to another, irrelevant
return df
    # I want to filter by coordinates, but right now I'm just trying to update
    # the plots by making a random subsample of the data, to prove that the
    # plots are updating.
def update_components(self):
predictor = self.model.steps[-1][1]
X_transformed, blockids = consolidated_to_X(self.consolidated.sample(n=3000, random_state=42), self.model)
X_transformed.drop(['long', 'lat'], axis=1, inplace=True)
explainer = RegressionExplainer(model=predictor, X=X_transformed, n_jobs=-1, index_name="Block ID",
precision="float32", target="DEPVAR")
shap_explainer = shap.Explainer(predictor, X_transformed)
shap_values = shap_explainer.shap_values(X_transformed, check_additivity=False, approximate=True)
base_values = shap_explainer.expected_value
explainer.set_shap_values(base_values, shap_values)
self.contrib = ShapContributionsGraphComponent(explainer,
hide_selector=True, hide_cats=True,
hide_depth=True, hide_sort=True,
)
self.shap_summary = ShapSummaryComponent(explainer, hide_selector=True, hide_cats=True,
hide_depth=True, hide_sort=True, hide_type=True,
) #Feature importances basically, edit title
self.shap_dependance = ShapDependenceComponent(explainer, hide_selector=True, hide_cats=True,
hide_depth=True, hide_sort=True, plot_sample=100000,
)
self.shap_dependance_connector = ShapSummaryDependenceConnector(self.shap_summary, self.shap_dependance)
def component_callbacks(self, app):
@app.callback(
Output('layout-container', 'children'),
Input('preds_map', 'selectedData'),
prevent_initial_call=True)
def update_selected_data(selectedData):
if not selectedData:
raise PreventUpdate
self.update_components()
new_layout = self.update_layout_components()
return new_layout
```
What am I missing here? I know there's probably a lot of unnecessary code here and it's really messy, but I'm really losing my mind over this. Any help is greatly appreciated. Thanks!
pallets-eco/flask-sqlalchemy | flask | 659 | lgfntveceig | closed | 2018-12-16T20:16:54Z | 2020-12-05T20:46:22Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/659 | [] | seanmcfeely | 0 | |
MagicStack/asyncpg | asyncio | 237 | Inserting array types [0,1,2,...] |
* **asyncpg version**:
* **PostgreSQL version**:
* **Do you use a PostgreSQL SaaS? If so, which? Can you reproduce
the issue with a local PostgreSQL install?**:
* **Python version**:
* **Platform**:
* **Do you use pgbouncer?**:
* **Did you install asyncpg with pip?**:
* **If you built asyncpg locally, which version of Cython did you use?**:
* **Can the issue be reproduced under both asyncio and
[uvloop](https://github.com/magicstack/uvloop)?**:
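For the title's case — inserting a list like `[0, 1, 2]` into a Postgres array column — asyncpg encodes a plain Python list as the array, so the list itself should be passed as the query parameter. A minimal sketch (the table and column names here are hypothetical):

```python
# Sketch only: 'conn' is assumed to be an asyncpg connection and
# 'results(scores integer[])' an assumed table.
async def insert_scores(conn, values):
    # Pass the Python list directly; asyncpg converts it to a Postgres array.
    await conn.execute(
        "INSERT INTO results (scores) VALUES ($1)",
        values,
    )

# The common pitfall is serialising the array yourself:
bad = "{0,1,2}"    # a str -- rejected for an integer[] column
good = [0, 1, 2]   # a list of ints -- encoded as integer[]
```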
Running the most recent asyncpg, PostgreSQL 9.6, uvloop, Python 3.6, no bouncer, on Linux. I'm trying to insert an array into an array column in PG, and it constantly gives me type errors (it looks for int but got str; give it a str and the type is not iterable; etc.). I have scoured the documentation looking for the proper way to deal with this type, but have resorted to posting here. Is there an accepted way to insert this data type? Thanks in advance! | closed | 2017-12-13T13:31:26Z | 2017-12-13T15:26:42Z | https://github.com/MagicStack/asyncpg/issues/237 | [] | ghost | 6 |
nonebot/nonebot2 | fastapi | 3,262 | Bot: AntiFraudBot | ### Bot name
AntiFraudBot
### Bot description
Anti-fraud bot
### Bot repository / homepage link
https://github.com/itsevin/AntiFraudBot
### Tags
[{"label":"反诈","color":"#ea5252"}] | closed | 2025-01-16T14:10:27Z | 2025-01-19T03:07:27Z | https://github.com/nonebot/nonebot2/issues/3262 | [
"Bot",
"Publish"
] | itsevin | 4 |
deepspeedai/DeepSpeed | machine-learning | 6,878 | How can DeepSpeed be configured to prevent the merging of parameter groups | The optimizer has been re-implemented to group parameters and set different learning rates for each group. However, after using DeepSpeed, all the `param_groups` are merged into one. How can this be prevented?
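One commonly suggested workaround — sketched here under the assumption that constructing the optimizer client-side is acceptable, and not verified against ZeRO stage 3 — is to build the grouped optimizer yourself, pass it to `deepspeed.initialize`, and drop the `optimizer` section from the config so DeepSpeed does not build its own single-group optimizer:

```python
import copy

def strip_ds_optimizer(ds_config):
    """Return a copy of a DeepSpeed config without its 'optimizer' section,
    so a client-side optimizer (with its own param_groups) is used instead."""
    cfg = copy.deepcopy(ds_config)
    cfg.pop("optimizer", None)
    return cfg

# Hypothetical usage (torch/deepspeed calls shown as comments only):
# optimizer = torch.optim.AdamW([
#     {"params": head_params, "lr": 1e-3},
#     {"params": backbone_params, "lr": 1e-5},
# ])
# engine, optimizer, _, _ = deepspeed.initialize(
#     model=model, optimizer=optimizer, config=strip_ds_optimizer(ds_config))
```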
```
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"bf16": {
"enabled": "auto"
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupCosineLR",
"params": {
"total_num_steps": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "none",
"pin_memory": true
},
"offload_param": {
"device": "none",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
``` | open | 2024-12-16T14:30:54Z | 2024-12-18T11:32:04Z | https://github.com/deepspeedai/DeepSpeed/issues/6878 | [] | CLL112 | 3 |
openapi-generators/openapi-python-client | fastapi | 605 | $ref in path parameters doesn't seem to work | **Describe the bug**
If a `get` path has a parameter using `$ref` to point to `#/components/parameters`, the generated API lacks the corresponding `kwarg`. If one "hoists" the indirect parameter up into the path's `parameters`, it works as expected.
**To Reproduce**
See [this repo](https://github.com/jrobbins-LiveData/openapi-ref-issue) for a reproducible example.
And see [this repo](https://github.com/jrobbins-LiveData/openapi-noref/blob/main/test-schema.yaml) for the same schema without the `$ref`, working as expected.
**Expected behavior**
I expect to see the optional `page` `kwarg` in both generated APIs.
I see this in the `$ref` example:
```python
def sync(
*,
client: Client,
) -> Optional[List[str]]:
```
and this in the `noref` example:
```python
def sync(
*,
client: Client,
page: Union[Unset, None, GetTestPage] = UNSET,
) -> Optional[List[str]]:
```
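For quick reference, the failing shape can be sketched like this (a hypothetical minimal fragment — the real schemas are in the linked repos):

```yaml
paths:
  /test:
    get:
      parameters:
        - $ref: '#/components/parameters/Page'   # generated client drops the 'page' kwarg
components:
  parameters:
    Page:
      name: page
      in: query
      required: false
      schema:
        type: string
```

Moving the `Page` definition inline under the path's `parameters` (as in the noref repo) restores the kwarg.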
**OpenAPI Spec File**
[A link to your openapi.json which produces this issue.](https://github.com/jrobbins-LiveData/openapi-ref-issue/blob/main/test-schema.yaml)
**Desktop (please complete the following information):**
- OS: Microsoft Windows 10 Pro 10.0.19044 Build 19044
- Python Version: 3.9.12
- openapi-python-client version 0.11.1
| open | 2022-04-30T20:56:37Z | 2022-04-30T20:56:37Z | https://github.com/openapi-generators/openapi-python-client/issues/605 | [
"🐞bug"
] | jrobbins-LiveData | 0 |
hankcs/HanLP | nlp | 601 | Custom dictionary not taking effect? |
## Checklist
Please confirm the following items:
* I have carefully read the documents below and found no answer in any of them:
  - [Home page](https://github.com/hankcs/HanLP)
  - [wiki](https://github.com/hankcs/HanLP/wiki)
  - [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have searched for my question with [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues), and found no answer either.
* I understand that the open-source community is a free community gathered out of shared interest, and assumes no responsibility or obligation. I will speak politely and thank everyone who helps me.
* [ ] I type an x inside these brackets to confirm the items above.
## Version
I include it directly via Maven:
<dependency>
<groupId>com.hankcs</groupId>
<artifactId>hanlp</artifactId>
<version>portable-1.3.4</version>
</dependency>
The current latest version is: 1.3.4
The version I am using is: 1.3.4
## My question
The custom dictionary does not take effect.
## Reproducing the problem
The custom dictionary does not take effect.
### Steps
```
In the configuration file hanlp.properties I added
CustomDictionaryPath=data/dictionary/custom/CustomDictionary.txt;data/dictionary/custom/CustomDictionary/自定义词典.txt
```
The custom dictionary format is as follows:
```
罗氏婴儿配方粉 n 1000
挂花大头菜 n 1000
黄毛籽 n 1000
青豆 n 1000
儿童营养饼干 n 1000
汤菜 n 1000
青萝卜 n 1000
```
Segmentation result:
### Triggering code
```
String str = "罗氏婴儿配方粉是什么?";
CoNLLSentence sentence = new NeuralNetworkDependencyParser().enableDeprelTranslator(false).parse(str);
// it can be iterated over conveniently
for (CoNLLWord word : sentence) {
System.out.printf("%s--%s --(%s)--> %s\n",word.ID, word.LEMMA, word.DEPREL, word.CPOSTAG);
}
```
### Expected output
```
1--罗氏婴儿配方粉 --(SBV)-->n
2--是 --(HED)--> v
3--什么 --(VOB)--> r
4--? --(WP)--> wp
```
### Actual output
```
1--罗氏 --(ATT)--> nz
2--婴儿 --(ATT)--> n
3--配方 --(ATT)--> n
4--粉 --(SBV)--> a
5--是 --(HED)--> v
6--什么 --(VOB)--> r
7--? --(WP)--> wp
```
| closed | 2017-08-10T10:35:41Z | 2020-01-01T11:08:29Z | https://github.com/hankcs/HanLP/issues/601 | [
"ignored"
] | djblovecxc | 2 |
STVIR/pysot | computer-vision | 202 | Can you upload models to site accessible from outside of China? | I live outside China, and am unable to create a Baidu account. Could you host the files on a different site? | closed | 2019-10-10T13:29:12Z | 2020-04-24T10:07:14Z | https://github.com/STVIR/pysot/issues/202 | [] | asw-v4 | 1 |
cvat-ai/cvat | computer-vision | 8,627 | GT annotations can sometimes show up in the Standard mode | ### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
In a task with honeypots, GT annotations can sometimes show up in the UI when an annotation job is opened. It can happen on the first opening of the job or later. If the mode is switched, e.g. to Review, the GT annotations correctly disappear until the "show conflicts" button is pressed.

### Expected Behavior
_No response_
### Possible Solution
_No response_
### Context
_No response_
### Environment
- bce96eaafd0dc1dab6d59044aba6e15f7ca3163e
| closed | 2024-10-31T15:03:55Z | 2024-11-13T14:20:47Z | https://github.com/cvat-ai/cvat/issues/8627 | [
"bug",
"ui/ux"
] | zhiltsov-max | 0 |
nltk/nltk | nlp | 2,622 | [wiki] SENNA binary link is outdated | On this page:
https://github.com/nltk/nltk/wiki/Installing-Third-Party-Software
The link to the SENNA toolkit is no longer accessible. It was most likely taken from [here](https://www.nec-labs.com/research-departments/machine-learning/machine-learning-software/Senna), where it also leads nowhere.
The only other place I found this distribution is the author's website: https://ronan.collobert.com/senna/
"documentation",
"resolved"
] | ermik | 1 |
voila-dashboards/voila | jupyter | 1,252 | Updating widgets in voila behaves differently than jupyterlab (voila flashes for a few seconds) | Very simple app
One button, one table. If you click the button it refreshes the table.
If you run it in JupyterLab it works fine and the table updates without flashing.
If you run it in Voila the table gets updated, but it disappears for a second before reappearing.
```
import pandas as pd
import numpy as np
import ipywidgets as w
from ipydatagrid import DataGrid
def make_grid(rows=20, cols=2):
df = pd.DataFrame(np.random.uniform(size=(rows, cols))).round(2)
return DataGrid(df)
btn1 = w.Button(description='click me')
cont1 = w.HBox([make_grid()])
def on_click_update1(event):
grid = make_grid()
cont1.children[0].data = grid.data
btn1.on_click(on_click_update1)
b1 = w.VBox([btn1, cont1])
``` | open | 2022-11-06T09:55:39Z | 2022-11-28T18:50:22Z | https://github.com/voila-dashboards/voila/issues/1252 | [
"bug"
] | gioxc88 | 2 |
nltk/nltk | nlp | 2,934 | Potential bug in sentence tokenizer since 3.6.6 | We use `nltk` tokenizer `tokenizers/punkt/english.pickle` to split relatively long text into sentences.
After upgrading to 3.6.6, we noticed at least one change in the tokenizer results, which looks rather like a bug.
Given this simple text example:
```
1. This is R .
2. This is A .
3. That's all
```
We expect those sentences:
```json
[
"1.",
"This is R .",
"2.",
"This is A .",
"3.",
"That's all"
]
```
However since `3.6.6` we got these sentences:
```json
[
"1.",
"This is R .\n2.",
"This is A .",
"3.",
"That's all"
]
```
This may look very insignificant, but it may have some effect on sensitive transformer-based NLP models...
Here is the whole code to reproduce it:
```python
import nltk
text = """1. This is R .
2. This is A .
3. That's all"""
tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
tokenizer.tokenize(text)
``` | closed | 2022-01-24T15:50:45Z | 2022-07-04T05:40:03Z | https://github.com/nltk/nltk/issues/2934 | [
"bug"
] | radcheb | 5 |
CPJKU/madmom | numpy | 502 | Incompatible with Python 3.10 because MutableSequence was moved to collections.abc | ### Expected behaviour
`import madmom` should work, since it is necessary in order to use this library.
### Actual behaviour
Fails with
```ImportError: cannot import name 'MutableSequence' from 'collections' (..../python3.10/collections/__init__.py)```
### Steps needed to reproduce the behaviour
Import madmom under python 3.10
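The failure stems from `MutableSequence` having moved to `collections.abc` in Python 3.3 and being removed from `collections` in 3.10. A sketch of the usual compatibility shim (an assumption about how the fix could look, not madmom's actual patch):

```python
try:
    # Python 3.3+ location; the only one left as of Python 3.10
    from collections.abc import MutableSequence
except ImportError:
    # Fallback for interpreters that predate collections.abc
    from collections import MutableSequence
```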
### Suggested solution
Import [MutableSequences](https://docs.python.org/3/library/collections.abc.html#collections.abc.MutableSequence) from `collections.abc` | closed | 2022-01-25T08:20:21Z | 2023-11-10T11:56:53Z | https://github.com/CPJKU/madmom/issues/502 | [] | johentsch | 9 |
Miserlou/Zappa | flask | 1,736 | Zappa won't deploy from Travis with environment variables |
## Context
I'm trying to deploy a Zappa app as per [this blog post](https://blog.zappa.io/posts/continuous-zappa-deployments-with-travis) using Python 3.6 and Travis.
## Expected Behavior
The app would deploy. I can deploy from my computer, but not from Travis.
## Actual Behavior
```
$ export AWS_ACCESS_KEY_ID=[secure]
$ export AWS_SECRET_ACCESS_KEY=[secure]
$ source ~/virtualenv/python3.6/bin/activate
$ python --version
Python 3.6.3
$ pip --version
pip 9.0.1 from /home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages (python 3.6)
$ pip install -r requirements.txt
Collecting boto3==1.9.66 (from -r requirements.txt (line 1))
Downloading https://files.pythonhosted.org/packages/6b/31/2922911f17940f8616c075415cda13de88577f350c31f9b5ea14ed104e7c/boto3-1.9.66-py2.py3-none-any.whl (128kB)
Collecting pandas==0.22.0 (from -r requirements.txt (line 2))
Downloading https://files.pythonhosted.org/packages/da/c6/0936bc5814b429fddb5d6252566fe73a3e40372e6ceaf87de3dec1326f28/pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl (26.2MB)
Collecting Werkzeug==0.14.1 (from -r requirements.txt (line 3))
Downloading https://files.pythonhosted.org/packages/20/c4/12e3e56473e52375aa29c4764e70d1b8f3efa6682bef8d0aae04fe335243/Werkzeug-0.14.1-py2.py3-none-any.whl (322kB)
Collecting xxhash==1.3.0 (from -r requirements.txt (line 4))
Downloading https://files.pythonhosted.org/packages/df/db/abd8ecd1753b60e5b527365676482bda272d71eaab0ad732a8be5f11d2d8/xxhash-1.3.0-cp36-cp36m-manylinux1_x86_64.whl (46kB)
Collecting Flask==1.0.2 (from -r requirements.txt (line 5))
Downloading https://files.pythonhosted.org/packages/7f/e7/08578774ed4536d3242b14dacb4696386634607af824ea997202cd0edb4b/Flask-1.0.2-py2.py3-none-any.whl (91kB)
Collecting zappa (from -r requirements.txt (line 6))
Downloading https://files.pythonhosted.org/packages/64/e2/2c13f117d2178dd45f25f04c0a3a2afb59f2dc779fec1bdeaf7a8567f776/zappa-0.47.1-py3-none-any.whl (107kB)
Collecting pyarrow (from -r requirements.txt (line 7))
Downloading https://files.pythonhosted.org/packages/36/94/23135312f97b20d6457294606fb70fad43ef93b7bffe567088ebe3623703/pyarrow-0.11.1-cp36-cp36m-manylinux1_x86_64.whl (11.6MB)
Collecting s3transfer<0.2.0,>=0.1.10 (from boto3==1.9.66->-r requirements.txt (line 1))
Downloading https://files.pythonhosted.org/packages/d7/14/2a0004d487464d120c9fb85313a75cd3d71a7506955be458eebfe19a6b1d/s3transfer-0.1.13-py2.py3-none-any.whl (59kB)
Collecting botocore<1.13.0,>=1.12.66 (from boto3==1.9.66->-r requirements.txt (line 1))
Downloading https://files.pythonhosted.org/packages/50/c0/cd4f8bec8a10876f0ce34f0cf264fda04e09df41d1a473a43f890c71fffa/botocore-1.12.71-py2.py3-none-any.whl (5.2MB)
Collecting jmespath<1.0.0,>=0.7.1 (from boto3==1.9.66->-r requirements.txt (line 1))
Downloading https://files.pythonhosted.org/packages/b7/31/05c8d001f7f87f0f07289a5fc0fc3832e9a57f2dbd4d3b0fee70e0d51365/jmespath-0.9.3-py2.py3-none-any.whl
Collecting python-dateutil>=2 (from pandas==0.22.0->-r requirements.txt (line 2))
Downloading https://files.pythonhosted.org/packages/74/68/d87d9b36af36f44254a8d512cbfc48369103a3b9e474be9bdfe536abfc45/python_dateutil-2.7.5-py2.py3-none-any.whl (225kB)
Collecting pytz>=2011k (from pandas==0.22.0->-r requirements.txt (line 2))
Downloading https://files.pythonhosted.org/packages/f8/0e/2365ddc010afb3d79147f1dd544e5ee24bf4ece58ab99b16fbb465ce6dc0/pytz-2018.7-py2.py3-none-any.whl (506kB)
Requirement already satisfied: numpy>=1.9.0 in /home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages (from pandas==0.22.0->-r requirements.txt (line 2))
Collecting click>=5.1 (from Flask==1.0.2->-r requirements.txt (line 5))
Downloading https://files.pythonhosted.org/packages/fa/37/45185cb5abbc30d7257104c434fe0b07e5a195a6847506c074527aa599ec/Click-7.0-py2.py3-none-any.whl (81kB)
Collecting itsdangerous>=0.24 (from Flask==1.0.2->-r requirements.txt (line 5))
Downloading https://files.pythonhosted.org/packages/76/ae/44b03b253d6fade317f32c24d100b3b35c2239807046a4c953c7b89fa49e/itsdangerous-1.1.0-py2.py3-none-any.whl
Collecting Jinja2>=2.10 (from Flask==1.0.2->-r requirements.txt (line 5))
Downloading https://files.pythonhosted.org/packages/7f/ff/ae64bacdfc95f27a016a7bed8e8686763ba4d277a78ca76f32659220a731/Jinja2-2.10-py2.py3-none-any.whl (126kB)
Collecting PyYAML==3.13 (from zappa->-r requirements.txt (line 6))
Downloading https://files.pythonhosted.org/packages/9e/a3/1d13970c3f36777c583f136c136f804d70f500168edc1edea6daa7200769/PyYAML-3.13.tar.gz (270kB)
Requirement already satisfied: six>=1.11.0 in /home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages (from zappa->-r requirements.txt (line 6))
Requirement already satisfied: pip>=9.0.1 in /home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages (from zappa->-r requirements.txt (line 6))
Collecting lambda-packages==0.20.0 (from zappa->-r requirements.txt (line 6))
Downloading https://files.pythonhosted.org/packages/0d/27/e588646a1c8c47b96862aafa66416142db5db857732594aafe19cbbf3fda/lambda_packages-0.20.0.tar.gz (99.7MB)
Collecting tqdm==4.19.1 (from zappa->-r requirements.txt (line 6))
Downloading https://files.pythonhosted.org/packages/c0/d3/7f930cbfcafae3836be39dd3ed9b77e5bb177bdcf587a80b6cd1c7b85e74/tqdm-4.19.1-py2.py3-none-any.whl (50kB)
Collecting troposphere>=1.9.0 (from zappa->-r requirements.txt (line 6))
Downloading https://files.pythonhosted.org/packages/c7/d8/ac1aa690cb15d8b5f451ff1c54d106534d305a7dadee25c0052b144d425f/troposphere-2.3.4.tar.gz (128kB)
Collecting requests>=2.20.0 (from zappa->-r requirements.txt (line 6))
Downloading https://files.pythonhosted.org/packages/7d/e3/20f3d364d6c8e5d2353c72a67778eb189176f08e873c9900e10c0287b84b/requests-2.21.0-py2.py3-none-any.whl (57kB)
Collecting future==0.16.0 (from zappa->-r requirements.txt (line 6))
Downloading https://files.pythonhosted.org/packages/00/2b/8d082ddfed935f3608cc61140df6dcbf0edea1bc3ab52fb6c29ae3e81e85/future-0.16.0.tar.gz (824kB)
Collecting docutils>=0.12 (from zappa->-r requirements.txt (line 6))
Downloading https://files.pythonhosted.org/packages/36/fa/08e9e6e0e3cbd1d362c3bbee8d01d0aedb2155c4ac112b19ef3cae8eed8d/docutils-0.14-py3-none-any.whl (543kB)
Collecting toml>=0.9.4 (from zappa->-r requirements.txt (line 6))
Downloading https://files.pythonhosted.org/packages/a2/12/ced7105d2de62fa7c8fb5fce92cc4ce66b57c95fb875e9318dba7f8c5db0/toml-0.10.0-py2.py3-none-any.whl
Collecting wsgi-request-logger==0.4.6 (from zappa->-r requirements.txt (line 6))
Downloading https://files.pythonhosted.org/packages/04/dd/5e6c52b96a841baec75e5c5647460214aa02e9c4902c7b250375352224c0/wsgi-request-logger-0.4.6.tar.gz
Collecting argcomplete==1.9.3 (from zappa->-r requirements.txt (line 6))
Downloading https://files.pythonhosted.org/packages/0d/f2/058910b2c732092175875820177dae9d390e71a5f30a9895f92e6a6ca466/argcomplete-1.9.3-py2.py3-none-any.whl
Collecting kappa==0.6.0 (from zappa->-r requirements.txt (line 6))
Downloading https://files.pythonhosted.org/packages/ee/fa/1b8328d2199520ef5a257f8a2e9315ed0b0194e353a152ca1959490dfbc8/kappa-0.6.0.tar.gz
Collecting durationpy==0.5 (from zappa->-r requirements.txt (line 6))
Downloading https://files.pythonhosted.org/packages/24/85/70dabb62c3f9705566db0d8a33ebf6c203b5887a1a186f8bcf58a9bf46a6/durationpy-0.5.tar.gz
Collecting python-slugify==1.2.4 (from zappa->-r requirements.txt (line 6))
Downloading https://files.pythonhosted.org/packages/9f/77/ab7134b731d0e831cf82861c1ab0bb318e80c41155fa9da18958f9d96057/python_slugify-1.2.4-py2.py3-none-any.whl
Requirement already satisfied: wheel>=0.30.0 in /home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages (from zappa->-r requirements.txt (line 6))
Collecting hjson==3.0.1 (from zappa->-r requirements.txt (line 6))
Downloading https://files.pythonhosted.org/packages/8a/92/6b6b85064f8a88cb3b31901d839e7b45c33e4ee450bb1b3cf0c226cca8ec/hjson-3.0.1.tar.gz (43kB)
Collecting urllib3<1.25,>=1.20; python_version >= "3.4" (from botocore<1.13.0,>=1.12.66->boto3==1.9.66->-r requirements.txt (line 1))
Downloading https://files.pythonhosted.org/packages/62/00/ee1d7de624db8ba7090d1226aebefab96a2c71cd5cfa7629d6ad3f61b79e/urllib3-1.24.1-py2.py3-none-any.whl (118kB)
Collecting MarkupSafe>=0.23 (from Jinja2>=2.10->Flask==1.0.2->-r requirements.txt (line 5))
Downloading https://files.pythonhosted.org/packages/08/04/f2191b50fb7f0712f03f064b71d8b4605190f2178ba02e975a87f7b89a0d/MarkupSafe-1.1.0-cp36-cp36m-manylinux1_x86_64.whl
Collecting cfn_flip>=1.0.2 (from troposphere>=1.9.0->zappa->-r requirements.txt (line 6))
Downloading https://files.pythonhosted.org/packages/2b/98/c91146bf03087ea35fedeac0e7a751af9cbc29b560f576e6422aaacbe13d/cfn_flip-1.1.0.post1-py3-none-any.whl
Collecting idna<2.9,>=2.5 (from requests>=2.20.0->zappa->-r requirements.txt (line 6))
Downloading https://files.pythonhosted.org/packages/14/2c/cd551d81dbe15200be1cf41cd03869a46fe7226e7450af7a6545bfc474c9/idna-2.8-py2.py3-none-any.whl (58kB)
Collecting certifi>=2017.4.17 (from requests>=2.20.0->zappa->-r requirements.txt (line 6))
Downloading https://files.pythonhosted.org/packages/9f/e0/accfc1b56b57e9750eba272e24c4dddeac86852c2bebd1236674d7887e8a/certifi-2018.11.29-py2.py3-none-any.whl (154kB)
Collecting chardet<3.1.0,>=3.0.2 (from requests>=2.20.0->zappa->-r requirements.txt (line 6))
Downloading https://files.pythonhosted.org/packages/bc/a9/01ffebfb562e4274b6487b4bb1ddec7ca55ec7510b22e4c51f14098443b8/chardet-3.0.4-py2.py3-none-any.whl (133kB)
Collecting placebo>=0.8.1 (from kappa==0.6.0->zappa->-r requirements.txt (line 6))
Downloading https://files.pythonhosted.org/packages/4e/67/cdd143e5cb486b33a0b43c5cdfe96c72f0f5dc0c2a2b5efebd4a47d703c1/placebo-0.8.2.tar.gz
Collecting Unidecode>=0.04.16 (from python-slugify==1.2.4->zappa->-r requirements.txt (line 6))
Downloading https://files.pythonhosted.org/packages/31/39/53096f9217b057cb049fe872b7fc7ce799a1a89b76cf917d9639e7a558b5/Unidecode-1.0.23-py2.py3-none-any.whl (237kB)
Building wheels for collected packages: PyYAML, lambda-packages, troposphere, future, wsgi-request-logger, kappa, durationpy, hjson, placebo
Running setup.py bdist_wheel for PyYAML: started
Running setup.py bdist_wheel for PyYAML: finished with status 'done'
Stored in directory: /home/travis/.cache/pip/wheels/ad/da/0c/74eb680767247273e2cf2723482cb9c924fe70af57c334513f
Running setup.py bdist_wheel for lambda-packages: started
Running setup.py bdist_wheel for lambda-packages: finished with status 'done'
Stored in directory: /home/travis/.cache/pip/wheels/c3/88/27/fdd463e4ada229ba4874ef3c38bea268d91bddcdf0ea69cb71
Running setup.py bdist_wheel for troposphere: started
Running setup.py bdist_wheel for troposphere: finished with status 'done'
Stored in directory: /home/travis/.cache/pip/wheels/c2/da/bc/f353d6852cfbb312db95f9dbaf578eaaa3f3863e8d5d2e586f
Running setup.py bdist_wheel for future: started
Running setup.py bdist_wheel for future: finished with status 'done'
Stored in directory: /home/travis/.cache/pip/wheels/bf/c9/a3/c538d90ef17cf7823fa51fc701a7a7a910a80f6a405bf15b1a
Running setup.py bdist_wheel for wsgi-request-logger: started
Running setup.py bdist_wheel for wsgi-request-logger: finished with status 'done'
Stored in directory: /home/travis/.cache/pip/wheels/6b/aa/05/d92d33e020a66e347ddd04ec3a57dd5b2c33d14616d675bd43
Running setup.py bdist_wheel for kappa: started
Running setup.py bdist_wheel for kappa: finished with status 'done'
Stored in directory: /home/travis/.cache/pip/wheels/fe/34/ec/23a576ff864ede8a44f06062b7b6183b11479b860eea67f72b
Running setup.py bdist_wheel for durationpy: started
Running setup.py bdist_wheel for durationpy: finished with status 'done'
Stored in directory: /home/travis/.cache/pip/wheels/00/cd/8d/5089f745ef355c25c8642b3d8ab6ffb2bb958b91eb13c49c90
Running setup.py bdist_wheel for hjson: started
Running setup.py bdist_wheel for hjson: finished with status 'done'
Stored in directory: /home/travis/.cache/pip/wheels/34/2a/5b/254bcb7475d861a2bdc1f8f32f9924734b4e045a7c8dd596ae
Running setup.py bdist_wheel for placebo: started
Running setup.py bdist_wheel for placebo: finished with status 'done'
Stored in directory: /home/travis/.cache/pip/wheels/85/4f/60/7cf98f842649810ac29f0b6b51684621c60e9a6f997aa552e1
Successfully built PyYAML lambda-packages troposphere future wsgi-request-logger kappa durationpy hjson placebo
Installing collected packages: docutils, urllib3, jmespath, python-dateutil, botocore, s3transfer, boto3, pytz, pandas, Werkzeug, xxhash, click, itsdangerous, MarkupSafe, Jinja2, Flask, PyYAML, lambda-packages, tqdm, cfn-flip, troposphere, idna, certifi, chardet, requests, future, toml, wsgi-request-logger, argcomplete, placebo, kappa, durationpy, Unidecode, python-slugify, hjson, zappa, pyarrow
Successfully installed Flask-1.0.2 Jinja2-2.10 MarkupSafe-1.1.0 PyYAML-3.13 Unidecode-1.0.23 Werkzeug-0.14.1 argcomplete-1.9.3 boto3-1.9.66 botocore-1.12.71 certifi-2018.11.29 cfn-flip-1.1.0.post1 chardet-3.0.4 click-7.0 docutils-0.14 durationpy-0.5 future-0.16.0 hjson-3.0.1 idna-2.8 itsdangerous-1.1.0 jmespath-0.9.3 kappa-0.6.0 lambda-packages-0.20.0 pandas-0.22.0 placebo-0.8.2 pyarrow-0.11.1 python-dateutil-2.7.5 python-slugify-1.2.4 pytz-2018.7 requests-2.21.0 s3transfer-0.1.13 toml-0.10.0 tqdm-4.19.1 troposphere-2.3.4 urllib3-1.24.1 wsgi-request-logger-0.4.6 xxhash-1.3.0 zappa-0.47.1
travis_time:end:13e7689c:start=1545521727877395793,finish=1545521779755813937,duration=51878418144
[0Ktravis_fold:end:install
[0Ktravis_time:start:0cbe1cc4
[0K$ echo "all tests passing"
all tests passing
travis_time:end:0cbe1cc4:start=1545521779760698848,finish=1545521779763699315,duration=3000467
[0K[32;1mThe command "echo "all tests passing"" exited with 0.[0m
travis_fold:start:after_success.1
[0Ktravis_time:start:056ad326
[0K$ pip install -r requirements.txt
Requirement already satisfied: boto3==1.9.66 in /home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages (from -r requirements.txt (line 1))
Requirement already satisfied: pandas==0.22.0 in /home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages (from -r requirements.txt (line 2))
Requirement already satisfied: Werkzeug==0.14.1 in /home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages (from -r requirements.txt (line 3))
Requirement already satisfied: xxhash==1.3.0 in /home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages (from -r requirements.txt (line 4))
Requirement already satisfied: Flask==1.0.2 in /home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages (from -r requirements.txt (line 5))
Requirement already satisfied: zappa in /home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages (from -r requirements.txt (line 6))
Requirement already satisfied: pyarrow in /home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages (from -r requirements.txt (line 7))
Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages (from boto3==1.9.66->-r requirements.txt (line 1))
Requirement already satisfied: botocore<1.13.0,>=1.12.66 in /home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages (from boto3==1.9.66->-r requirements.txt (line 1))
Requirement already satisfied: s3transfer<0.2.0,>=0.1.10 in /home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages (from boto3==1.9.66->-r requirements.txt (line 1))
Requirement already satisfied: numpy>=1.9.0 in /home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages (from pandas==0.22.0->-r requirements.txt (line 2))
Requirement already satisfied: python-dateutil>=2 in /home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages (from pandas==0.22.0->-r requirements.txt (line 2))
Requirement already satisfied: pytz>=2011k in /home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages (from pandas==0.22.0->-r requirements.txt (line 2))
Requirement already satisfied: Jinja2>=2.10 in /home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages (from Flask==1.0.2->-r requirements.txt (line 5))
Requirement already satisfied: click>=5.1 in /home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages (from Flask==1.0.2->-r requirements.txt (line 5))
Requirement already satisfied: itsdangerous>=0.24 in /home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages (from Flask==1.0.2->-r requirements.txt (line 5))
Requirement already satisfied: hjson==3.0.1 in /home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages (from zappa->-r requirements.txt (line 6))
Requirement already satisfied: future==0.16.0 in /home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages (from zappa->-r requirements.txt (line 6))
Requirement already satisfied: durationpy==0.5 in /home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages (from zappa->-r requirements.txt (line 6))
Requirement already satisfied: toml>=0.9.4 in /home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages (from zappa->-r requirements.txt (line 6))
Requirement already satisfied: docutils>=0.12 in /home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages (from zappa->-r requirements.txt (line 6))
Requirement already satisfied: troposphere>=1.9.0 in /home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages (from zappa->-r requirements.txt (line 6))
Requirement already satisfied: kappa==0.6.0 in /home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages (from zappa->-r requirements.txt (line 6))
Requirement already satisfied: six>=1.11.0 in /home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages (from zappa->-r requirements.txt (line 6))
Requirement already satisfied: argcomplete==1.9.3 in /home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages (from zappa->-r requirements.txt (line 6))
Requirement already satisfied: pip>=9.0.1 in /home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages (from zappa->-r requirements.txt (line 6))
Requirement already satisfied: requests>=2.20.0 in /home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages (from zappa->-r requirements.txt (line 6))
Requirement already satisfied: tqdm==4.19.1 in /home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages (from zappa->-r requirements.txt (line 6))
Requirement already satisfied: python-slugify==1.2.4 in /home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages (from zappa->-r requirements.txt (line 6))
Requirement already satisfied: wsgi-request-logger==0.4.6 in /home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages (from zappa->-r requirements.txt (line 6))
Requirement already satisfied: lambda-packages==0.20.0 in /home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages (from zappa->-r requirements.txt (line 6))
Requirement already satisfied: PyYAML==3.13 in /home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages (from zappa->-r requirements.txt (line 6))
Requirement already satisfied: wheel>=0.30.0 in /home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages (from zappa->-r requirements.txt (line 6))
Requirement already satisfied: urllib3<1.25,>=1.20; python_version >= "3.4" in /home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages (from botocore<1.13.0,>=1.12.66->boto3==1.9.66->-r requirements.txt (line 1))
Requirement already satisfied: MarkupSafe>=0.23 in /home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages (from Jinja2>=2.10->Flask==1.0.2->-r requirements.txt (line 5))
Requirement already satisfied: cfn-flip>=1.0.2 in /home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages (from troposphere>=1.9.0->zappa->-r requirements.txt (line 6))
Requirement already satisfied: placebo>=0.8.1 in /home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages (from kappa==0.6.0->zappa->-r requirements.txt (line 6))
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages (from requests>=2.20.0->zappa->-r requirements.txt (line 6))
Requirement already satisfied: idna<2.9,>=2.5 in /home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages (from requests>=2.20.0->zappa->-r requirements.txt (line 6))
Requirement already satisfied: certifi>=2017.4.17 in /home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages (from requests>=2.20.0->zappa->-r requirements.txt (line 6))
Requirement already satisfied: Unidecode>=0.04.16 in /home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages (from python-slugify==1.2.4->zappa->-r requirements.txt (line 6))
travis_time:end:056ad326:start=1545521779769256550,finish=1545521780629014544,duration=859757994
[0Ktravis_fold:end:after_success.1
[0Ktravis_fold:start:after_success.2
[0Ktravis_time:start:09f93e59
[0K$ zappa update dev
(python-dateutil 2.7.5 (/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages), Requirement.parse('python-dateutil<2.7.0,>=2.6.1'), {'zappa'})
Calling update for stage dev..
Oh no! An error occurred! :(
==============
==============
Need help? Found a bug? Let us know! :D
File bug reports on GitHub here: https://github.com/Miserlou/Zappa
And join our Slack channel here: https://slack.zappa.io
Love!,
~ Team Zappa!
Traceback (most recent call last):
File "/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages/zappa/cli.py", line 2712, in handle
sys.exit(cli.handle())
File "/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages/zappa/cli.py", line 509, in handle
self.dispatch_command(self.command, stage)
File "/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages/zappa/cli.py", line 535, in dispatch_command
self.load_settings(self.vargs.get('settings_file'))
File "/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages/zappa/cli.py", line 2074, in load_settings
xray_tracing=self.xray_tracing
File "/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages/zappa/core.py", line 293, in __init__
self.load_credentials(boto_session, profile_name)
File "/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages/zappa/core.py", line 2922, in load_credentials
self.boto_session = boto3.Session(profile_name=profile_name, region_name=self.aws_region)
File "/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages/boto3/session.py", line 80, in __init__
self._setup_loader()
File "/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages/boto3/session.py", line 120, in _setup_loader
self._loader = self._session.get_component('data_loader')
File "/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages/botocore/session.py", line 679, in get_component
return self._components.get_component(name)
File "/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages/botocore/session.py", line 902, in get_component
self._components[name] = factory()
File "/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages/botocore/session.py", line 150, in <lambda>
lambda: create_loader(self.get_config_variable('data_path')))
File "/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages/botocore/session.py", line 233, in get_config_variable
logical_name)
File "/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages/botocore/configprovider.py", line 226, in get_config_variable
return provider.provide()
File "/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages/botocore/configprovider.py", line 323, in provide
value = provider.provide()
File "/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages/botocore/configprovider.py", line 382, in provide
config = self._session.get_scoped_config()
File "/home/travis/virtualenv/python3.6.3/lib/python3.6/site-packages/botocore/session.py", line 334, in get_scoped_config
raise ProfileNotFound(profile=profile_name)
botocore.exceptions.ProfileNotFound: The config profile (default) could not be found
```
## Possible Fix
<!--- Not obligatory, but suggest a fix or reason for the bug -->
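Not obligatory, but a frequent cause of `ProfileNotFound: The config profile (default) could not be found` on Travis is that no `~/.aws` config exists on the CI machine, so boto3/Zappa must pick up credentials from environment variables instead of a named profile. A small sketch for checking that before calling `zappa update` (the variable names are the standard boto3 ones; the helper itself is hypothetical):

```python
import os

REQUIRED = ("AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY", "AWS_DEFAULT_REGION")

def missing_aws_env(environ=None):
    """Return the credential variables that are absent from the environment
    (when all are missing, boto3 falls back to the nonexistent 'default' profile)."""
    environ = os.environ if environ is None else environ
    return [name for name in REQUIRED if not environ.get(name)]

# A bare CI environment is missing all three:
print(missing_aws_env({}))  # -> ['AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY', 'AWS_DEFAULT_REGION']
```

Setting those three variables in the Travis repository settings (or removing any `profile_name` from `zappa_settings.json`) is the usual workaround.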
## Steps to Reproduce
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug include code to reproduce, if relevant -->
1.
2.
3.
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Zappa version used: 0.47.1
* Operating System and Python version: Travis Python 3.6
* The output of `pip freeze`:
```
argcomplete==1.9.3
boto3==1.9.66
botocore==1.12.66
brython==3.6.2
certifi==2018.11.29
cfn-flip==1.1.0.post1
chardet==3.0.4
Click==7.0
docutils==0.14
durationpy==0.5
Flask==1.0.2
future==0.16.0
hjson==3.0.1
idna==2.8
itsdangerous==1.1.0
Jinja2==2.10
jiphy==1.2.2
jmespath==0.9.3
kappa==0.6.0
lambda-packages==0.20.0
MarkupSafe==1.1.0
numpy==1.15.4
pandas==0.22.0
placebo==0.8.2
pyarrow==0.11.1
python-dateutil==2.7.5
python-slugify==1.2.4
pytz==2018.7
PyYAML==3.13
requests==2.21.0
s3transfer==0.1.13
six==1.12.0
toml==0.10.0
tqdm==4.19.1
troposphere==2.3.4
Unidecode==1.0.23
urllib3==1.24.1
Werkzeug==0.14.1
wsgi-request-logger==0.4.6
xxhash==1.3.0
zappa==0.47.1
```
* Link to your project (optional):
https://github.com/Benjamin-Lee/squiggle-server
https://travis-ci.org/Benjamin-Lee/squiggle-server
* Your `zappa_settings.py`:
| closed | 2018-12-22T23:51:09Z | 2018-12-23T00:00:54Z | https://github.com/Miserlou/Zappa/issues/1736 | [] | Benjamin-Lee | 1 |
seleniumbase/SeleniumBase | pytest | 2,510 | How to set login to run before executing entire test class | Hello, Michael:
I placed the login so that it runs before the entire test class is executed, but an error is reported when running. What should I do? Thank you.
```python
class MyTestClass(BaseCase):
    @classmethod
    def setUpClass(cls):
        super(MyTestClass, cls).setUpClass()
        cls.open("https://www.baidu.com")
```

Error message:

```
cls = <class 'demo2.MyTestClass'>

    @classmethod
    def setUpClass(cls):
        super(MyTestClass, cls).setUpClass()
>       cls.open("https://www.baidu.com")
E       TypeError: BaseCase.open() missing 1 required positional argument: 'url'
```
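For what it's worth, the `TypeError` comes from calling an instance method on the class: inside `setUpClass` there is no instance yet, so the URL string binds to `self` and the `url` parameter is left unfilled. A dependency-free stub reproducing the mechanics (`FakeBase` is illustrative, not the real `BaseCase`):

```python
class FakeBase:
    """Stand-in for seleniumbase.BaseCase, just enough to show the error."""
    def open(self, url):
        return f"opened {url}"

# Calling the instance method on the class binds the URL string to `self`,
# leaving `url` missing, exactly as in the reported traceback:
try:
    FakeBase.open("https://www.baidu.com")
except TypeError as exc:
    print(exc)  # ... missing 1 required positional argument: 'url'
```

The usual workaround is to do the login inside `setUp()` (which runs per test, after the browser exists) rather than in `setUpClass`, since `self.open()` needs the live browser that `BaseCase.setUp()` starts.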
"duplicate",
"question"
] | chenhaijun02 | 1 |
junyanz/pytorch-CycleGAN-and-pix2pix | deep-learning | 1,167 | Fine tuning a downloaded pre-trained cyclegan model | Hello,
First of all, thanks for this amazing repo!
After going through tips&tricks, and the first few pages of issues I haven't found out how I can start with one of your pretrained cyclegan models and then resume training on my own dataset.
Specifically, it seems that for a given pretrained model (here style_monet_pretrained) we can only download the generator Photo -> Monet. By downloading monet2photo I guess it is possible to access the second generator Monet -> Photo. However, to resume training D_A and D_B are necessary. Are these available anywhere? Is it possible to resume training with slightly different datasets on any of your pretrained, available, models?
I'm trying to obtain decent results with style transfer on 360° images for visualization in VR. I've tried just applying your already pretrained models with the right preprocessing and it works quite well but I was thinking of resuming training with 360° images for the photos database as this might get better results. I'm not sure it's feasible to carry out the training process entirely in a reasonable timeframe as this is a personal project and I'm running it on my own computer equipped with just the one gpu and a relatively small 360° image database.
Many thanks, | open | 2020-10-21T10:31:55Z | 2020-10-30T09:31:32Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1167 | [] | SebastianPartarrieu | 0 |
jadore801120/attention-is-all-you-need-pytorch | nlp | 218 | when I run the `python preprocess.py -lang_src de -lang_trg en -share_vocab -save_data m30k_deen_shr.pkl`.I have faced a problem | When I run `python preprocess.py -lang_src de -lang_trg en -share_vocab -save_data m30k_deen_shr.pkl`, I face a problem:

```
Namespace(data_src=None, data_trg=None, keep_case=False, lang_src='de', lang_trg='en', max_len=100, min_word_count=3, save_data='m30k_deen_shr.pkl', share_vocab=True)
Traceback (most recent call last):
  File "preprocess.py", line 335, in <module>
    main_wo_bpe()
  File "preprocess.py", line 270, in main_wo_bpe
    src_lang_model = spacy.load(opt.lang_src)
  File "E:\app\aconda\envs\att\lib\site-packages\spacy\__init__.py", line 30, in load
    return util.load_model(name, **overrides)
  File "E:\app\aconda\envs\att\lib\site-packages\spacy\util.py", line 175, in load_model
    raise IOError(Errors.E050.format(name=name))
OSError: [E050] Can't find model 'de'. It doesn't seem to be a shortcut link, a Python package or a valid path to a data directory.
```
How can I solve it? | open | 2024-02-09T02:20:59Z | 2024-03-28T08:38:29Z | https://github.com/jadore801120/attention-is-all-you-need-pytorch/issues/218 | [] | dapaolufuduizhang | 1 |
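For context on the E050 error above: newer spaCy releases removed shortcut links like `de`/`en`, so the usual fix is to install and load the full pipeline names (e.g. `python -m spacy download de_core_news_sm`). A tiny sketch of mapping the script's `-lang_src`/`-lang_trg` shortcuts to installable model names (the mapping is an assumption based on the standard small pipelines, not part of the repo):

```python
# spaCy >= 3 no longer resolves shortcut links such as "de" or "en".
SHORTCUT_TO_MODEL = {
    "de": "de_core_news_sm",
    "en": "en_core_web_sm",
}

def resolve_model_name(lang):
    """Translate a legacy shortcut into a full, installable pipeline name."""
    return SHORTCUT_TO_MODEL.get(lang, lang)

print(resolve_model_name("de"))  # -> de_core_news_sm
```

With the model downloaded, `spacy.load(resolve_model_name(opt.lang_src))` would then succeed where `spacy.load("de")` fails.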
apify/crawlee-python | automation | 78 | Implement session management | - Implement the initial version.
- Session management in TS Crawlee - https://github.com/apify/crawlee/tree/v3.8.2/packages/core/src/session_pool | closed | 2024-03-25T16:26:11Z | 2024-04-15T15:14:23Z | https://github.com/apify/crawlee-python/issues/78 | [
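The linked TS implementation revolves around a pool of rotating sessions with error scoring; a toy Python sketch of that shape (all names, limits, and the random-choice policy here are illustrative, not the eventual crawlee-python API):

```python
import random

class Session:
    def __init__(self, session_id, max_error_score=3):
        self.id = session_id
        self.error_score = 0
        self.max_error_score = max_error_score

    @property
    def is_usable(self):
        # A session rotates out once its error score crosses the threshold.
        return self.error_score < self.max_error_score

    def mark_bad(self):
        self.error_score += 1

class SessionPool:
    """Hands out random usable sessions; failing ones rotate out via scoring."""
    def __init__(self, max_pool_size=10):
        self._sessions = [Session(i) for i in range(max_pool_size)]

    def get_session(self):
        usable = [s for s in self._sessions if s.is_usable]
        return random.choice(usable) if usable else None
```

In the real implementation the pool would also retire and replenish sessions and persist cookies, as the TS version does.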
"enhancement",
"t-tooling"
] | vdusek | 0 |
graphql-python/graphene | graphql | 818 | Search by docs | Hello! Could you please enable search by documentation? | closed | 2018-08-27T20:30:14Z | 2018-09-10T21:58:54Z | https://github.com/graphql-python/graphene/issues/818 | [
"📖 documentation"
] | oleksandr-kuzmenko | 3 |
polarsource/polar | fastapi | 5,145 | GitHub Benefit: Showing wrong information re: billing of collaborators for a free organization | Currently, we show the warning of GitHub seat pricing for free organizations with copy that makes it sound like it impacts them when it only impacts paid organizations | closed | 2025-03-03T09:53:21Z | 2025-03-03T12:56:20Z | https://github.com/polarsource/polar/issues/5145 | [
"bug"
] | birkjernstrom | 0 |
modAL-python/modAL | scikit-learn | 157 | decision_function instead of predict_proba | Several non-probabilistic estimators, such as SVMs in particular, can be used with uncertainty sampling. Scikit-Learn estimators that support the decision_function method can be used with the closest-to-hyperplane selection algorithm [[Bloodgood]](https://arxiv.org/pdf/1801.07875.pdf). This is actually a very popular strategy in AL research and would be very easy to implement. | open | 2022-04-21T12:42:48Z | 2022-05-03T16:15:18Z | https://github.com/modAL-python/modAL/issues/157 | [] | lkurlandski | 5 |
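As a sketch of how little code the closest-to-hyperplane strategy needs on top of scikit-learn's `decision_function` (binary case only; the query-function signature below only approximates modAL's conventions):

```python
def closest_to_hyperplane(classifier, X_pool, n_instances=1):
    """Rank pool samples by |decision_function| and return the indices of
    the n_instances samples closest to the separating hyperplane."""
    margins = [abs(m) for m in classifier.decision_function(X_pool)]
    order = sorted(range(len(margins)), key=margins.__getitem__)
    return order[:n_instances]

# Stub standing in for a margin classifier such as sklearn.svm.LinearSVC:
class StubSVM:
    def decision_function(self, X):
        return [2.0, -0.1, 0.5]

print(closest_to_hyperplane(StubSVM(), X_pool=None, n_instances=2))  # -> [1, 2]
```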
tflearn/tflearn | data-science | 795 | AttributeError: module 'tflearn' has no attribute 'get_layer_variables_by_scope' in gan.py | Traceback (most recent call last):
File "gan.py", line 60, in <module>
gen_vars = tflearn.get_layer_variables_by_scope('Generator')
AttributeError: module 'tflearn' has no attribute 'get_layer_variables_by_scope' | open | 2017-06-15T13:44:27Z | 2017-06-15T19:02:16Z | https://github.com/tflearn/tflearn/issues/795 | [] | forhonourlx | 1 |
pydantic/logfire | fastapi | 300 | FastAPI integration error | ### Description
I apologize if this belongs in the FastAPI issues instead of logfire. I'm not really sure who is the culprit here.
I'm attaching a [sample project](https://github.com/user-attachments/files/16090524/example.zip) to demonstrate an error when `logfire[fastapi]` is added to a FastAPI project.
> **Note:** forcing a downgrade to pydantic v1 fixes the issue _(or removing logfire all together)_
### Error Reproduction
A simple API without any input works:
```shell
curl "http://localhost:8000/hello/"
```
Calling an API with a pydantic model as input fails:
```shell
curl -X POST "http://localhost:8000/test/" -H "Content-Type: application/json" -d '{"name": "test"}'
````
Error log:
```
INFO: 127.0.0.1:58776 - "POST /test/ HTTP/1.1" 500 Internal Server Error
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/acme/fastapi-pydantic-error/.venv/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 399, in run_asgi
result = await app( # type: ignore[func-returns-value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/acme/fastapi-pydantic-error/.venv/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 70, in __call__
return await self.app(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/acme/fastapi-pydantic-error/.venv/lib/python3.11/site-packages/fastapi/applications.py", line 1054, in __call__
await super().__call__(scope, receive, send)
File "/acme/fastapi-pydantic-error/.venv/lib/python3.11/site-packages/starlette/applications.py", line 123, in __call__
await self.middleware_stack(scope, receive, send)
File "/acme/fastapi-pydantic-error/.venv/lib/python3.11/site-packages/starlette/middleware/errors.py", line 186, in __call__
raise exc
File "/acme/fastapi-pydantic-error/.venv/lib/python3.11/site-packages/starlette/middleware/errors.py", line 164, in __call__
await self.app(scope, receive, _send)
File "/acme/fastapi-pydantic-error/.venv/lib/python3.11/site-packages/opentelemetry/instrumentation/asgi/__init__.py", line 631, in __call__
await self.app(scope, otel_receive, otel_send)
File "/acme/fastapi-pydantic-error/.venv/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 65, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "/acme/fastapi-pydantic-error/.venv/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/acme/fastapi-pydantic-error/.venv/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/acme/fastapi-pydantic-error/.venv/lib/python3.11/site-packages/starlette/routing.py", line 756, in __call__
await self.middleware_stack(scope, receive, send)
File "/acme/fastapi-pydantic-error/.venv/lib/python3.11/site-packages/starlette/routing.py", line 776, in app
await route.handle(scope, receive, send)
File "/acme/fastapi-pydantic-error/.venv/lib/python3.11/site-packages/starlette/routing.py", line 297, in handle
await self.app(scope, receive, send)
File "/acme/fastapi-pydantic-error/.venv/lib/python3.11/site-packages/starlette/routing.py", line 77, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "/acme/fastapi-pydantic-error/.venv/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/acme/fastapi-pydantic-error/.venv/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/acme/fastapi-pydantic-error/.venv/lib/python3.11/site-packages/starlette/routing.py", line 72, in app
response = await func(request)
^^^^^^^^^^^^^^^^^^^
File "/acme/fastapi-pydantic-error/.venv/lib/python3.11/site-packages/fastapi/routing.py", line 269, in app
solved_result = await solve_dependencies(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/acme/fastapi-pydantic-error/.venv/lib/python3.11/site-packages/logfire/_internal/integrations/fastapi.py", line 111, in patched_solve_dependencies
return await instrumentation.solve_dependencies(request, original)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/acme/fastapi-pydantic-error/.venv/lib/python3.11/site-packages/logfire/_internal/integrations/fastapi.py", line 173, in solve_dependencies
result = await original
^^^^^^^^^^^^^^
File "/acme/fastapi-pydantic-error/.venv/lib/python3.11/site-packages/fastapi/dependencies/utils.py", line 628, in solve_dependencies
) = await request_body_to_args( # body_params checked above
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/acme/fastapi-pydantic-error/.venv/lib/python3.11/site-packages/fastapi/dependencies/utils.py", line 758, in request_body_to_args
v_, errors_ = field.validate(value, values, loc=loc)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/acme/fastapi-pydantic-error/.venv/lib/python3.11/site-packages/fastapi/_compat.py", line 127, in validate
self._type_adapter.validate_python(value, from_attributes=True),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/acme/fastapi-pydantic-error/.venv/lib/python3.11/site-packages/pydantic/type_adapter.py", line 142, in wrapped
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/acme/fastapi-pydantic-error/.venv/lib/python3.11/site-packages/pydantic/type_adapter.py", line 373, in validate_python
return self.validator.validate_python(object, strict=strict, from_attributes=from_attributes, context=context)
^^^^^^^^^^^^^^
File "/Users/mcantrell/.pyenv/versions/3.11.9/lib/python3.11/functools.py", line 1001, in __get__
val = self.func(instance)
^^^^^^^^^^^^^^^^^^^
File "/acme/fastapi-pydantic-error/.venv/lib/python3.11/site-packages/pydantic/type_adapter.py", line 142, in wrapped
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/acme/fastapi-pydantic-error/.venv/lib/python3.11/site-packages/pydantic/type_adapter.py", line 318, in validator
assert isinstance(self._validator, SchemaValidator)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError
```
To fix the problem, either remove logfire completely or downgrade pydantic to v1.
### Python, Logfire & OS Versions, related packages (not required)
```TOML
requests="2.32.3"
pydantic="2.8.0"
fastapi="0.111.0"
protobuf="4.25.3"
rich="13.7.1"
executing="2.0.1"
opentelemetry-api="1.25.0"
opentelemetry-exporter-otlp-proto-common="1.25.0"
opentelemetry-exporter-otlp-proto-http="1.25.0"
opentelemetry-instrumentation="0.46b0"
opentelemetry-instrumentation-asgi="0.46b0"
opentelemetry-instrumentation-fastapi="0.46b0"
opentelemetry-proto="1.25.0"
opentelemetry-sdk="1.25.0"
opentelemetry-semantic-conventions="0.46b0"
opentelemetry-util-http="0.46b0"
```
| closed | 2024-07-03T20:07:53Z | 2024-07-04T10:33:35Z | https://github.com/pydantic/logfire/issues/300 | [
"bug"
] | mcantrell | 4 |
miguelgrinberg/microblog | flask | 209 | CH17 Vagrant Issue | Hello. I'm having difficulty figuring out what the precise sequence of commands is necessary to log into the Vagrant box as the `ubuntu` user. I've rebuilt/destroyed the box a number of times while trying to get this to work.
- `vagrant up`
- `vagrant ssh`
- `(vagrant) ssh ubuntu@192.168.33.10`
This attempt to log into the ubuntu user shoots out a somewhat expected `Permission denied (publickey).` error; with some cursory googling a number of solutions presented suggest deleting the default `ubuntu` user and making a new one to overcome this, or writing an entire new Vagrantfile configuration to work around `ssh` logins.
The phrasing at the top of the 'Password-less Logins' sections suggests there's no other configuration required, unless I've totally dropped the ball on the intended workflow for logging in as the 'ubuntu' user.
In short, I have no clue if I'm meant *find* the SSH keys (if they're provided), or if I'm meant to generate them from the Vagrant box and add them to the `ubuntu` user without being able to access the account. | closed | 2020-03-03T23:30:34Z | 2020-03-04T18:17:54Z | https://github.com/miguelgrinberg/microblog/issues/209 | [
"question"
] | ADubhlaoich | 2 |
adbar/trafilatura | web-scraping | 488 | Extract more text | For the URL "https://www.aia.com/en/health-wellness/healthy-living/healthy-mind/Managing-financial-stress", I use:

```python
downloaded = trafilatura.fetch_url(url)
trafilatura.bare_extraction(downloaded, url=url)
```

I get the text, and it is a good result. However, it only contains the text under item 1., while the page has items 1. 2. 3. 4. 5.
Even though I used favor_recall=True, nothing changed.
Thank you, by the way, for this library; it really is better than bs4!
"bug"
] | vulinh48936 | 6 |
Lightning-AI/pytorch-lightning | pytorch | 20,465 | Stop renaming everything, you're annoying. Fix the names of the classes and don't rename them. | ### Bug description
description in title
### What version are you seeing the problem on?
v1.x
### How to reproduce the bug
```python
```
### Error messages and logs
```
# Error messages and logs here please
Support for `training_epoch_end` has been removed in v2.0.0. `ResnetTT` implements this method. You can use the `on_train_epoch_end` hook instead. To access outputs, save them in-memory as instance attributes. You can find migration examples in https://github.com/Lightning-AI/lightning/pull/16520.
```
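The deprecation message points at the standard 2.0 migration: buffer step outputs on the module yourself and consume them in `on_train_epoch_end`. A dependency-free stub of the pattern (the class and the loss computation are illustrative, not from the reporter's `ResnetTT`):

```python
class LitModel:
    """Minimal stand-in for a LightningModule showing the 2.0 hook shape."""
    def __init__(self):
        self._step_outputs = []

    def training_step(self, batch, batch_idx):
        loss = float(batch)                 # stand-in for the real loss
        self._step_outputs.append(loss)     # save outputs as instance state
        return loss

    def on_train_epoch_end(self):           # replaces training_epoch_end(outputs)
        avg = sum(self._step_outputs) / len(self._step_outputs)
        self._step_outputs.clear()          # free memory for the next epoch
        return avg

m = LitModel()
m.training_step(1, 0)
m.training_step(3, 1)
print(m.on_train_epoch_end())  # -> 2.0
```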
### Environment
<details>
<summary>Current environment</summary>
```
#- PyTorch Lightning Version (e.g., 2.4.0):
#- PyTorch Version (e.g., 2.4):
#- Python version (e.g., 3.12):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source):
```
</details>
### More info
I don't understand what you were thinking when you renamed `training_epoch_end` to `on_train_epoch_end`. Are you crazy? Do you understand how many version problems this creates? What kind of idiot do you have to be to do such crap | closed | 2024-12-04T14:16:50Z | 2024-12-04T15:57:06Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20465 | [
"bug",
"needs triage"
] | vadinabronin | 1 |
feder-cr/Jobs_Applier_AI_Agent_AIHawk | automation | 619 | [BUG]: It is skipping job applications | ### Describe the bug
logs
```zsh
2024-10-26 16:58:38.973 | DEBUG | src.aihawk_job_manager:write_to_file:386 - Writing job application result to file: skipped
2024-10-26 16:58:39.003 | DEBUG | src.aihawk_job_manager:write_to_file:413 - Job data appended to existing file: skipped
2024-10-26 16:58:39.004 | DEBUG | src.aihawk_job_manager:apply_jobs:310 - Starting applicant for job: Java Developer at Cartney
2024-10-26 16:58:39.004 | DEBUG | src.aihawk_job_manager:is_blacklisted:472 - Checking if job is blacklisted: Java Developer at Cartney in Denver, CO (On-site)
2024-10-26 16:58:39.004 | DEBUG | src.aihawk_job_manager:is_blacklisted:479 - Job blacklisted status: False
2024-10-26 16:58:39.005 | DEBUG | src.aihawk_job_manager:is_already_applied_to_company:502 - Already applied at Cartney (once per company policy), skipping...
2024-10-26 16:58:39.006 | DEBUG | src.aihawk_job_manager:write_to_file:386 - Writing job application result to file: skipped
2024-10-26 16:58:39.036 | DEBUG | src.aihawk_job_manager:write_to_file:413 - Job data appended to existing file: skipped
2024-10-26 16:58:39.036 | DEBUG | src.aihawk_job_manager:apply_jobs:310 - Starting applicant for job: Senior Full Stack Developer at CoExperiences
2024-10-26 16:58:39.036 | DEBUG | src.aihawk_job_manager:is_blacklisted:472 - Checking if job is blacklisted: Senior Full Stack Developer at CoExperiences in United States (Remote)
2024-10-26 16:58:39.036 | DEBUG | src.aihawk_job_manager:is_blacklisted:479 - Job blacklisted status: False
2024-10-26 16:58:39.038 | DEBUG | src.aihawk_job_manager:is_already_applied_to_company:502 - Already applied at CoExperiences (once per company policy), skipping...
2024-10-26 16:58:39.038 | DEBUG | src.aihawk_job_manager:write_to_file:386 - Writing job application result to file: skipped
2024-10-26 16:58:39.067 | DEBUG | src.aihawk_job_manager:write_to_file:413 - Job data appended to existing file: skipped
2024-10-26 16:58:39.068 | DEBUG | src.aihawk_job_manager:apply_jobs:310 - Starting applicant for job: Senior Software Developer at Aegis Mobile
2024-10-26 16:58:39.068 | DEBUG | src.aihawk_job_manager:is_blacklisted:472 - Checking if job is blacklisted: Senior Software Developer at Aegis Mobile in Mobile Metropolitan Area (Hybrid)
2024-10-26 16:58:39.068 | DEBUG | src.aihawk_job_manager:is_blacklisted:479 - Job blacklisted status: False
2024-10-26 16:58:39.069 | DEBUG | src.aihawk_job_manager:is_already_applied_to_company:502 - Already applied at Aegis Mobile (once per company policy), skipping...
2024-10-26 16:58:39.069 | DEBUG | src.aihawk_job_manager:write_to_file:386 - Writing job application result to file: skipped
2024-10-26 16:58:39.098 | DEBUG | src.aihawk_job_manager:write_to_file:413 - Job data appended to existing file: skipped
2024-10-26 16:58:39.098 | DEBUG | src.aihawk_job_manager:apply_jobs:310 - Starting applicant for job: Core Java Backend Developer at BCforward
2024-10-26 16:58:39.098 | DEBUG | src.aihawk_job_manager:is_blacklisted:472 - Checking if job is blacklisted: Core Java Backend Developer at BCforward in Chicago, IL (Hybrid)
2024-10-26 16:58:39.098 | DEBUG | src.aihawk_job_manager:is_blacklisted:479 - Job blacklisted status: False
2024-10-26 16:58:39.099 | DEBUG | src.aihawk_job_manager:is_already_applied_to_company:502 - Already applied at BCforward (once per company policy), skipping...
2024-10-26 16:58:39.100 | DEBUG | src.aihawk_job_manager:write_to_file:386 - Writing job application result to file: skipped
2024-10-26 16:58:39.129 | DEBUG | src.aihawk_job_manager:write_to_file:413 - Job data appended to existing file: skipped
2024-10-26 16:58:39.129 | DEBUG | src.aihawk_job_manager:apply_jobs:310 - Starting applicant for job: Java fullstack Developer at ClifyX
2024-10-26 16:58:39.129 | DEBUG | src.aihawk_job_manager:is_blacklisted:472 - Checking if job is blacklisted: Java fullstack Developer at ClifyX in Jersey City, NJ (Hybrid)
2024-10-26 16:58:39.130 | DEBUG | src.aihawk_job_manager:is_blacklisted:479 - Job blacklisted status: False
2024-10-26 16:58:39.131 | DEBUG | src.aihawk_job_manager:is_already_applied_to_company:502 - Already applied at ClifyX (once per company policy), skipping...
2024-10-26 16:58:39.131 | DEBUG | src.aihawk_job_manager:write_to_file:386 - Writing job application result to file: skipped
2024-10-26 16:58:39.158 | DEBUG | src.aihawk_job_manager:write_to_file:413 - Job data appended to existing file: skipped
2024-10-26 16:58:39.159 | DEBUG | src.aihawk_job_manager:apply_jobs:310 - Starting applicant for job: Senior Software Engineer (Fullstack) - Denver/Colorado at Vorto
2024-10-26 16:58:39.159 | DEBUG | src.aihawk_job_manager:is_blacklisted:472 - Checking if job is blacklisted: Senior Software Engineer (Fullstack) - Denver/Colorado at Vorto in Denver, CO (On-site)
2024-10-26 16:58:39.159 | DEBUG | src.aihawk_job_manager:is_blacklisted:479 - Job blacklisted status: False
2024-10-26 16:58:39.160 | DEBUG | src.aihawk_job_manager:is_already_applied_to_company:502 - Already applied at Vorto (once per company policy), skipping...
2024-10-26 16:58:39.160 | DEBUG | src.aihawk_job_manager:write_to_file:386 - Writing job application result to file: skipped
2024-10-26 16:58:39.189 | DEBUG | src.aihawk_job_manager:write_to_file:413 - Job data appended to existing file: skipped
2024-10-26 16:58:39.190 | DEBUG | src.aihawk_job_manager:apply_jobs:310 - Starting applicant for job: Java-fullstack UI developer at Tekgence Inc
2024-10-26 16:58:39.190 | DEBUG | src.aihawk_job_manager:is_blacklisted:472 - Checking if job is blacklisted: Java-fullstack UI developer at Tekgence Inc in Sunrise, FL (On-site)
2024-10-26 16:58:39.190 | DEBUG | src.aihawk_job_manager:is_blacklisted:479 - Job blacklisted status: False
2024-10-26 16:58:39.191 | DEBUG | src.aihawk_job_manager:is_already_applied_to_company:502 - Already applied at Tekgence Inc (once per company policy), skipping...
2024-10-26 16:58:39.191 | DEBUG | src.aihawk_job_manager:write_to_file:386 - Writing job application result to file: skipped
2024-10-26 16:58:39.219 | DEBUG | src.aihawk_job_manager:write_to_file:413 - Job data appended to existing file: skipped
2024-10-26 16:58:39.219 | DEBUG | src.aihawk_job_manager:apply_jobs:310 - Starting applicant for job: Full Stack Java Developer at ASK Consulting
2024-10-26 16:58:39.219 | DEBUG | src.aihawk_job_manager:is_blacklisted:472 - Checking if job is blacklisted: Full Stack Java Developer at ASK Consulting in Alpharetta, GA (On-site)
2024-10-26 16:58:39.219 | DEBUG | src.aihawk_job_manager:is_blacklisted:479 - Job blacklisted status: False
2024-10-26 16:58:39.221 | DEBUG | src.aihawk_job_manager:is_already_applied_to_company:502 - Already applied at ASK Consulting (once per company policy), skipping...
2024-10-26 16:58:39.221 | DEBUG | src.aihawk_job_manager:write_to_file:386 - Writing job application result to file: skipped
2024-10-26 16:58:39.249 | DEBUG | src.aihawk_job_manager:write_to_file:413 - Job data appended to existing file: skipped
2024-10-26 16:58:39.250 | DEBUG | src.aihawk_job_manager:apply_jobs:310 - Starting applicant for job: Senior AWS Fullstack Developer at Hexaware Technologies
2024-10-26 16:58:39.250 | DEBUG | src.aihawk_job_manager:is_blacklisted:472 - Checking if job is blacklisted: Senior AWS Fullstack Developer at Hexaware Technologies in Reston, VA (On-site)
2024-10-26 16:58:39.250 | DEBUG | src.aihawk_job_manager:is_blacklisted:479 - Job blacklisted status: False
2024-10-26 16:58:39.251 | DEBUG | src.aihawk_job_manager:is_already_applied_to_company:502 - Already applied at Hexaware Technologies (once per company policy), skipping...
2024-10-26 16:58:39.251 | DEBUG | src.aihawk_job_manager:write_to_file:386 - Writing job application result to file: skipped
2024-10-26 16:58:39.280 | DEBUG | src.aihawk_job_manager:write_to_file:413 - Job data appended to existing file: skipped
2024-10-26 16:58:39.281 | DEBUG | src.aihawk_job_manager:apply_jobs:310 - Starting applicant for job: Full Stack Engineer at Milestone Technologies, Inc.
2024-10-26 16:58:39.281 | DEBUG | src.aihawk_job_manager:is_blacklisted:472 - Checking if job is blacklisted: Full Stack Engineer at Milestone Technologies, Inc. in Orlando, FL (On-site)
2024-10-26 16:58:39.281 | DEBUG | src.aihawk_job_manager:is_blacklisted:479 - Job blacklisted status: False
2024-10-26 16:58:39.282 | DEBUG | src.aihawk_job_manager:is_already_applied_to_company:502 - Already applied at Milestone Technologies, Inc. (once per company policy), skipping...
2024-10-26 16:58:39.282 | DEBUG | src.aihawk_job_manager:write_to_file:386 - Writing job application result to file: skipped
2024-10-26 16:58:39.312 | DEBUG | src.aihawk_job_manager:write_to_file:413 - Job data appended to existing file: skipped
2024-10-26 16:58:39.312 | DEBUG | src.aihawk_job_manager:apply_jobs:310 - Starting applicant for job: Back End Developer at Accelon Inc.
2024-10-26 16:58:39.312 | DEBUG | src.aihawk_job_manager:is_blacklisted:472 - Checking if job is blacklisted: Back End Developer at Accelon Inc. in United States (Remote)
2024-10-26 16:58:39.312 | DEBUG | src.aihawk_job_manager:is_blacklisted:479 - Job blacklisted status: False
2024-10-26 16:58:39.313 | DEBUG | src.aihawk_job_manager:is_already_applied_to_company:502 - Already applied at Accelon Inc. (once per company policy), skipping...
2024-10-26 16:58:39.313 | DEBUG | src.aihawk_job_manager:write_to_file:386 - Writing job application result to file: skipped
2024-10-26 16:58:39.341 | DEBUG | src.aihawk_job_manager:write_to_file:413 - Job data appended to existing file: skipped
2024-10-26 16:58:39.342 | DEBUG | src.aihawk_job_manager:apply_jobs:310 - Starting applicant for job: Java Full Stack Developer at Servsys Corporation
2024-10-26 16:58:39.342 | DEBUG | src.aihawk_job_manager:is_blacklisted:472 - Checking if job is blacklisted: Java Full Stack Developer at Servsys Corporation in Pasadena, CA (On-site)
2024-10-26 16:58:39.342 | DEBUG | src.aihawk_job_manager:is_blacklisted:479 - Job blacklisted status: False
2024-10-26 16:58:39.343 | DEBUG | src.aihawk_job_manager:is_already_applied_to_company:502 - Already applied at Servsys Corporation (once per company policy), skipping...
2024-10-26 16:58:39.343 | DEBUG | src.aihawk_job_manager:write_to_file:386 - Writing job application result to file: skipped
2024-10-26 16:58:39.371 | DEBUG | src.aihawk_job_manager:write_to_file:413 - Job data appended to existing file: skipped
2024-10-26 16:58:39.372 | DEBUG | src.aihawk_job_manager:apply_jobs:310 - Starting applicant for job: Senior Python Developer at Quantiphi
2024-10-26 16:58:39.372 | DEBUG | src.aihawk_job_manager:is_blacklisted:472 - Checking if job is blacklisted: Senior Python Developer at Quantiphi in Nashville, TN (On-site)
2024-10-26 16:58:39.372 | DEBUG | src.aihawk_job_manager:is_blacklisted:479 - Job blacklisted status: False
2024-10-26 16:58:39.373 | DEBUG | src.aihawk_job_manager:is_already_applied_to_company:502 - Already applied at Quantiphi (once per company policy), skipping...
2024-10-26 16:58:39.373 | DEBUG | src.aihawk_job_manager:write_to_file:386 - Writing job application result to file: skipped
2024-10-26 16:58:39.401 | DEBUG | src.aihawk_job_manager:write_to_file:413 - Job data appended to existing file: skipped
2024-10-26 16:58:39.401 | DEBUG | src.aihawk_job_manager:apply_jobs:310 - Starting applicant for job: Java Software Engineer at BeaconFire Inc.
2024-10-26 16:58:39.401 | DEBUG | src.aihawk_job_manager:is_blacklisted:472 - Checking if job is blacklisted: Java Software Engineer at BeaconFire Inc. in New Jersey, United States (On-site)
2024-10-26 16:58:39.402 | DEBUG | src.aihawk_job_manager:is_blacklisted:479 - Job blacklisted status: False
2024-10-26 16:58:39.403 | DEBUG | src.aihawk_job_manager:is_already_applied_to_company:502 - Already applied at BeaconFire Inc. (once per company policy), skipping...
2024-10-26 16:58:39.403 | DEBUG | src.aihawk_job_manager:write_to_file:386 - Writing job application result to file: skipped
2024-10-26 16:58:39.431 | DEBUG | src.aihawk_job_manager:write_to_file:413 - Job data appended to existing file: skipped
2024-10-26 16:58:39.431 | DEBUG | src.aihawk_job_manager:apply_jobs:310 - Starting applicant for job: Senior Full Stack Engineer at Harnham
2024-10-26 16:58:39.431 | DEBUG | src.aihawk_job_manager:is_blacklisted:472 - Checking if job is blacklisted: Senior Full Stack Engineer at Harnham in New York, NY (Hybrid)
2024-10-26 16:58:39.431 | DEBUG | src.aihawk_job_manager:is_blacklisted:479 - Job blacklisted status: False
2024-10-26 16:58:39.432 | DEBUG | src.aihawk_job_manager:is_already_applied_to_company:502 - Already applied at Harnham (once per company policy), skipping...
2024-10-26 16:58:39.433 | DEBUG | src.aihawk_job_manager:write_to_file:386 - Writing job application result to file: skipped
2024-10-26 16:58:39.461 | DEBUG | src.aihawk_job_manager:write_to_file:413 - Job data appended to existing file: skipped
2024-10-26 16:58:39.462 | DEBUG | src.aihawk_job_manager:apply_jobs:310 - Starting applicant for job: Senior Software Engineer at Storm2
2024-10-26 16:58:39.462 | DEBUG | src.aihawk_job_manager:is_blacklisted:472 - Checking if job is blacklisted: Senior Software Engineer at Storm2 in California, United States (On-site)
2024-10-26 16:58:39.462 | DEBUG | src.aihawk_job_manager:is_blacklisted:479 - Job blacklisted status: False
2024-10-26 16:58:39.463 | DEBUG | src.aihawk_job_manager:is_already_applied_to_company:502 - Already applied at Storm2 (once per company policy), skipping...
2024-10-26 16:58:39.463 | DEBUG | src.aihawk_job_manager:write_to_file:386 - Writing job application result to file: skipped
2024-10-26 16:58:39.492 | DEBUG | src.aihawk_job_manager:write_to_file:413 - Job data appended to existing file: skipped
2024-10-26 16:58:39.493 | DEBUG | src.aihawk_job_manager:apply_jobs:310 - Starting applicant for job: Senior Software Engineer at Agility Partners
2024-10-26 16:58:39.493 | DEBUG | src.aihawk_job_manager:is_blacklisted:472 - Checking if job is blacklisted: Senior Software Engineer at Agility Partners in Cincinnati, OH (On-site)
2024-10-26 16:58:39.493 | DEBUG | src.aihawk_job_manager:is_blacklisted:479 - Job blacklisted status: False
2024-10-26 16:58:39.494 | DEBUG | src.aihawk_job_manager:is_already_applied_to_company:502 - Already applied at Agility Partners (once per company policy), skipping...
2024-10-26 16:58:39.494 | DEBUG | src.aihawk_job_manager:write_to_file:386 - Writing job application result to file: skipped
2024-10-26 16:58:39.522 | DEBUG | src.aihawk_job_manager:write_to_file:413 - Job data appended to existing file: skipped
2024-10-26 16:58:39.523 | DEBUG | src.aihawk_job_manager:apply_jobs:310 - Starting applicant for job: Javascript Developer at Avance Consulting
2024-10-26 16:58:39.523 | DEBUG | src.aihawk_job_manager:is_blacklisted:472 - Checking if job is blacklisted: Javascript Developer at Avance Consulting in Plano, TX (On-site)
2024-10-26 16:58:39.523 | DEBUG | src.aihawk_job_manager:is_blacklisted:479 - Job blacklisted status: False
2024-10-26 16:58:39.524 | DEBUG | src.aihawk_job_manager:is_already_applied_to_company:502 - Already applied at Avance Consulting (once per company policy), skipping...
2024-10-26 16:58:39.524 | DEBUG | src.aihawk_job_manager:write_to_file:386 - Writing job application result to file: skipped
2024-10-26 16:58:39.552 | DEBUG | src.aihawk_job_manager:write_to_file:413 - Job data appended to existing file: skipped
2024-10-26 16:58:39.552 | DEBUG | src.aihawk_job_manager:apply_jobs:310 - Starting applicant for job: Python Developer at Perfict
2024-10-26 16:58:39.552 | DEBUG | src.aihawk_job_manager:is_blacklisted:472 - Checking if job is blacklisted: Python Developer at Perfict in Oakland, CA (Hybrid)
2024-10-26 16:58:39.552 | DEBUG | src.aihawk_job_manager:is_blacklisted:479 - Job blacklisted status: False
2024-10-26 16:58:39.554 | DEBUG | src.aihawk_job_manager:is_already_applied_to_company:502 - Already applied at Perfict (once per company policy), skipping...
2024-10-26 16:58:39.554 | DEBUG | src.aihawk_job_manager:write_to_file:386 - Writing job application result to file: skipped
2024-10-26 16:58:39.582 | DEBUG | src.aihawk_job_manager:write_to_file:413 - Job data appended to existing file: skipped
2024-10-26 16:58:39.582 | DEBUG | src.aihawk_job_manager:apply_jobs:310 - Starting applicant for job: Senior Software Engineer at Storm2
2024-10-26 16:58:39.582 | DEBUG | src.aihawk_job_manager:is_blacklisted:472 - Checking if job is blacklisted: Senior Software Engineer at Storm2 in Sunnyvale, CA (On-site)
2024-10-26 16:58:39.582 | DEBUG | src.aihawk_job_manager:is_blacklisted:479 - Job blacklisted status: False
2024-10-26 16:58:39.584 | DEBUG | src.aihawk_job_manager:is_already_applied_to_company:502 - Already applied at Storm2 (once per company policy), skipping...
2024-10-26 16:58:39.584 | DEBUG | src.aihawk_job_manager:write_to_file:386 - Writing job application result to file: skipped
2024-10-26 16:58:39.612 | DEBUG | src.aihawk_job_manager:write_to_file:413 - Job data appended to existing file: skipped
2024-10-26 16:58:39.612 | DEBUG | src.aihawk_job_manager:apply_jobs:310 - Starting applicant for job: Full Stack Engineer at Luna Data Solutions, Inc.
2024-10-26 16:58:39.612 | DEBUG | src.aihawk_job_manager:is_blacklisted:472 - Checking if job is blacklisted: Full Stack Engineer at Luna Data Solutions, Inc. in Austin, Texas Metropolitan Area (Hybrid)
2024-10-26 16:58:39.612 | DEBUG | src.aihawk_job_manager:is_blacklisted:479 - Job blacklisted status: False
2024-10-26 16:58:39.613 | DEBUG | src.aihawk_job_manager:is_already_applied_to_company:502 - Already applied at Luna Data Solutions, Inc. (once per company policy), skipping...
2024-10-26 16:58:39.614 | DEBUG | src.aihawk_job_manager:write_to_file:386 - Writing job application result to file: skipped
2024-10-26 16:58:39.642 | DEBUG | src.aihawk_job_manager:write_to_file:413 - Job data appended to existing file: skipped
2024-10-26 16:58:39.642 | DEBUG | src.aihawk_job_manager:apply_jobs:310 - Starting applicant for job: Full Stack Engineer at Mindlance
2024-10-26 16:58:39.642 | DEBUG | src.aihawk_job_manager:is_blacklisted:472 - Checking if job is blacklisted: Full Stack Engineer at Mindlance in Phoenix, AZ (Hybrid)
2024-10-26 16:58:39.642 | DEBUG | src.aihawk_job_manager:is_blacklisted:479 - Job blacklisted status: False
2024-10-26 16:58:39.644 | DEBUG | src.aihawk_job_manager:is_already_applied_to_company:502 - Already applied at Mindlance (once per company policy), skipping...
2024-10-26 16:58:39.644 | DEBUG | src.aihawk_job_manager:write_to_file:386 - Writing job application result to file: skipped
2024-10-26 16:58:39.672 | DEBUG | src.aihawk_job_manager:write_to_file:413 - Job data appended to existing file: skipped
2024-10-26 16:58:39.672 | DEBUG | src.aihawk_job_manager:apply_jobs:310 - Starting applicant for job: Senior Software Backend Engineer at aKUBE
2024-10-26 16:58:39.672 | DEBUG | src.aihawk_job_manager:is_blacklisted:472 - Checking if job is blacklisted: Senior Software Backend Engineer at aKUBE in California, United States (Remote)
2024-10-26 16:58:39.672 | DEBUG | src.aihawk_job_manager:is_blacklisted:479 - Job blacklisted status: False
2024-10-26 16:58:39.673 | DEBUG | src.aihawk_job_manager:is_already_applied_to_company:502 - Already applied at aKUBE (once per company policy), skipping...
2024-10-26 16:58:39.673 | DEBUG | src.aihawk_job_manager:write_to_file:386 - Writing job application result to file: skipped
2024-10-26 16:58:39.702 | DEBUG | src.aihawk_job_manager:write_to_file:413 - Job data appended to existing file: skipped
2024-10-26 16:58:39.703 | DEBUG | src.aihawk_job_manager:apply_jobs:310 - Starting applicant for job: Software Engineer at People Tech Group Inc
2024-10-26 16:58:39.703 | DEBUG | src.aihawk_job_manager:is_blacklisted:472 - Checking if job is blacklisted: Software Engineer at People Tech Group Inc in Seattle, WA (On-site)
2024-10-26 16:58:39.703 | DEBUG | src.aihawk_job_manager:is_blacklisted:479 - Job blacklisted status: False
2024-10-26 16:58:39.704 | DEBUG | src.aihawk_job_manager:is_already_applied_to_company:502 - Already applied at People Tech Group Inc (once per company policy), skipping...
2024-10-26 16:58:39.704 | DEBUG | src.aihawk_job_manager:write_to_file:386 - Writing job application result to file: skipped
2024-10-26 16:58:39.733 | DEBUG | src.aihawk_job_manager:write_to_file:413 - Job data appended to existing file: skipped
2024-10-26 16:58:39.734 | DEBUG | src.aihawk_job_manager:start_applying:161 - Applying to jobs on this page has been completed!
Sleeping for 20.315550088882446 seconds. Press 'y' to skip waiting. Timeout 60 seconds :
```
### Steps to reproduce
_No response_
### Expected behavior
_No response_
### Actual behavior
_No response_
### Branch
None
### Branch name
_No response_
### Python version
_No response_
### LLM Used
_No response_
### Model used
_No response_
### Additional context
_No response_ | closed | 2024-10-26T21:00:07Z | 2024-10-27T01:15:43Z | https://github.com/feder-cr/Jobs_Applier_AI_Agent_AIHawk/issues/619 | [
"bug"
] | surapuramakhil | 1 |
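The log in the issue above repeats the same three-step decision for every listing: a blacklist check, a once-per-company check, and then the result is written to file. As a reading aid, here is a minimal Python sketch of that flow; it is a reconstruction from the log only, not AIHawk's actual code, and the function and variable names are hypothetical.

```python
def decide(job, blacklist, applied_companies):
    """Return 'skipped' or 'apply' for a (title, company) job tuple,
    mirroring the blacklist and once-per-company checks in the log."""
    title, company = job
    if company in blacklist or title in blacklist:
        return "skipped"          # blacklisted job or company
    if company in applied_companies:
        return "skipped"          # once-per-company policy
    applied_companies.add(company)
    return "apply"

applied = set()
jobs = [("Senior Software Engineer", "Storm2"),
        ("Senior Software Engineer", "Storm2"),   # same company again
        ("Python Developer", "Perfict")]
results = [decide(j, blacklist=set(), applied_companies=applied) for j in jobs]
print(results)  # ['apply', 'skipped', 'apply']
```

With this simplified model, the second Storm2 listing is skipped exactly as the log shows.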
dask/dask | numpy | 11,073 | Unique Operation fails on dataframe repartitioned using set index after resetting the index |
**Describe the issue**:
**Minimal Complete Verifiable Example**:
```python
import pandas as pd
import dask.dataframe as dd
data = {
'Column1': range(30),
'Column2': range(30, 60)
}
pdf = pd.DataFrame(data)
# Convert the pandas DataFrame to a Dask DataFrame with a single
# partition; set_index below repartitions it into 3 via `divisions`
ddf = dd.from_pandas(pdf, npartitions=1)
ddf = ddf.set_index('Column1', sort=True, divisions=[0,10,20,29], shuffle='tasks')
print(ddf.npartitions)
ddf = ddf.reset_index()
unique = ddf['Column1'].unique().compute()
```
**Anything else we need to know?**:
**Environment**:
- Dask version: 2024.4.2
- Python version: 3.10
- Operating System: Mac OSx
- Install method (conda, pip, source): dask[dataframe]
| closed | 2024-04-25T16:25:55Z | 2024-04-25T17:49:05Z | https://github.com/dask/dask/issues/11073 | [
"needs triage"
] | mscanlon-exos | 1 |
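For readers unfamiliar with the `divisions=[0, 10, 20, 29]` argument in the repro: dask maps each sorted index value to a partition by bisecting the divisions list, with the last bound inclusive. The sketch below is a simplified stdlib model of that mapping, for illustration only, not dask's implementation:

```python
import bisect

def partition_for(value, divisions):
    """Map an index value to a partition id for sorted divisions
    like [0, 10, 20, 29]; the final bound is inclusive."""
    i = bisect.bisect_right(divisions, value) - 1
    return min(max(i, 0), len(divisions) - 2)

divisions = [0, 10, 20, 29]
print([partition_for(v, divisions) for v in (0, 9, 10, 29)])  # [0, 0, 1, 2]
```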
deedy5/primp | web-scraping | 90 | Implement requests-like exception hierarchy | open | 2025-02-07T09:25:17Z | 2025-02-07T18:11:45Z | https://github.com/deedy5/primp/issues/90 | [
"enhancement"
] | deedy5 | 0 | |
CorentinJ/Real-Time-Voice-Cloning | deep-learning | 350 | What is required to make voice data for this? | My friend has given me a challenge to make him swear with this tool using regular data of him reading samples of text. I got it working with recording my voice, but when I try to load his voice from an mp3 file I get "audioread.exceptions.NoBackendError"
What can I do to make the voice files processable? | closed | 2020-05-26T01:05:20Z | 2020-07-04T17:35:19Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/350 | [] | DrasticGray | 2 |
mwaskom/seaborn | data-visualization | 2,821 | Calling `sns.heatmap()` changes matplotlib rcParams | See the following example
```python
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
mpl.rcParams["figure.dpi"] = 120
mpl.rcParams["figure.facecolor"] = "white"
mpl.rcParams["figure.figsize"] = (9, 6)
data = sns.load_dataset("iris")
print(mpl.rcParams["figure.dpi"])
print(mpl.rcParams["figure.facecolor"])
print(mpl.rcParams["figure.figsize"])
#120.0
#white
#[9.0, 6.0]
fig, ax = plt.subplots()
sns.heatmap(data.corr(), vmin=-1, vmax=1, center=0, annot=True, linewidths=4, ax=ax);
print(mpl.rcParams["figure.dpi"])
print(mpl.rcParams["figure.facecolor"])
print(mpl.rcParams["figure.figsize"])
#72.0
#(1, 1, 1, 0)
#[6.0, 4.0]
```
If I call again
```python
mpl.rcParams["figure.dpi"] = 120
mpl.rcParams["figure.facecolor"] = "white"
mpl.rcParams["figure.figsize"] = (9, 6)
```
then it works fine, but I don't know why it changes the rcParams.
**Edit:** These are the versions being used:
```
Last updated: Wed May 25 2022
Python implementation: CPython
Python version : 3.9.12
IPython version : 8.3.0
matplotlib: 3.5.2
seaborn : 0.11.2
sys : 3.9.12 | packaged by conda-forge | (main, Mar 24 2022, 23:25:59)
[GCC 10.3.0]
Watermark: 2.3.0
``` | closed | 2022-05-25T19:16:45Z | 2022-05-27T11:13:29Z | https://github.com/mwaskom/seaborn/issues/2821 | [] | tomicapretto | 2 |
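A note on the report above: the values the params "reset" to (72 dpi, a 6x4 inch figure, a transparent facecolor) match the defaults that the Jupyter inline backend applies lazily when the first figure is rendered, so the inline backend rather than `sns.heatmap` itself is the likely culprit; this is a hypothesis, not confirmed in the thread. Either way, a defensive pattern is to scope rc changes with a save/restore context (matplotlib ships this as `matplotlib.rc_context`). A library-free sketch of the same idea, with a plain dict standing in for `rcParams`:

```python
import contextlib

@contextlib.contextmanager
def rc_scope(params, overrides):
    """Temporarily apply overrides to a params mapping, then restore
    the previous values even if the body mutates them further."""
    saved = {k: params.get(k) for k in overrides}
    params.update(overrides)
    try:
        yield params
    finally:
        params.update(saved)

rc = {"figure.dpi": 100.0}
with rc_scope(rc, {"figure.dpi": 120.0}):
    rc["figure.dpi"] = 72.0          # simulate the unwanted reset
print(rc["figure.dpi"])              # 100.0, the original value is restored
```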
zihangdai/xlnet | nlp | 224 | Text Classifier Prediction Problem | I trained a text classification model based on the XLNet pre-training model and got the corresponding ckpt file.

Then, based on this text classification model, I made predictions, but it always reported an error. The prediction script and the error are as follows.
**Scripts:**
python3 chinese_classifier.py
--do_predict=True
--eval_split=test
--task_name=inre
--data_dir="/home/luban/IR/data/IR/"
--output_dir="/home/luban/IR/output2/tfrecords/"
--model_dir="/home/luban/IR/output2/finetunedModel/"
--spiece_model_file="/home/luban/IR/sentencePiec/spm.model"
--model_config_path="xlnet/modelCkpt/config.json"
--init_checkpoint="xlnet/modelCkpt/model.ckpt"
--predict_dir="/home/luban/IR/predict/IR/"
--predict_ckpt="/home/luban/IR/output2/finetunedModel/model.ckpt-3000"
--max_seq_length=128
--predict_batch_size=16
--num_hosts=1
--num_core_per_host=1
--learning_rate=2e-5
--train_steps=3000
--warmup_steps=500
--save_steps=3000
--iterations=500
--dropout=0.05
**Results:**
INFO:tensorflow:Single device mode.
I0909 11:01:45.503506 139996976785152 tf_logging.py:115] Single device mode.
INFO:tensorflow:Using config: {'_model_dir': '/home/luban/IR/output3/finetunedModel/', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': 3000, '_save_checkpoints_secs': None, '_session_config': allow_soft_placement: true
, '_keep_checkpoint_max': 0, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f53893cd1d0>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1, '_tpu_config': TPUConfig(iterations_per_loop=500, num_shards=1, num_cores_per_replica=None, per_host_input_for_training=3, tpu_job_name=None, initial_infeed_sleep_secs=None, input_partition_dims=None), '_cluster': None}
I0909 11:01:47.359045 139996976785152 tf_logging.py:115] Using config: {'_model_dir': '/home/luban/IR/output3/finetunedModel/', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': 3000, '_save_checkpoints_secs': None, '_session_config': allow_soft_placement: true
, '_keep_checkpoint_max': 0, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f53893cd1d0>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1, '_tpu_config': TPUConfig(iterations_per_loop=500, num_shards=1, num_cores_per_replica=None, per_host_input_for_training=3, tpu_job_name=None, initial_infeed_sleep_secs=None, input_partition_dims=None), '_cluster': None}
WARNING:tensorflow:Estimator's model_fn (<function get_model_fn.<locals>.model_fn at 0x7f538c010730>) includes params argument, but params are not passed to Estimator.
W0909 11:01:47.359888 139996976785152 tf_logging.py:125] Estimator's model_fn (<function get_model_fn.<locals>.model_fn at 0x7f538c010730>) includes params argument, but params are not passed to Estimator.
INFO:tensorflow:Num of eval samples: 5000
I0909 11:01:47.390489 139996976785152 tf_logging.py:115] Num of eval samples: 5000
INFO:tensorflow:Do not overwrite tfrecord /home/luban/IR/output3/tfrecords/0.model.len-128.test.predict.tf_record exists.
I0909 11:01:47.390708 139996976785152 tf_logging.py:115] Do not overwrite tfrecord /home/luban/IR/output3/tfrecords/0.model.len-128.test.predict.tf_record exists.
INFO:tensorflow:Input tfrecord file /home/luban/IR/output3/tfrecords/0.model.len-128.test.predict.tf_record
I0909 11:01:47.390810 139996976785152 tf_logging.py:115] Input tfrecord file /home/luban/IR/output3/tfrecords/0.model.len-128.test.predict.tf_record
WARNING:tensorflow:From chinese_classifier.py:562: map_and_batch (from tensorflow.contrib.data.python.ops.batching) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.experimental.map_and_batch(...)`.
W0909 11:01:47.412828 139996976785152 tf_logging.py:125] From chinese_classifier.py:562: map_and_batch (from tensorflow.contrib.data.python.ops.batching) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.experimental.map_and_batch(...)`.
INFO:tensorflow:Calling model_fn.
I0909 11:01:47.428736 139996976785152 tf_logging.py:115] Calling model_fn.
INFO:tensorflow:memory input None
I0909 11:01:47.440355 139996976785152 tf_logging.py:115] memory input None
INFO:tensorflow:Use float type <dtype: 'float32'>
I0909 11:01:47.440568 139996976785152 tf_logging.py:115] Use float type <dtype: 'float32'>
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/array_ops.py", line 1551, in zeros
output = _constant_if_small(zero, shape, dtype, name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/array_ops.py", line 1508, in _constant_if_small
if np.prod(shape) < 1000:
File "/usr/local/lib/python3.6/dist-packages/numpy/core/fromnumeric.py", line 2585, in prod
initial=initial)
File "/usr/local/lib/python3.6/dist-packages/numpy/core/fromnumeric.py", line 83, in _wrapreduction
return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/math_ops.py", line 869, in binary_op_wrapper
y = ops.convert_to_tensor(y, dtype=x.dtype.base_dtype, name="y")
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 1050, in convert_to_tensor
as_ref=False)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 1146, in internal_convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/constant_op.py", line 282, in _dimension_tensor_conversion_function
raise ValueError("Cannot convert an unknown Dimension to a Tensor: %s" % d)
ValueError: Cannot convert an unknown Dimension to a Tensor: ?
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "chinese_classifier.py", line 914, in <module>
tf.app.run()
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/platform/app.py", line 125, in run
_sys.exit(main(argv))
File "chinese_classifier.py", line 883, in main
checkpoint_path=FLAGS.predict_ckpt)):
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/estimator/estimator.py", line 577, in predict
features, None, model_fn_lib.ModeKeys.PREDICT, self.config)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/estimator/estimator.py", line 1195, in _call_model_fn
model_fn_results = self._model_fn(features=features, **kwargs)
File "chinese_classifier.py", line 581, in model_fn
FLAGS, features, n_class, is_training)
File "/home/luban/IR/chineseClassifier/function_builder.py", line 155, in get_classification_loss
input_mask=inp_mask)
File "/home/luban/IR/chineseClassifier/xlnet.py", line 222, in __init__
) = modeling.transformer_xl(**tfm_args)
File "/home/luban/IR/chineseClassifier/modeling.py", line 500, in transformer_xl
dtype=tf_float)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/array_ops.py", line 1560, in zeros
shape = ops.convert_to_tensor(shape, dtype=dtypes.int32)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 1050, in convert_to_tensor
as_ref=False)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 1146, in internal_convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/array_ops.py", line 971, in _autopacking_conversion_function
return _autopacking_helper(v, dtype, name or "packed")
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/array_ops.py", line 922, in _autopacking_helper
constant_op.constant(elem, dtype=dtype, name=str(i)))
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/constant_op.py", line 208, in constant
value, dtype=dtype, shape=shape, verify_shape=verify_shape))
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/tensor_util.py", line 443, in make_tensor_proto
nparray = np.array(values, dtype=np_dt)
TypeError: __int__ returned non-int (type NoneType)
**What should I do? Thank you!**
| open | 2019-09-09T03:07:31Z | 2020-09-06T06:20:12Z | https://github.com/zihangdai/xlnet/issues/224 | [] | MissMcFly | 2 |
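The root of the traceback above is `np.prod(shape)` being evaluated on a shape that still contains an unknown (`None`) dimension, most likely the batch dimension at predict time; the shape must be fully concrete before `tf.zeros` can build the memory tensor. A TensorFlow-free sketch of why that size computation fails (illustrative only):

```python
def static_size(shape):
    """Number of elements for a fully known shape; mirrors what
    np.prod(shape) is asked to do inside tf.zeros()."""
    n = 1
    for dim in shape:
        n *= dim          # TypeError if dim is None (unknown dimension)
    return n

print(static_size((16, 128)))          # 2048
try:
    static_size((None, 128))           # unknown batch dimension
except TypeError as exc:
    print("failed:", type(exc).__name__)   # failed: TypeError
```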
ultralytics/ultralytics | pytorch | 18,820 | Image shape issue | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
If the images I am training on are not square, for example 300*200 pixels, will there be significant distortion after resizing? Does YOLOv11 pad them to square before resizing?
### Additional
_No response_ | open | 2025-01-22T10:04:10Z | 2025-01-22T10:45:56Z | https://github.com/ultralytics/ultralytics/issues/18820 | [
"question",
"detect"
] | chaojiniubi | 4 |
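To the question above: Ultralytics' default preprocessing letterboxes rather than stretches, i.e. it scales by a single aspect-preserving factor and pads the short side, so a 300*200 image is not distorted. The arithmetic can be sketched in pure Python as follows (a simplified model, not the actual `LetterBox` transform):

```python
def letterbox_dims(w, h, target=640):
    """Return the aspect-preserving scaled size plus the left/right and
    top/bottom padding needed to reach a target x target square."""
    scale = min(target / w, target / h)
    new_w, new_h = round(w * scale), round(h * scale)
    pad_w, pad_h = target - new_w, target - new_h
    return ((new_w, new_h),
            (pad_w // 2, pad_w - pad_w // 2),
            (pad_h // 2, pad_h - pad_h // 2))

size, lr, tb = letterbox_dims(300, 200)
print(size, lr, tb)  # (640, 427) (0, 0) (106, 107)
```

The wide side fills the target exactly and the short side is padded, so the content keeps its aspect ratio.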
lundberg/respx | pytest | 273 | `respx_mock` doesn't handle `//` in the path-section of URLs. | Hi,
I just stumbled upon the issue described in the title, which can be reproduced with the following `pytest` file.
```python
import httpx
import pytest
from pydantic import AnyHttpUrl
from respx import MockRouter
@pytest.mark.parametrize(
"url",
[
"http://localhost", # OK
"http://localhost/", # OK
"http://localhost//", # Fails
"http://localhost///", # Fails
"http://localhost/%2F", # Fails
"http://localhost/%2F/", # Fails
"http://localhost/%2F%2F", # Fails
],
)
async def test_respx_targeting(respx_mock: MockRouter, url: AnyHttpUrl) -> None:
route = respx_mock.get(url=url).respond(status_code=200)
result = httpx.get(url)
assert result.status_code == 200
assert route.called
```
The fails all take the form of:
```
respx.models.AllMockedAssertionError: RESPX: <Request('GET', 'http://localhost//')> not mocked!
```
Having `//` in the path section of a URL is valid according to [RFC 3986](https://www.ietf.org/rfc/rfc3986.txt) (see section '3: Syntax Components' and section '3.3. Path'); [Stack Overflow](https://stackoverflow.com/a/20524044) seems to concur.
| open | 2024-08-02T19:54:59Z | 2025-01-21T10:25:46Z | https://github.com/lundberg/respx/issues/273 | [
"bug"
] | Skeen | 3 |
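Independent of respx, Python's own URL parsing agrees with the RFC reading in the report: `/`, `//`, and `/%2F` are all distinct, valid paths, so a matcher that normalizes them away will mis-target. A quick stdlib check of the paths involved (illustration only, not respx's matching code):

```python
from urllib.parse import urlsplit, unquote

urls = ["http://localhost", "http://localhost/",
        "http://localhost//", "http://localhost/%2F"]
paths = [urlsplit(u).path for u in urls]
print(paths)               # ['', '/', '//', '/%2F']
print(unquote(paths[-1]))  # '//' once decoded, but still distinct on the wire
```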
modoboa/modoboa | django | 2,410 | TypeError: '<' not supported between instances of 'memoryview' and 'memoryview' | # Impacted versions
* OS Type: Debian/Ubuntu
* OS Version: 20.04
* Database Type: not sure
* Database version: X.y
* Modoboa: 1.17.0
* installer used: Yes
* Webserver: Nginx
# Steps to reproduce
It appears to be part of the following CRON job:
```
# Quarantine cleanup
0 0 * * * root $PYTHON $INSTANCE/manage.py qcleanup
```
# Current behavior
In a daily email, I get the following error:
```
Traceback (most recent call last):
File "/srv/modoboa/instance/manage.py", line 21, in <module>
main()
File "/srv/modoboa/instance/manage.py", line 17, in main
execute_from_command_line(sys.argv)
File "/srv/modoboa/env/lib/python3.8/site-packages/django/core/management/__init__.py", line 381, in execute_from_command_line
utility.execute()
File "/srv/modoboa/env/lib/python3.8/site-packages/django/core/management/__init__.py", line 375, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/srv/modoboa/env/lib/python3.8/site-packages/django/core/management/base.py", line 323, in run_from_argv
self.execute(*args, **cmd_options)
File "/srv/modoboa/env/lib/python3.8/site-packages/django/core/management/base.py", line 364, in execute
output = self.handle(*args, **options)
File "/srv/modoboa/env/lib/python3.8/site-packages/modoboa_amavis/management/commands/qcleanup.py", line 59, in handle
Msgs.objects.filter(time_num__lt=limit).delete()
File "/srv/modoboa/env/lib/python3.8/site-packages/django/db/models/query.py", line 711, in delete
deleted, _rows_count = collector.delete()
File "/srv/modoboa/env/lib/python3.8/site-packages/django/db/models/deletion.py", line 266, in delete
self.data[model] = sorted(instances, key=attrgetter("pk"))
TypeError: '<' not supported between instances of 'memoryview' and 'memoryview'
```
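The crash reduces to a plain Python behavior: `memoryview` objects do not define ordering, so `sorted()` over them fails. A minimal illustration (not modoboa code), including the usual workaround of casting to `bytes`:

```python
# Primary keys coming back from the driver as memoryview cannot be sorted:
pks = [memoryview(b"\x02"), memoryview(b"\x01")]

try:
    sorted(pks)
except TypeError as exc:
    # Same class of error as in the traceback above.
    print(exc)

# Casting to bytes restores a total ordering:
ordered = sorted(pks, key=bytes)
assert [bytes(m) for m in ordered] == [b"\x01", b"\x02"]
```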
Not sure how to proceed. The install is pretty vanilla. Thank you! | closed | 2021-11-23T18:02:05Z | 2021-11-30T16:36:20Z | https://github.com/modoboa/modoboa/issues/2410 | [] | binarydad | 1 |
automl/auto-sklearn | scikit-learn | 867 | Dockerfile is not working | ## Describe the bug ##
The provided dockerfile does not build on Mac.
## To Reproduce ##
Steps to reproduce the behavior:
- Run docker build on the provided Dockerfile
- See error:
```
Collecting lazy_import
Downloading lazy_import-0.2.2.tar.gz (15 kB)
ERROR: Command errored out with exit status 1:
command: /usr/bin/python3 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-ogvjzynh/lazy-import/setup.py'"'"'; __file__='"'"'/tmp/pip-install-ogvjzynh/lazy-import/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-zdklootb
cwd: /tmp/pip-install-ogvjzynh/lazy-import/
Complete output (7 lines):
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-install-ogvjzynh/lazy-import/setup.py", line 6, in <module>
readme = infile.read()
File "/usr/lib/python3.5/encodings/ascii.py", line 26, in decode
return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 621: ordinal not in range(128)
----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
The command '/bin/sh -c curl https://raw.githubusercontent.com/automl/auto-sklearn/master/requirements.txt | xargs -n 1 -L 1 pip3 install' returned a non-zero code: 123
```
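The root cause appears to be the ASCII default codec in that base image's Python 3.5: reading a UTF-8 README under the ASCII codec fails on the first non-ASCII byte. A minimal stdlib illustration (the usual Dockerfile-side fix is to set a UTF-8 locale, e.g. `ENV LANG C.UTF-8`, before running pip):

```python
# lazy_import's README contains a UTF-8 right single quote (0xe2 0x80 0x99).
data = "don\u2019t".encode("utf-8")

try:
    data.decode("ascii")  # what open(...) does under an ASCII locale
except UnicodeDecodeError as exc:
    print(exc)  # 'ascii' codec can't decode byte 0xe2 ...

# Decoding with the correct codec succeeds:
assert data.decode("utf-8") == "don\u2019t"
```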
## Expected behavior ##
A clear and concise description of what you expected to happen.
## Environment and installation: ##
Please give details about your installation:
* OS: Mac
* Conda
* Python3.7
* Auto-sklearn version: 0.0.7
| closed | 2020-05-30T21:41:59Z | 2020-07-23T14:38:10Z | https://github.com/automl/auto-sklearn/issues/867 | [] | wlongxiang | 8 |
flairNLP/flair | pytorch | 2,914 | Regarding flair/ner-english-ontonotes model fine tuning | I thought of `Fine-Tuning` the model `flair/ner-english-ontonotes` here `https://huggingface.co/flair/ner-english-ontonotes`.
I couldn't find the following files in `Files and Verions` https://huggingface.co/flair/ner-english-ontonotes/tree/main
- tokenizer_config.josn
- config.json
- some more tokens related files
In all the Hugging face models, we have these files.
**Am I missing something?**
If these are the only files, is there any Google Colab code I can use to fine-tune this model on my own data and labels?
My data looks like this:
```
{text: "Texas USA TX DRIVER LICENSE B Lamma 4d DL 12345678 9 Class AM to Iss 04/05/21 Ab Exp 07130/2021 DOB 03/03/2021
1 B 2 GERALD 8 2120 OLD MAIN STREET ANYTOWN TX 123456-0000 073076 12 Restrictions A 9a End P 16 Hgt 5'-04" 15 Sex F 18 Eyes
BLU 5 DD 12345678900000000000 Gerald B",
tags:
[{'start': 278, 'end': 293, 'label': 'PERSON_NAME', 'ngram': 'Gerald B'},
{'start': 131, 'end': 137, 'label': 'FIRST_NAME', 'ngram': 'GERALD'},
{'start': 118, 'end': 130, 'label': 'LAST_NAME', 'ngram': 'B'},
{'start': 76, 'end': 84, 'label': 'ISSUE_DATE', 'ngram': '04/05/21'},
{'start': 92, 'end': 102, 'label': 'EXPIRY_DATE', 'ngram': '07130/2021'},
{'start': 107, 'end': 117, 'label': 'DATE_OF_BIRTH', 'ngram': '03/03/2021'},
{'start': 49, 'end': 57, 'label': 'DRIVER_LICENSE_NUMBER', 'ngram': '12345678'},
{'start': 140, 'end': 182, 'label': 'ADDRESS', 'ngram': '2120 OLD MAIN STREET ANYTOWN TX 123456-0000'}]}
```
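Before converting annotations like these, it is worth checking that each span's offsets actually slice out its `ngram`. A small stdlib checker (hypothetical helper, shown on a toy example rather than the data above):

```python
def check_spans(text, tags):
    """Return the tags whose (start, end) slice does not match their ngram."""
    return [t for t in tags if text[t["start"]:t["end"]] != t["ngram"]]

text = "GERALD B 2120 OLD MAIN STREET"
tags = [
    {"start": 0, "end": 6, "label": "FIRST_NAME", "ngram": "GERALD"},
    {"start": 7, "end": 8, "label": "LAST_NAME", "ngram": "B"},
]

assert check_spans(text, tags) == []  # all offsets line up
```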
| closed | 2022-08-22T08:18:31Z | 2023-04-02T16:54:42Z | https://github.com/flairNLP/flair/issues/2914 | [
"question",
"wontfix"
] | pratikchhapolika | 4 |
NVIDIA/pix2pixHD | computer-vision | 56 | minor spell typo | your no_flip option confuses between argumentation and augmentation.
check_here:
https://github.com/NVIDIA/pix2pixHD/blob/master/options/base_options.py
Maybe a developer got too bored with the repeated word "argument" :/ | closed | 2018-08-25T08:28:09Z | 2019-06-09T23:28:52Z | https://github.com/NVIDIA/pix2pixHD/issues/56 | [] | syyunn | 2 |
nidhaloff/igel | automation | 6 | provide a way to do one hot encoding | the user should be able to use one hot encoding in the yaml file
| closed | 2020-09-08T22:48:10Z | 2020-09-10T12:10:15Z | https://github.com/nidhaloff/igel/issues/6 | [
"enhancement",
"good first issue"
] | nidhaloff | 1 |
seleniumbase/SeleniumBase | pytest | 3,111 | The CF CAPTCHAs changed again (on Linux) | ## The CF CAPTCHAs changed again (on Linux)
### CI started failing:
<img width="830" alt="Screenshot 2024-09-09 at 10 43 29 AM" src="https://github.com/user-attachments/assets/65b2d952-dbc8-4437-912c-498f024454ff">
---
### This is how it normally looks when passing:
(`PyAutoGUI` clicks the CAPTCHA successfully, and then takes you to the real page.)
<img width="864" alt="Screenshot 2024-09-09 at 10 45 10 AM" src="https://github.com/user-attachments/assets/85184587-8ce9-4c8d-88d9-679d40d1bcc3">
---
I'm looking into what changed. Changes come frequently, as you may have seen in UC Mode Video 3: https://www.youtube.com/watch?v=-EpZlhGWo9k, where I talked about "The Great CAPTCHA Duel".
If you figure out what changed before I do, let me know.
| closed | 2024-09-09T14:55:54Z | 2024-09-11T05:44:32Z | https://github.com/seleniumbase/SeleniumBase/issues/3111 | [
"workaround exists",
"UC Mode / CDP Mode",
"Fun"
] | mdmintz | 6 |
seleniumbase/SeleniumBase | web-scraping | 2,131 | Seleniumbase is crashed when i run the script | Seleniumbase is crashed when i run the script is there a problem in it ? | closed | 2023-09-22T19:27:23Z | 2023-09-22T20:33:59Z | https://github.com/seleniumbase/SeleniumBase/issues/2131 | [
"invalid",
"can't reproduce"
] | ahmedabdelhamedz | 1 |
albumentations-team/albumentations | deep-learning | 2,061 | [MaskDropout] Remove objects with particular mask classes. | Use case: tracking. We may have object moving, and to simulate occlusions, we cut it out. But only this object, not others. | open | 2024-11-06T01:39:24Z | 2024-11-06T01:39:24Z | https://github.com/albumentations-team/albumentations/issues/2061 | [
"enhancement"
] | ternaus | 0 |
mljar/mercury | data-visualization | 360 | PDF download not working | I am trying to download PDF but the web page becomes greyed out and unresponsive with the spinning wheels that keeps spinning

| open | 2023-09-04T12:00:54Z | 2023-10-23T10:53:26Z | https://github.com/mljar/mercury/issues/360 | [
"bug"
] | gioxc88 | 8 |
chiphuyen/stanford-tensorflow-tutorials | nlp | 30 | advise to add a Jupyter Notebook | hi, here is a suggestion, how about add a "Jupyter Notebook" in example PATH? | open | 2017-06-18T09:05:43Z | 2017-07-11T17:47:48Z | https://github.com/chiphuyen/stanford-tensorflow-tutorials/issues/30 | [] | DoneHome | 1 |
huggingface/transformers | tensorflow | 36,040 | `Llama-3.2-11B-Vision-Instruct` (`mllama`) FSDP fails if grad checkpointing is enabled | ### System Info
1 node with 4 A100 40GB GPUs launched by SkyPilot (`A100:4`) on GCP
### Who can help?
### What happened?
FSDP SFT fine-tuning of `meta-llama/Llama-3.2-90B-Vision-Instruct` on 1 node with 4 `A100-40GB` GPU-s with TRL trainer (`trl.SFTTrainer`) started to fail for us after upgrade to `transformers>=4.46`, including `transformers==4.48.2`:
Sample error for `sdpa` attention:
```
[rank2]: return self._call_impl(*args, **kwargs)
[rank2]: File "/home/gcpuser/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
[rank2]: return forward_call(*args, **kwargs)
[rank2]: File "/home/gcpuser/miniconda3/lib/python3.10/site-packages/transformers/models/mllama/modeling_mllama.py", line 798, in forward
[rank2]: attn_output = torch.nn.functional.scaled_dot_product_attention(
[rank2]: RuntimeError: The expanded size of the tensor (46) must match the existing size (23) at non-singleton dimension 3. Target sizes: [2, 32, 23, 46]. Tensor sizes: [2, 1, 23, 23]
```
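The failure above is a broadcasting mismatch: an attention mask of shape `[2, 1, 23, 23]` can expand over the head dimension (1 → 32) but not over the last dimension (23 vs. 46). A minimal stdlib check of the broadcasting rule (illustrative only, not the actual mllama code):

```python
def broadcastable(src, dst):
    """NumPy-style rule: each src dim must equal the dst dim or be 1."""
    if len(src) != len(dst):
        return False
    return all(s == d or s == 1 for s, d in zip(src, dst))

mask_shape = (2, 1, 23, 23)      # mask built for key length 23
target_shape = (2, 32, 23, 46)   # attention scores expect key length 46

assert not broadcastable(mask_shape, target_shape)   # the reported error
assert broadcastable(mask_shape, (2, 32, 23, 23))    # fine when lengths agree
```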
It fails with similar error messages for `eager` attention as well.
This affects both full-finetuning and LoRA tuning.
Disabling grad checkpointing (w/ smaller batch size) resolves the error.
Note that if we install `transformers>=4.45.2,<4.46` then training works w/o the error under the same settings w/ gradient checkpointing on or off. It's likely the regression is related to this attention refactor: https://github.com/huggingface/transformers/pull/35235
### Steps to reproduce the bug
1. Install `transformers>=4.48.2,<4.49`, `trl>=0.13.0,<0.14`
2. FSDP tune `meta-llama/Llama-3.2-90B-Vision-Instruct` using `torchrun`
Accelerate environment variables for FSDP:
` {'ACCELERATE_DYNAMO_BACKEND': 'NO', 'ACCELERATE_DYNAMO_MODE': 'default', 'ACCELERATE_DYNAMO_USE_FULLGRAPH': 'False', 'ACCELERATE_DYNAMO_USE_DYNAMIC': 'False', 'FSDP_CPU_RAM_EFFICIENT_LOADING': 'true', 'FSDP_USE_ORIG_PARAMS': 'true', 'ACCELERATE_USE_FSDP': 'true', 'FSDP_SHARDING_STRATEGY': 'HYBRID_SHARD', 'FSDP_OFFLOAD_PARAMS': 'false', 'FSDP_BACKWARD_PREFETCH': 'BACKWARD_PRE', 'FSDP_FORWARD_PREFETCH': 'false', 'FSDP_STATE_DICT_TYPE': 'FULL_STATE_DICT', 'FSDP_AUTO_WRAP_POLICY': 'TRANSFORMER_BASED_WRAP', 'FSDP_MIN_NUM_PARAMS': '100000', 'FSDP_TRANSFORMER_CLS_TO_WRAP': 'MllamaSelfAttentionDecoderLayer,MllamaCrossAttentionDecoderLayer,MllamaVisionEncoderLayer', 'FSDP_SYNC_MODULE_STATES': 'true', 'FSDP_ACTIVATION_CHECKPOINTING': 'true'}
`
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I don't yet have a standalone repro script for this issue (it was reproduced as part of a different system). If one is required and you can't easily reproduce the issue with your own scripts based on the description above, please let me know.
### Expected behavior
No error | open | 2025-02-05T01:23:16Z | 2025-03-08T17:55:39Z | https://github.com/huggingface/transformers/issues/36040 | [
"bug"
] | nikg4 | 3 |
datapane/datapane | data-visualization | 376 | [Bug]: Report upload error | ### Is there an existing issue for this?
- [X] I have searched for similar issues and discussions
### Bug Description
```markdown
I used dp.upload_report() to upload a report. It was fine until yesterday, but today it is throwing the error below. The dp.View uses Group and HTML. Within the Group I have Media and Table.
raise DPClientError(msg)
datapane.client.exceptions.DPClientError: Group has less than 1 objects
I tried running the upload_report with only one piece of HTML in dp.View and it is working. But with other elements, it is throwing the error.
```
### System Information
```markdown
- Datapane version: 0.16.2
- Python version:Python 2.7.16
- Operating System: Mac
- Using Jupyter: No
- Pip or Conda: pip
- Dependencies:
- pandas:
- ...
```
### Anything else?
Full stack trace
Configuring datapane logging in library mode
[22:10:36] [DEBUG] No Bokeh Found
Uploading report and associated data - *please wait...*
Traceback (most recent call last):
File "/Users/akuncheria/Documents/GSR-2021Feb/UCBerkeley_GSR/city-factsheet/city-factsheet-sanfranciscobayarea/results/../scripts/report.py", line 101, in <module>
report()
File "/Users/akuncheria/Documents/GSR-2021Feb/UCBerkeley_GSR/city-factsheet/city-factsheet-sanfranciscobayarea/results/../scripts/report.py", line 96, in report
dp.upload_report(report_content, name=f'Smart Cities Research Center: {city_name_report}',
File "/usr/local/lib/python3.11/site-packages/datapane/processors/api.py", line 191, in upload_report
Pipeline(s).pipe(PreProcessView(is_finalised=True)).pipe(ConvertXML()).pipe(PreUploadProcessor()).result
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/datapane/processors/types.py", line 60, in pipe
y = p.__call__(self._x) # need to call as positional args
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/datapane/processors/processors.py", line 60, in __call__
v.accept(pp)
File "/usr/local/lib/python3.11/site-packages/datapane/blocks/base.py", line 84, in accept
visitor.visit(self)
File "/usr/local/lib/python3.11/site-packages/multimethod/__init__.py", line 315, in __call__
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/datapane/view/visitors.py", line 121, in visit
_ = b.traverse(self)
^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/datapane/blocks/layout.py", line 70, in traverse
return reduce(lambda _visitor, block: block.accept(_visitor), self.blocks, visitor)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/datapane/blocks/layout.py", line 70, in <lambda>
return reduce(lambda _visitor, block: block.accept(_visitor), self.blocks, visitor)
^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/datapane/blocks/base.py", line 84, in accept
visitor.visit(self)
File "/usr/local/lib/python3.11/site-packages/multimethod/__init__.py", line 315, in __call__
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/datapane/view/visitors.py", line 115, in visit
raise DPClientError(msg)
datapane.client.exceptions.DPClientError: Group has less than 1 objects
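The message `Group has less than 1 objects` indicates that one Group was constructed with zero child blocks, e.g. when its children come from a comprehension that filtered everything out. A hedged pure-Python sketch of a pre-upload guard (hypothetical helper, not datapane's API):

```python
def empty_groups(groups):
    """Return the (name, blocks) pairs that would fail Group validation."""
    return [(name, blocks) for name, blocks in groups if len(blocks) < 1]

# Imagine building one group per city, where one city yielded no media/tables:
groups = [
    ("San Francisco", ["media", "table"]),
    ("Empty City", []),  # this one would trigger the DPClientError
]

assert empty_groups(groups) == [("Empty City", [])]
```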
Please run with `dp.enable_logging()`, restart your Jupyter kernel/Python instance, and/or visit https://www.github.com/datapane/datapane to raise issue / discuss if error repeats | closed | 2023-05-17T20:34:23Z | 2023-05-24T05:38:43Z | https://github.com/datapane/datapane/issues/376 | [
"bug",
"triage"
] | anu-kuncheria | 1 |