organization string | repo_name string | base_commit string | iss_html_url string | iss_label string | title string | body string | code null | pr_html_url string | commit_html_url string | file_loc string | own_code_loc list | ass_file_loc list | other_rep_loc list | analysis dict | loctype dict | iss_has_pr int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
THUDM | ChatGLM-6B | 8633db1503fc3b0edc1d035f64aa35dce5d97969 | https://github.com/THUDM/ChatGLM-6B/issues/622 |  | [BUG/Help] During P-Tuning with PRE_SEQ_LEN=512 specified, answers after training are still cut off at around one hundred characters. How should I adjust this? | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
The training parameters are as follows:
PRE_SEQ_LEN=512
LR=2e-2
CUDA_VISIBLE_DEVICES=0 python3 main.py \
--do_train \
--train_file ./data/gwddc.json \
--validation_file ./data/gwddc_test.json \
--prompt_column instruction \
--response_column output \
--overwrite_cache \
--model_name_or_path THUDM/chatglm-6b \
--output_dir output/adgen-chatglm-6b-pt-gwddc-v3 \
--overwrite_output_dir \
--max_source_length 64 \
--max_target_length 64 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps 16 \
--predict_with_generate \
--max_steps 3000 \
--logging_steps 10 \
--save_steps 1000 \
--learning_rate $LR \
--pre_seq_len $PRE_SEQ_LEN
Training succeeds, loading the checkpoint model also succeeds, and the model answers input prompts normally. However, the answers are still very short, and sometimes they are cut off midway. How should I adjust the training parameters?
### Expected Behavior
_No response_
### Steps To Reproduce
1. ./data/gwddc.json is my own training set; it contains fewer than 2,000 prompts
2. Run with the parameters above; the training output is as follows:
...
{'loss': 0.0371, 'learning_rate': 0.0, 'epoch': 96.77}
Saving PrefixEncoder
{'train_runtime': 21212.1807, 'train_samples_per_second': 9.051, 'train_steps_per_second': 0.141, 'train_loss': 0.2381483610868454, 'epoch': 96.77}
***** train metrics *****
epoch = 96.77
train_loss = 0.2381
train_runtime = 5:53:32.18
train_samples = 1982
train_samples_per_second = 9.051
train_steps_per_second = 0.141
Could you help check whether the train_loss is the problem? Do I need to increase the number of training iterations?
### Environment
```markdown
- OS: CentOS 7.6
- Python: 3.9
- Transformers: 4.27.1
- PyTorch: 2.0.0+cu117
- CUDA Support (`python -c "import torch; print(torch.cuda.is_available())"`) :True
```
### Anything else?
Also, I specifically trained it on "Who are you?", but after deployment that did not take effect either. | null | null | null | {'base_commit': '8633db1503fc3b0edc1d035f64aa35dce5d97969', 'files': [{'path': 'ptuning/README.md', 'Loc': {'(None, None, 180)': {'mod': [180]}}, 'status': 'modified'}, {'path': 'ptuning/arguments.py', 'Loc': {"('DataTrainingArguments', None, 65)": {'mod': [123]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "4",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Doc"
} | {
"code": [
"ptuning/arguments.py"
],
"doc": [
"ptuning/README.md"
],
"test": [],
"config": [],
"asset": []
} | null | |
THUDM | ChatGLM-6B | a14bc1d32452d92613551eb5d523e00950913710 | https://github.com/THUDM/ChatGLM-6B/issues/353 | enhancement | [Help] How to support multiple GPUs | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
For internal company use, we installed 2 GPUs, but found that with the default configuration only 1 GPU is running. How can I make use of multiple GPUs?
### Expected Behavior
_No response_
### Steps To Reproduce
None
### Environment
```markdown
OS: Ubuntu 20.04
Python: 3.8
Transformers: 4.26.1
PyTorch: 1.12
CUDA Support: True
```
### Anything else?
_No response_ | null | null | null | {'base_commit': 'a14bc1d32452d92613551eb5d523e00950913710', 'files': [{'path': 'README.md', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "3\nHow to support multiple GPUs",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [],
"doc": [
"README.md"
],
"test": [],
"config": [],
"asset": []
} | null |
huggingface | transformers | 34f28b2a1342fd72c2e4d4e5613855bfb9f35d34 | https://github.com/huggingface/transformers/issues/1225 | wontfix | Bert output last hidden state | ## ❓ Questions & Help
Hi,
Suppose we have an utterance of length 24 (considering special tokens) and we right-pad it with 0 to max length of 64.
If we use a pretrained BERT model to get the last hidden states, the output would be of size [1, 64, 768].
Can we use just the first 24 as the hidden states of the utterance? I mean, is it right to say that output[0, :24, :] has all the required information?
I realized that from index 24:64, the outputs have float values as well.
"iss_type": "5",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"src/transformers/models/bert/modeling_bert.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
huggingface | transformers | 82c7e879876822864b5ceaf2c99eb01159266bcd | https://github.com/huggingface/transformers/issues/27200 |  | dataset download error in speech recognition examples | ### System Info
- `transformers` version: 4.35.0.dev0
- Platform: Linux-5.15.0-43-generic-x86_64-with-glibc2.17
- Python version: 3.8.18
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: 0.24.1
- Accelerate config: not found
- PyTorch version (GPU?): 1.10.0+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@stevhliu and @MKhalusova
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_ctc.py \
--dataset_name="common_voice" \
--model_name_or_path="facebook/wav2vec2-large-xlsr-53" \
--dataset_config_name="tr" \
--output_dir="./wav2vec2-common_voice-tr-demo" \
--overwrite_output_dir \
--num_train_epochs="15" \
--per_device_train_batch_size="16" \
--gradient_accumulation_steps="2" \
--learning_rate="3e-4" \
--warmup_steps="500" \
--evaluation_strategy="steps" \
--text_column_name="sentence" \
--length_column_name="input_length" \
--save_steps="400" \
--eval_steps="100" \
--layerdrop="0.0" \
--save_total_limit="3" \
--freeze_feature_encoder \
--gradient_checkpointing \
--chars_to_ignore , ? . ! - \; \: \" “ % ‘ ” � \
--fp16 \
--group_by_length \
--push_to_hub \
--do_train --do_eval
### Expected behavior
When I ran the default command, which sets `dataset_name` to "common_voice", I got a warning:
```
/home/xintong/.cache/huggingface/modules/datasets_modules/datasets/common_voice/220833898d6a60c50f621126e51fb22eb2dfe5244392c70dccd8e6e2f055f4bf/common_voice.py:634: FutureWarning:
This version of the Common Voice dataset is deprecated.
You can download the latest one with
>>> load_dataset("mozilla-foundation/common_voice_11_0", "en")
warnings.warn(
Generating train split: 0%| | 0/1831 [00:00<?, ? examples/s]
Traceback (most recent call last):
File "/home/xintong/miniconda3/envs/test/lib/python3.8/tarfile.py", line 2578, in next
tarinfo = self.tarinfo.fromtarfile(self)
File "/home/xintong/miniconda3/envs/test/lib/python3.8/tarfile.py", line 1283, in fromtarfile
obj = cls.frombuf(buf, tarfile.encoding, tarfile.errors)
File "/home/xintong/miniconda3/envs/test/lib/python3.8/tarfile.py", line 1221, in frombuf
raise TruncatedHeaderError("truncated header")
tarfile.TruncatedHeaderError: truncated header
```
When I changed this to `mozilla-foundation/common_voice_11_0`, it passed.
```
Downloading builder script: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 8.13k/8.13k [00:00<00:00, 30.3MB/s]
Downloading readme: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 14.4k/14.4k [00:00<00:00, 19.2MB/s]
Downloading extra modules: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3.44k/3.44k [00:00<00:00, 19.9MB/s]
Downloading extra modules: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 60.9k/60.9k [00:00<00:00, 304kB/s]
Downloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 12.2k/12.2k [00:00<00:00, 25.6MB/s]
Downloading data: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 568M/568M [00:07<00:00, 71.7MB/s]
Downloading data: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 233M/233M [00:02<00:00, 78.6MB/s]
Downloading data: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 285M/285M [00:04<00:00, 67.7MB/s]
Downloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4.86M/4.86M [00:00<00:00, 73.3MB/s]
Downloading data: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 109M/109M [00:01<00:00, 80.4MB/s]
Downloading data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:21<00:00, 4.24s/it]
Extracting data files: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:07<00:00, 1.54s/it]
Downloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5.76M/5.76M [00:00<00:00, 56.0MB/s]
Downloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.17M/2.17M [00:00<00:00, 54.1MB/s]
Downloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.18M/2.18M [00:00<00:00, 64.3MB/s]
Downloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 32.8k/32.8k [00:00<00:00, 53.1MB/s]
Downloading data: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 800k/800k [00:00<00:00, 59.8MB/s]
Downloading data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:05<00:00, 1.01s/it]
Extracting data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 2954.98it/s]
``` | null | null | null | {'base_commit': '82c7e879876822864b5ceaf2c99eb01159266bcd', 'files': [{'path': 'examples/pytorch/speech-recognition/README.md', 'Loc': {'(None, None, 69)': {'mod': [69]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Doc"
} | {
"code": [],
"doc": [
"examples/pytorch/speech-recognition/README.md"
],
"test": [],
"config": [],
"asset": []
} | null | |
huggingface | transformers | 0e82f0cbc28b41b3d87a5e4069dc0e20bacc2494 | https://github.com/huggingface/transformers/issues/12081 |  | GPT2 Flax "TypeError: JAX only supports number and bool dtypes, got dtype object in array" | On GPU
```
>>> from transformers import AutoTokenizer, FlaxAutoModelForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained("gpt2-medium")
>>> model = FlaxAutoModelForCausalLM.from_pretrained("gpt2-medium")
>>> input_context = "The dog"
>>> # encode input context
>>> input_ids = tokenizer(input_context, return_tensors="jax").input_ids
>>> # generate candidates using sampling
>>> outputs = model.generate(input_ids=input_ids, max_length=20, top_k=30, do_sample=True)
TypeError: JAX only supports number and bool dtypes, got dtype object in array
```
@patrickvonplaten @patil-suraj | null | null | null | {'base_commit': '0e82f0cbc28b41b3d87a5e4069dc0e20bacc2494', 'files': [{'path': 'src/transformers/models/gpt2/modeling_flax_gpt2.py', 'Loc': {"('FlaxGPT2LMHeadModule', None, 553)": {'mod': []}}, 'status': 'modified'}, {'path': 'src/transformers/models/gpt2/tokenization_gpt2_fast.py', 'Loc': {"('GPT2TokenizerFast', None, 70)": {'mod': []}}, 'status': 'modified'}, {'Loc': [6, 7], 'path': None}]} | [
{
"Loc": [
6,
7
],
"path": null
}
] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "3",
"info_type": "Code"
} | {
"code": [
null,
"src/transformers/models/gpt2/tokenization_gpt2_fast.py",
"src/transformers/models/gpt2/modeling_flax_gpt2.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
huggingface | transformers | 322037e842e5e89080918c824998c17722df6f19 | https://github.com/huggingface/transformers/issues/10079 |  | Unclear error "NotImplementedError" while saving tokenizer. How to fix it? | Here is my tokenizer code and how I save it to a JSON file "/content/bert-datas7.json":
````
from tokenizers import normalizers
from tokenizers.normalizers import Lowercase, NFD, StripAccents
bert_tokenizer.pre_tokenizer = Whitespace()
from tokenizers.processors import TemplateProcessing
bert_tokenizer.post_processor = TemplateProcessing(
single="[CLS] $A [SEP]",
pair="[CLS] $A [SEP] $B:1 [SEP]:1",
special_tokens=[
("[CLS]", 1),
("[SEP]", 2),
("[PAD]", 3),
],
)
from tokenizers.trainers import WordPieceTrainer
trainer = WordPieceTrainer(
vocab_size=30522, special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"], pad_to_max_length=True
)
files = [f"/content/For_ITMO.txt" for split in ["test", "train", "valid"]]
bert_tokenizer.train(trainer, files)
model_files = bert_tokenizer.model.save("data", "/content/For_ITMO.txt")
bert_tokenizer.model = WordPiece.from_file(*model_files, unk_token="[UNK]", pad_to_max_length=True)
bert_tokenizer.save("/content/bert-datas7.json")
````
When I output the tokenizer, nothing is displayed for name_or_path. Is this normal?
````
tokenizer = PreTrainedTokenizerFast(tokenizer_file='/content/bert-datas7.json')
tokenizer.add_special_tokens({'pad_token': '[PAD]'})
print(tokenizer)
>>> PreTrainedTokenizerFast(name_or_path='', vocab_size=1435, model_max_len=1000000000000000019884624838656, is_fast=True, padding_side='right', special_tokens={'pad_token': '[PAD]'})
````
Also, when I try to save my tokenizer, I get an error without any explanation. How can I rewrite the code to fix all this?
#9658
#10039
[For_ITMO.txt-vocab (1) (1).txt](https://github.com/huggingface/transformers/files/5945659/For_ITMO.txt-vocab.1.1.txt)
````
tokenizer.save_pretrained("/content/tokennizerrrr")
NotImplementedError Traceback (most recent call last)
<ipython-input-11-efc48254a528> in <module>()
----> 1 tokenizer.save_pretrained("/content/tokennizerrrr")
2 frames
/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py in save_vocabulary(self, save_directory, filename_prefix)
2042 :obj:`Tuple(str)`: Paths to the files saved.
2043 """
-> 2044 raise NotImplementedError
2045
2046 def tokenize(self, text: str, pair: Optional[str] = None, add_special_tokens: bool = False, **kwargs) -> List[str]:
NotImplementedError:
````
| null | null | null | {'base_commit': '322037e842e5e89080918c824998c17722df6f19', 'files': [{'path': 'src/transformers/tokenization_utils_fast.py', 'Loc': {"('PreTrainedTokenizerFast', '_save_pretrained', 505)": {'mod': [509]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"src/transformers/tokenization_utils_fast.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
huggingface | transformers | 77a257fc210a56f1fd0d75166ecd654cf58111f3 | https://github.com/huggingface/transformers/issues/8403 |  | [s2s finetune] huge increase in memory demands with --fp16 native amp | While working on https://github.com/huggingface/transformers/issues/8353 I discovered that `--fp16` causes a 10x+ increase in GPU memory demands.
e.g. I can run bs=12 w/o `--fp16`
```
cd examples/seq2seq
export BS=12; rm -rf distilbart-cnn-12-6; python finetune.py --learning_rate=3e-5 --gpus 1 \
--do_train --do_predict --val_check_interval 0.25 --n_val 500 --num_train_epochs 2 --freeze_encoder \
--freeze_embeds --data_dir cnn_dm --max_target_length 142 --val_max_target_length=142 \
--train_batch_size=$BS --eval_batch_size=$BS --gradient_accumulation_steps 1 \
--model_name_or_path sshleifer/student_cnn_12_6 --tokenizer_name facebook/bart-large \
--warmup_steps 500 --output_dir distilbart-cnn-12-6
```
But if I add:
```
--fp16
```
(w/ or w/o `--fp16_opt_level O1`)
I get OOM even with bs=1 on an 8GB card, and it barely manages on a 24GB card - I think the increase in memory demand is more than 10x.
The OOM happens either right away during the sanity-check step, or after just 10-20 batches - so within a few seconds.
This is with pytorch-1.6. Same goes for pytorch-1.7 and 1.8-nightly.
I wasn't able to test `--fp16` with pytorch-1.5, since I can't build apex on ubuntu-20.04. Without `--fp16` pytorch-1.5 works the same as pytorch-1.6 gpu memory-wise.
I tested with pytorch-1.5 + apex and there is no problem there. Memory consumption is about half.
Here is the table of the batch sizes that fit into an 8GB RTX 1070 (a bigger BS leads to an instant OOM):
bs | version
---|--------
12 | pt15
20 | pt15+fp16
12 | pt16
1 | pt16+fp16
If you'd like to reproduce the problem here are the full steps:
```
# prep library
git clone https://github.com/huggingface/transformers
cd transformers
pip install -e .[dev]
pip install -r examples/requirements.txt
cd examples/seq2seq
# prep data
wget https://cdn-datasets.huggingface.co/summarization/cnn_dm_v2.tgz
tar -xzvf cnn_dm_v2.tgz # empty lines removed
mv cnn_cln cnn_dm
# run
export BS=12;
rm -rf distilbart-cnn-12-6
python finetune.py --learning_rate=3e-5 --gpus 1 \
--do_train --do_predict --val_check_interval 0.25 --n_val 500 --num_train_epochs 2 --freeze_encoder \
--freeze_embeds --data_dir cnn_dm --max_target_length 142 --val_max_target_length=142 \
--train_batch_size=$BS --eval_batch_size=$BS --gradient_accumulation_steps 1 \
--model_name_or_path sshleifer/student_cnn_12_6 --tokenizer_name facebook/bart-large \
--warmup_steps 500 --output_dir distilbart-cnn-12-6
```
This issue is to track the problem and hopefully finding a solution.
@sshleifer | null | null | https://github.com/pytorch/pytorch/commit/57bffc3a8e4fee0cce31e1ff1f662ccf7b16db57 | {} | [] | [] | [
{
"org": "pytorch",
"pro": "pytorch",
"path": [
"{'base_commit': '57bffc3a8e4fee0cce31e1ff1f662ccf7b16db57', 'files': [{'path': 'aten/src/ATen/autocast_mode.cpp', 'status': 'modified', 'Loc': {\"(None, 'cached_cast', 67)\": {'mod': [71]}}}, {'path': 'test/test_cuda.py', 'status': 'modified', 'Loc'... | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "commit",
"loc_scope": "2",
"info_type": "Code"
} | {
"code": [
"aten/src/ATen/autocast_mode.cpp"
],
"doc": [],
"test": [
"test/test_cuda.py"
],
"config": [],
"asset": [
"pytorch"
]
} | null | |
huggingface | transformers | 1a688709b34b10bd372e3e0860c8d39d170ebf53 | https://github.com/huggingface/transformers/issues/17201 |  | a memory leak in qqp prediction using bart | ### System Info
```shell
- `transformers` version: 4.19.0.dev0
- Platform: Linux-5.11.0-43-generic-x86_64-with-glibc2.17
- Python version: 3.8.10
- Huggingface_hub version: 0.4.0
- PyTorch version (GPU?): 1.10.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
```
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I met the same issue as #11011. If I don't use `--eval_accumulation_steps`, it causes CUDA out of memory. If I use it, it runs out of RAM and is killed by the system.
I only ran prediction on the GLUE QQP dataset using BART without fine-tuning. Since QQP has a large test set (300k examples), prediction got slower and slower, and finally ran out of memory.
This is the script to reproduce:
```
CUDA_VISIBLE_DEVICES=0 python run_glue.py --model_name_or_path facebook/bart-large --task_name qqp --output_dir bart-large_qqp --eval_accumulation_steps 100 --do_predict --per_device_eval_batch_size 24
```
### Expected behavior
```shell
Prediction without running out of memory.
```
| null | null | null | {'base_commit': '1a688709b34b10bd372e3e0860c8d39d170ebf53', 'files': [{'path': 'src/transformers/trainer.py', 'Loc': {"('Trainer', 'evaluation_loop', 2549)": {'mod': [2635]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2\nOr\n5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"src/transformers/trainer.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
huggingface | transformers | cef2e40e0f8eaad13b8d32817a48fdddc32eb2a5 | https://github.com/huggingface/transformers/issues/28435 |  | Skip some weights for load_in_8bit and keep them as fp16/32? | ### Feature request
Hello,
I am looking for a way to load a checkpoint where I only load some of the weights in 8 bit and keep others in 16/32 bit.
### Motivation
My motivation is for vision-language models like Llava or BLIP2 where I want to load the LLM part in 8 bit but the image encoder should stay in 16 bit because I notice performance degradations with CLIP in 8 bit and also want to be able to train this part without LoRA.
As far as I can see in the documentation, issues and with Google (both here and for bitsandbytes), there is currently no way to do this.
### Your contribution
I can in theory help implement something like this but I don't know where and how in the code this should be done. | null | null | null | {'base_commit': 'cef2e40e0f8eaad13b8d32817a48fdddc32eb2a5', 'files': [{'path': 'src/transformers/modeling_utils.py', 'Loc': {"('PreTrainedModel', 'from_pretrained', 2528)": {'mod': [3524]}}, 'status': 'modified'}, {'path': 'src/transformers/utils/quantization_config.py', 'Loc': {"('BitsAndBytesConfig', None, 151)": {'mod': [176]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"src/transformers/modeling_utils.py",
"src/transformers/utils/quantization_config.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
huggingface | transformers | 705ca7f21b2b557e0cfd5d0853b297fa53489d20 | https://github.com/huggingface/transformers/issues/14938 |  | Question: Object of type EncoderDecoderConfig is not JSON serializable | Hi.
An error occurred when I used Trainer to train and save EncoderDecoderModel.
```python
File "/home/jwli/ljw/study/hotpotqa/roberta_seq2seq/roberta_for_seq2seq.py", line 482, in <module>
run(model_args, data_args, training_args)
File "/home/jwli/ljw/study/hotpotqa/roberta_seq2seq/roberta_for_seq2seq.py", line 465, in run
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/trainer.py", line 1391, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/trainer.py", line 1495, in _maybe_log_save_evaluate
self._save_checkpoint(model, trial, metrics=metrics)
File "/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/trainer.py", line 1557, in _save_checkpoint
self.save_model(output_dir)
File "/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/trainer.py", line 1961, in save_model
self._save(output_dir)
File "/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/trainer.py", line 2009, in _save
self.model.save_pretrained(output_dir, state_dict=state_dict)
File "/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/modeling_utils.py", line 1053, in save_pretrained
model_to_save.config.save_pretrained(save_directory)
File "/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/configuration_utils.py", line 416, in save_pretrained
self.to_json_file(output_config_file, use_diff=True)
File "/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/configuration_utils.py", line 739, in to_json_file
writer.write(self.to_json_string(use_diff=use_diff))
File "/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/configuration_utils.py", line 725, in to_json_string
return json.dumps(config_dict, indent=2, sort_keys=True) + "\n"
File "/home/jwli/anaconda3/envs/study/lib/python3.7/json/__init__.py", line 238, in dumps
**kw).encode(obj)
File "/home/jwli/anaconda3/envs/study/lib/python3.7/json/encoder.py", line 201, in encode
chunks = list(chunks)
File "/home/jwli/anaconda3/envs/study/lib/python3.7/json/encoder.py", line 431, in _iterencode
yield from _iterencode_dict(o, _current_indent_level)
File "/home/jwli/anaconda3/envs/study/lib/python3.7/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
File "/home/jwli/anaconda3/envs/study/lib/python3.7/json/encoder.py", line 438, in _iterencode
o = _default(o)
File "/home/jwli/anaconda3/envs/study/lib/python3.7/json/encoder.py", line 179, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type EncoderDecoderConfig is not JSON serializable
```
My model and Config define the following code.
```python
tokenizer = RobertaTokenizerFast.from_pretrained(model_args.tokenizer_name)
encoder_config = RobertaConfig.from_pretrained(model_args.encoder_model_name_or_path)
decoder_config = RobertaConfig.from_pretrained(model_args.decoder_model_name_or_path)
encoder_decoder_config = EncoderDecoderConfig.from_encoder_decoder_configs(encoder_config, decoder_config)
model = RobertaForSeq2Seq.from_encoder_decoder_pretrained(model_args.encoder_model_name_or_path,
model_args.decoder_model_name_or_path,
config=encoder_decoder_config, tie_encoder_decoder=True)
model.config.decoder_start_token_id = tokenizer.bos_token_id
model.config.eos_token_id = tokenizer.eos_token_id
model.config.max_length = 64
model.config.early_stopping = True
model.config.no_repeat_ngram_size = 3
model.config.length_penalty = 2.0
model.config.num_beams = 4
model.config.pad_token_id = tokenizer.pad_token_id
```
This error occurred because the EncoderDecoderConfig cannot be converted to JSON format, but I don't know how to modify it.
```python
ERROR OCCURRED:
if use_diff is True:
config_dict = self.to_diff_dict()
else:
config_dict = self.to_dict()
return json.dumps(config_dict, indent=2, sort_keys=True) + "\n"
```
I look forward to your help! Thanks!
@jplu @patrickvonplaten | null | null | null | {} | [
{
"Loc": [
46,
47
],
"path": null
}
] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "3",
"info_type": "Code"
} | {
"code": [
null
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
huggingface | transformers | 45d21502f0b67eb8a5ad244d469dcc0dfb7517a7 | https://github.com/huggingface/transformers/issues/653 |  | Different Results from version 0.4.0 to version 0.5.0 | Hi, I found that the results after training differ between version 0.4.0 and version 0.5.0. I have fixed all initialization to reproduce the results. I also tested versions 0.2.0 and 0.3.0; their results match version 0.4.0, but from version 0.5.0 onward the results are different. I am wondering whether you have trained a new model, so that the weights changed? | null | null | null | {'base_commit': '45d21502f0b67eb8a5ad244d469dcc0dfb7517a7', 'files': [{'path': 'pytorch_pretrained_bert/modeling.py', 'Loc': {"('BertPreTrainedModel', 'init_bert_weights', 515)": {'mod': []}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"pytorch_pretrained_bert/modeling.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
huggingface | transformers | 1c8c2d9ab34b8c8d326db9e0608f8e54cfccb885 | https://github.com/huggingface/transformers/issues/10202 |  | Fast Tokenizers instantiated via vocab/merge files do not respect skip_special_tokens=True | ## Environment info
- `transformers` version: 4.3.2
- Platform: macOS-11.2.1-x86_64-i386-64bit
- Python version: 3.9.1
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
## Information
See title; this issue does not reproduce with slow tokenizers, nor does it reproduce with serialized tokenizers.
Found while investigating https://github.com/minimaxir/aitextgen/issues/88
## To reproduce
Using [gpt2_merges.txt](https://github.com/minimaxir/aitextgen/blob/master/aitextgen/static/gpt2_merges.txt) and [gpt2_vocab.json](https://github.com/minimaxir/aitextgen/blob/master/aitextgen/static/gpt2_vocab.json) as linked:
```py
from transformers import AutoModelForCausalLM, GPT2Tokenizer, GPT2TokenizerFast
model = AutoModelForCausalLM.from_pretrained("distilgpt2")
outputs = model.generate(max_length=40)
# tensor([[50256, 383, 471, 13, 50, 13, 2732, 286, 4796, 468,
# 587, 10240, 262, 1918, 286, 257, 1966, 5349, 5797, 508,
# 373, 2823, 290, 2923, 416, 257, 23128, 287, 262, 471,
# 13, 50, 13, 13241, 319, 3583, 13, 198, 198, 198]])
tokenizer_fast = GPT2TokenizerFast(vocab_file="gpt2_vocab.json", merges_file="gpt2_merges.txt")
tokenizer_fast.decode(outputs[0], skip_special_tokens=True)
# '<|endoftext|> The U.S. Department of Justice has been investigating the death of a former FBI agent who was shot and killed by a gunman in the U.S. Capitol on Wednesday.\n\n\n'
tokenizer_slow = GPT2Tokenizer(vocab_file="gpt2_vocab.json", merges_file="gpt2_merges.txt")
tokenizer_slow.decode(outputs[0], skip_special_tokens=True)
# ' The U.S. Department of Justice has been investigating the death of a former FBI agent who was shot and killed by a gunman in the U.S. Capitol on Wednesday.\n\n\n'
```
| null | null | null | {'base_commit': '1c8c2d9ab34b8c8d326db9e0608f8e54cfccb885', 'files': [{'path': 'src/transformers/tokenization_utils_base.py', 'Loc': {"('SpecialTokensMixin', 'add_special_tokens', 900)": {'mod': []}}, 'status': 'modified'}, {'Loc': [33], 'path': None}]} | [
{
"Loc": [
33
],
"path": null
}
] | [] | [] | {
"iss_type": "2",
"iss_reason": "5",
"loc_way": "Comment points out the problem in the user's code and gives the API that needs to be used\nProblem in the user's own code; another issue points to the commit\nI think this is happening because when you load it from the vocab and merge files, it doesn't know <|endoftext|> is a special token. For the skip_special_tokens to work, I believe it would be necessary to add them to the tokenizer:\ntokenizer_fast.add_special_tokens({\n    \"additional_special_tokens\": \"<|endoftext|>\"\n})\n",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"src/transformers/tokenization_utils_base.py",
null
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
huggingface | transformers | 5bcbdff15922b1d0eeb035879630ca61c292122a | https://github.com/huggingface/transformers/issues/32661 | bug | RoBERTa config defaults are inconsistent with fairseq implementation | ### System Info
python 3.12, transformers 4.14, latest mac os
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import RobertaConfig

my_config = RobertaConfig()
roberta_config = RobertaConfig.from_pretrained("roberta-base")
assert (
    my_config.max_position_embeddings == roberta_config.max_position_embeddings
), "%d %d" % (my_config.max_position_embeddings, roberta_config.max_position_embeddings)
```
### Expected behavior
The config defaults should correspond to the base model?
This is an implementation detail, but it did send me on a debugging spree, as it hid behind a sticky CUDA assertion error.
```Assertion `srcIndex < srcSelectDimSize` failed```
The problem is that, by default, whether you create the position_ids yourself or let transformers' RoBERTa modeling take care of it (it also does it the way fairseq implemented it), it will create indices that are out of bounds with the default configuration, because everything is shifted by pad_token_id.
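A stdlib-only sketch of the arithmetic (assuming fairseq-style position ids as described above, where positions start at `pad_token_id + 1`): with the stock `RobertaConfig()` default of 512 position embeddings, a full 512-token sequence already indexes past the table, while the `roberta-base` checkpoint ships with 514:

```python
# Numbers matching roberta-base: pad_token_id=1, full-length sequence of 512.
pad_token_id = 1
default_table_size = 512      # RobertaConfig() default (BERT-style)
checkpoint_table_size = 514   # roberta-base checkpoint value
seq_len = 512

# fairseq-style positions for a fully non-padded sequence: pad_token_id+1, ...
position_ids = [pad_token_id + 1 + i for i in range(seq_len)]
max_index = max(position_ids)                    # 513

print(max_index >= default_table_size)           # True  -> triggers the CUDA index assert
print(max_index >= checkpoint_table_size)        # False -> fine with 514 embeddings
```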
This is more of a heads up. Do transformers generally provide defaults aligned with the original models, or are the defaults here meant to be agnostic of that? | null | null | null | {'base_commit': '5bcbdff15922b1d0eeb035879630ca61c292122a', 'files': [{'path': 'src/transformers/models/roberta/configuration_roberta.py', 'Loc': {"('RobertaConfig', None, 29)": {'mod': [59]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "2",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"src/transformers/models/roberta/configuration_roberta.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
geekan | MetaGPT | f0df3144d68ed288f5ccce0c34d3939f8462ba98 | https://github.com/geekan/MetaGPT/issues/1345 | Not able to run any MetaGPT examples | Referred to issue #1322, but I was not able to resolve the problem. I added an Azure-based API endpoint and API key in config2.yaml
│ 105 │ │ typer.echo("Missing argument 'IDEA'. Run 'metagpt --help' for more information." │
│ 106 │ │ raise typer.Exit() │
│ 107 │ │
│ ❱ 108 │ return generate_repo( │
│ 109 │ │ idea, │
│ 110 │ │ investment, │
│ 111 │ │ n_round, │
│ │
\metagpt\software_company.py:30 in generate_repo │
│ │
│ 27 │ recover_path=None, │
│ 28 ) -> ProjectRepo: │
│ 29 │ """Run the startup logic. Can be called from CLI or other Python scripts.""" │
│ ❱ 30 │ from metagpt.config2 import config │
│ 31 │ from metagpt.context import Context │
│ 32 │ from metagpt.roles import ( │
│ 33 │ │ Architect, │
│ │
\new_meta_env\Lib\site-packages\metagpt-0.8.1-py3.11.egg\metagpt\ │
│ config2.py:164 in <module> │
│ │
│ 161 │ return result │
│ 162 │
│ 163 │
│ ❱ 164 config = Config.default() │
\new_meta_env\Lib\site-packages\metagpt-0.8.1-py3.11.egg\metagpt\ │
│ config2.py:106 in default │
│ │
│ 103 │ │ dicts = [dict(os.environ)] │
│ 104 │ │ dicts += [Config.read_yaml(path) for path in default_config_paths] │
│ 105 │ │ final = merge_dict(dicts) │
│ ❱ 106 │ │ return Config(**final) │
│ 107 │ │
│ 108 │ @classmethod │
│ 109 │ def from_llm_config(cls, llm_config: dict): │
│ │
\new_meta_env\Lib\site-packages\pydantic\main.py:176 in __init__ │
│ │
│ 173 │ │ """ │
│ 174 │ │ # `__tracebackhide__` tells pytest and some other tools to omit this function fr │
│ 175 │ │ __tracebackhide__ = True │
│ ❱ 176 │ │ self.__pydantic_validator__.validate_python(data, self_instance=self) │
│ 177 │ │
│ 178 │ # The following line sets a flag that we use to determine when `__init__` gets overr │
│ 179 │ __init__.__pydantic_base_init__ = True # pyright: ignore[reportFunctionMemberAccess │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
ValidationError: 1 validation error for Config
llm
Field required [type=missing, input_value={'ALLUSERSPROFILE': 'C:\\..._INIT_AT_FORK': 'FALSE'}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.7/v/missing | null | null | null | {'base_commit': 'f0df3144d68ed288f5ccce0c34d3939f8462ba98', 'files': [{'path': 'config/config2.yaml', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Config"
} | {
"code": [],
"doc": [],
"test": [],
"config": [
"config/config2.yaml"
],
"asset": []
} | null | |
geekan | MetaGPT | e43aaec9322054f4dec92f44627533816588663b | https://github.com/THUDM/ChatGLM-6B/issues/622 | Does MetaGPT support vector data for building our own knowledge base? | Does MetaGPT support vector data for building our own knowledge base? | null | null | null | {'base_commit': 'e43aaec9322054f4dec92f44627533816588663b', 'files': [{'path': '/metagpt/document_store', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [],
"doc": [
"/metagpt/document_store"
],
"test": [],
"config": [],
"asset": []
} | null | |
geekan | MetaGPT | be56351e000a0f08562820fb04f6fdbe34d9e655 | https://github.com/geekan/MetaGPT/issues/205 | Rate Limited error | openai.error.RateLimitError: Rate limit reached for 10KTPM-200RPM in organization org-fK5bb25UFhVbebfBtfCejGc4 on tokens per min. Limit: 10000 / min. Please try again in 6ms. Contact us through our help center at help.openai.com if you continue to have issues.
Maybe a way to resume so all the runtime isn't just lost? | null | null | null | {'base_commit': 'be56351e000a0f08562820fb04f6fdbe34d9e655', 'files': [{'path': 'metagpt/provider/openai_api.py', 'Loc': {"('OpenAIGPTAPI', '_achat_completion_stream', 150)": {'mod': []}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"metagpt/provider/openai_api.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
geekan | MetaGPT | fd7feb57fac8d37509b1325cad502d2f65d59956 | https://github.com/geekan/MetaGPT/issues/1553 | inactive | ValueError: Creator not registered for key: LLMType.OLLAMA | **Bug description**
<!-- Clearly and directly describe the current bug -->
I am using ***MetaGPT ver 0.8.1***, but when using RAG via the method **SimpleEngine.from_docs**, I get the error ***ValueError: Creator not registered for key: LLMType.OLLAMA***.
<!-- **Bug solved method** -->
<!-- If you solved the bug, describe the idea or process to solve the current bug. Of course, you can also paste the URL address of your Pull Request. -->
<!-- If not, provide more auxiliary information to facilitate our further positioning and investigation -->
**Environment information**
<!-- Environment:System version (like ubuntu 22.04), Python version (conda python 3.7), LLM type and model (OpenAI gpt-4-1106-preview) -->
- LLM type and model name: ollama and model: hf.co/hugging-quants/Llama-3.2-3B-Instruct-Q8_0-GGUF
- System version:
- Python version: 3.10
- MetaGPT version or branch: 0.8.1
<!-- Dependent packagess:the packages version cause the bug(like `pydantic 1.10.8`), installation method(like `pip install metagpt` or `pip install from source` or `run in docker`) -->
- packages version:
- installation method:
**Screenshots or logs**
<!-- Screenshots or logs of the bug can help us understand the problem more quickly -->
***config2.yaml***
```yaml
embedding:
  api_type: "ollama"
  model: "hf.co/hugging-quants/Llama-3.2-3B-Instruct-Q8_0-GGUF"
  base_url: "http://127.0.0.1:11434/api"
llm:
  api_type: "ollama"
  model: "hf.co/hugging-quants/Llama-3.2-3B-Instruct-Q8_0-GGUF"
  base_url: "http://127.0.0.1:11434/api"
```
***Error Response***
[/usr/local/lib/python3.10/dist-packages/metagpt/rag/factories/base.py](https://localhost:8080/#) in get_instance(self, key, **kwargs)
27 return creator(**kwargs)
28
---> 29 raise ValueError(f"Creator not registered for key: {key}")
30
31
ValueError: Creator not registered for key: LLMType.OLLAMA
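The traceback shows a registry lookup (`return creator(**kwargs)` falling through to the `ValueError`). The snippet below is a generic sketch of that factory pattern, consistent with the traceback but not the actual `metagpt.rag.factories.base` source — the key must be registered with a creator before it can be resolved:

```python
class Factory:
    """Generic registry-based factory, sketched to illustrate the error path."""

    def __init__(self):
        self._creators = {}

    def register(self, key, creator):
        self._creators[key] = creator

    def get_instance(self, key, **kwargs):
        creator = self._creators.get(key)
        if creator:
            return creator(**kwargs)
        raise ValueError(f"Creator not registered for key: {key}")
```

In MetaGPT 0.8.1, the embedding factory apparently has no creator registered for `LLMType.OLLAMA`, which is why the lookup falls through to the `ValueError`.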
| null | null | null | {} | [
{
"path": "config/config2.yaml",
"Loc": [
28
]
}
] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "3",
"info_type": "Config"
} | {
"code": [],
"doc": [],
"test": [],
"config": [
"config/config2.yaml"
],
"asset": []
} | null |
geekan | MetaGPT | df8d1124c68b62bb98c71b6071abf5efe6293dba | https://github.com/geekan/MetaGPT/issues/15 | How do I configure it to use the API on Azure? | Hello,
I see that the docs require configuring an OpenAI key, but I noticed there are azure_api-related files under provider.
Is there somewhere I can configure it to use the service provided by Azure?
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Config"
} | {
"code": [],
"doc": [],
"test": [],
"config": [
"config/config.yaml"
],
"asset": []
} | null | |
geekan | MetaGPT | dfa33fcdaade1e4f8019835bf065d372d76724ae | https://github.com/geekan/MetaGPT/issues/924 | GLM4 keeps erroring | 2024-02-22 16:50:26.666 | ERROR | metagpt.utils.common:log_it:476 - Finished call to 'metagpt.actions.action_node.ActionNode._aask_v1' after 80.109(s), this was the 5th time calling it. exp: 1 validation error for PM_NODE_AN
Value error, Missing fields: {'Full API spec', 'Required Python packages', 'Required Other language third-party packages'} [type=value_error, input_value={'Required JavaScript pac...ation and development.'}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.5/v/value_error | null | null | null | {'base_commit': 'dfa33fcdaade1e4f8019835bf065d372d76724ae', 'files': [{'path': 'config/config2.yaml', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Config\nCode"
} | {
"code": [],
"doc": [],
"test": [],
"config": [
"config/config2.yaml"
],
"asset": []
} | null | |
geekan | MetaGPT | 80a189ad4a1546f8c1a9dbe00c42725868c35e5e | https://github.com/geekan/MetaGPT/issues/135 | failed to launch chromium browser process errors | get errors on launch of browser process; below is the error from terminal which happens for all browser processes trying to launch.
```
INFO | metagpt.utils.mermaid:mermaid_to_file:38 - Generating /Users/lopezdp/DevOps/Ai_MetaGPT/workspace/test_app/resources/competitive_analysis.pdf..
Error: Failed to launch the browser process! spawn /usr/bin/chromium ENOENT
TROUBLESHOOTING: https://pptr.dev/troubleshooting
at ChildProcess.onClose (file:///Users/lopezdp/DevOps/Ai_MetaGPT/node_modules/@puppeteer/browsers/lib/esm/launch.js:253:24)
at ChildProcess.emit (node:events:513:28)
at Process.ChildProcess._handle.onexit (node:internal/child_process:291:12)
at onErrorNT (node:internal/child_process:485:16)
at processTicksAndRejections (node:internal/process/task_queues:83:21)
```
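The `spawn /usr/bin/chromium ENOENT` means puppeteer was pointed at a Chromium binary that does not exist at that path. A hedged sketch of a `puppeteer-config.json` that points it at a browser actually installed on the machine (the path below is a placeholder for a macOS Chrome install — adjust it to your system):

```json
{
  "executablePath": "/Applications/Google Chrome.app/Contents/MacOS/Google Chrome",
  "args": ["--no-sandbox"]
}
```

`executablePath` and `args` are standard puppeteer launch options; how the config file is wired in depends on the caller (here, the mermaid rendering step).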
| null | null | null | {'base_commit': '80a189ad4a1546f8c1a9dbe00c42725868c35e5e', 'files': [{'path': 'config/puppeteer-config.json', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Config"
} | {
"code": [
"config/puppeteer-config.json"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
geekan | MetaGPT | 8d98ce34e54eb6250f1f2cf60f5d4dd66d462a5d | https://github.com/geekan/MetaGPT/issues/1115 | The following error appears on every run | 
2024-03-27 11:15:59.019 | ERROR | metagpt.utils.common:wrapper:631 - Exception occurs, start to serialize the project, exp:
Traceback (most recent call last):
File "D:\andconda\envs\metagpt\lib\site-packages\tenacity\__init__.py", line 382, in __call__
result = fn(*args, **kwargs)
File "d:\下载\metagpt-main\metagpt\utils\repair_llm_raw_output.py", line 296, in retry_parse_json_text
parsed_data = CustomDecoder(strict=False).decode(output)
json.decoder.JSONDecodeError: Unterminated string starting at: line 13 column 25 (char 3485)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "D:\andconda\envs\metagpt\lib\site-packages\tenacity\_asyncio.py", line 50, in __call__
result = await fn(*args, **kwargs)
File "d:\下载\metagpt-main\metagpt\actions\action_node.py", line 425, in _aask_v1
parsed_data = llm_output_postprocess(
tenacity.RetryError: RetryError[<Future at 0x1f1a7f31d30 state=finished raised JSONDecodeError>]
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "d:\下载\metagpt-main\metagpt\utils\common.py", line 640, in wrapper
return await func(self, *args, **kwargs)
File "d:\下载\metagpt-main\metagpt\roles\role.py", line 550, in run
rsp = await self.react()
tenacity.RetryError: RetryError[<Future at 0x1f1a7f31160 state=finished raised RetryError>]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "d:\下载\metagpt-main\metagpt\utils\common.py", line 626, in wrapper
result = await func(self, *args, **kwargs)
File "d:\下载\metagpt-main\metagpt\team.py", line 134, in run
await self.env.run()
Exception: Traceback (most recent call last):
File "D:\andconda\envs\metagpt\lib\site-packages\tenacity\__init__.py", line 382, in __call__
result = fn(*args, **kwargs)
File "d:\下载\metagpt-main\metagpt\utils\repair_llm_raw_output.py", line 296, in retry_parse_json_text
parsed_data = CustomDecoder(strict=False).decode(output)
File "d:\下载\metagpt-main\metagpt\utils\custom_decoder.py", line 297, in decode
return super().decode(s)
File "D:\andconda\envs\metagpt\lib\json\decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "D:\andconda\envs\metagpt\lib\json\decoder.py", line 353, in raw_decode
obj, end = self.scan_once(s, idx)
File "d:\下载\metagpt-main\metagpt\utils\custom_decoder.py", line 65, in scan_once
return _scan_once(string, idx)
File "d:\下载\metagpt-main\metagpt\utils\custom_decoder.py", line 36, in _scan_once
return parse_object((string, idx + 1), strict, _scan_once, object_hook, object_pairs_hook, memo)
File "d:\下载\metagpt-main\metagpt\utils\custom_decoder.py", line 164, in JSONObject
value, end = scan_once(s, end)
File "d:\下载\metagpt-main\metagpt\utils\custom_decoder.py", line 34, in _scan_once
return parse_string(string, idx + 1, strict, delimiter=nextchar)
File "d:\下载\metagpt-main\metagpt\utils\custom_decoder.py", line 227, in py_scanstring
raise JSONDecodeError("Unterminated string starting at", s, begin)
json.decoder.JSONDecodeError: Unterminated string starting at: line 13 column 25 (char 3485)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "D:\andconda\envs\metagpt\lib\site-packages\tenacity\_asyncio.py", line 50, in __call__
result = await fn(*args, **kwargs)
File "d:\下载\metagpt-main\metagpt\actions\action_node.py", line 425, in _aask_v1
parsed_data = llm_output_postprocess(
File "d:\下载\metagpt-main\metagpt\provider\postprocess\llm_output_postprocess.py", line 19, in llm_output_postprocess
result = postprocess_plugin.run(output=output, schema=schema, req_key=req_key)
File "d:\下载\metagpt-main\metagpt\provider\postprocess\base_postprocess_plugin.py", line 68, in run
new_output = self.run_repair_llm_output(output=output, schema=schema, req_key=req_key)
File "d:\下载\metagpt-main\metagpt\provider\postprocess\base_postprocess_plugin.py", line 32, in run_repair_llm_output
parsed_data = self.run_retry_parse_json_text(content)
File "d:\下载\metagpt-main\metagpt\provider\postprocess\base_postprocess_plugin.py", line 47, in run_retry_parse_json_text
parsed_data = retry_parse_json_text(output=content) # should use output=content
File "D:\andconda\envs\metagpt\lib\site-packages\tenacity\__init__.py", line 289, in wrapped_f
return self(f, *args, **kw)
File "D:\andconda\envs\metagpt\lib\site-packages\tenacity\__init__.py", line 379, in __call__
do = self.iter(retry_state=retry_state)
File "D:\andconda\envs\metagpt\lib\site-packages\tenacity\__init__.py", line 326, in iter
raise retry_exc from fut.exception()
tenacity.RetryError: RetryError[<Future at 0x1f1a7f31d30 state=finished raised JSONDecodeError>]
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "d:\下载\metagpt-main\metagpt\utils\common.py", line 640, in wrapper
return await func(self, *args, **kwargs)
File "d:\下载\metagpt-main\metagpt\roles\role.py", line 550, in run
rsp = await self.react()
File "d:\下载\metagpt-main\metagpt\roles\role.py", line 517, in react
rsp = await self._react()
File "d:\下载\metagpt-main\metagpt\roles\role.py", line 463, in _react
rsp = await self._act()
File "d:\下载\metagpt-main\metagpt\roles\role.py", line 392, in _act
response = await self.rc.todo.run(self.rc.history)
File "d:\下载\metagpt-main\metagpt\actions\design_api.py", line 58, in run
doc = await self._update_system_design(filename=filename)
File "d:\下载\metagpt-main\metagpt\actions\design_api.py", line 86, in _update_system_design
system_design = await self._new_system_design(context=prd.content)
File "d:\下载\metagpt-main\metagpt\actions\design_api.py", line 73, in _new_system_design
node = await DESIGN_API_NODE.fill(context=context, llm=self.llm)
File "d:\下载\metagpt-main\metagpt\actions\action_node.py", line 505, in fill
return await self.simple_fill(schema=schema, mode=mode, images=images, timeout=timeout, exclude=exclude)
File "d:\下载\metagpt-main\metagpt\actions\action_node.py", line 457, in simple_fill
content, scontent = await self._aask_v1(
File "D:\andconda\envs\metagpt\lib\site-packages\tenacity\_asyncio.py", line 88, in async_wrapped
return await fn(*args, **kwargs)
File "D:\andconda\envs\metagpt\lib\site-packages\tenacity\_asyncio.py", line 47, in __call__
do = self.iter(retry_state=retry_state)
File "D:\andconda\envs\metagpt\lib\site-packages\tenacity\__init__.py", line 326, in iter
raise retry_exc from fut.exception()
tenacity.RetryError: RetryError[<Future at 0x1f1a7f31160 state=finished raised RetryError>] | null | null | null | {'base_commit': '8d98ce34e54eb6250f1f2cf60f5d4dd66d462a5d', 'files': [{'path': 'metagpt/strategy/planner.py', 'Loc': {"('Planner', 'update_plan', 68)": {'mod': [75]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"metagpt/strategy/planner.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
geekan | MetaGPT | bdf9d224b5a05228897553a29214adc074fbc465 | https://github.com/geekan/MetaGPT/issues/754 | SubscriptionRunner | import asyncio
from metagpt.subscription import SubscriptionRunner
from metagpt.roles import Searcher
from metagpt.schema import Message
async def trigger():
while True:
yield Message("the latest news about OpenAI")
await asyncio.sleep(1)
async def callback(msg: Message):
print(msg.content)
# async def main():
# aa = trigger()
# async for i in aa:
# await callback(i)
async def main():
pd = SubscriptionRunner()
await pd.subscribe(Searcher(), trigger(), callback)
await pd.run()
asyncio.run(main())
在创建Runner时候报错,0.6.3版本
Traceback (most recent call last):
File "e:\tmp\metatest\OSSWatcher .py", line 44, in <module>
asyncio.run(main())
File "C:\Users\888888\.conda\envs\mp\Lib\asyncio\runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "C:\Users\uweih034\.conda\envs\mp\Lib\asyncio\runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\888888\.conda\envs\mp\Lib\asyncio\base_events.py", line 653, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "e:\tmp\metatest\OSSWatcher .py", line 40, in main
pd = SubscriptionRunner()
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\888888\.conda\envs\mp\Lib\site-packages\pydantic\main.py", line 164, in __init__
__pydantic_self__.__pydantic_validator__.validate_python(data, self_instance=__pydantic_self__)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\888888\.conda\envs\mp\Lib\site-packages\pydantic\_internal\_mock_val_ser.py", line 47, in __getattr__
raise PydanticUserError(self._error_message, code=self._code)
pydantic.errors.PydanticUserError: `SubscriptionRunner` is not fully defined; you should define `Environment`, then call `SubscriptionRunner.model_rebuild()`.
For further information visit https://errors.pydantic.dev/2.5/u/class-not-fully-defined | null | null | null | {'base_commit': 'bdf9d224b5a05228897553a29214adc074fbc465', 'files': [{'path': 'metagpt/environment.py', 'Loc': {"('Environment', None, 27)": {'mod': []}}, 'status': 'modified'}, {'Loc': [21], 'path': None}]} | [
{
"Loc": [
21
],
"path": null
}
] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "3",
"info_type": "Code"
} | {
"code": [
null,
"metagpt/environment.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
geekan | MetaGPT | f88fa9e2df09c28f867bda54ec24fa25b50be830 | https://github.com/geekan/MetaGPT/issues/178 | Specify Directory of pdf documents as Knowledge Base | Hi, how can we specify any folder which includes pdf documents as a knowledge base and create a new Role of Document Controller to extract specific information from within the documents in KB?
Any help would be highly appreciated
Thanks much appreciated | null | null | null | {'base_commit': 'f88fa9e2df09c28f867bda54ec24fa25b50be830', 'files': [{'path': 'metagpt/document_store', 'Loc': {}}, {'path': 'tests/metagpt/document_store', 'Loc': {}}, {'path': 'examples/search_kb.py', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"examples/search_kb.py"
],
"doc": [
"metagpt/document_store",
"tests/metagpt/document_store"
],
"test": [],
"config": [],
"asset": []
} | null | |
langflow-ai | langflow | 7e756b9db56677636e6920c1e6628d13e980aec7 | https://github.com/langflow-ai/langflow/issues/6006 | bug | All custom components throw errors after update to latest version | ### Bug Description
```
[01/29/25 00:15:00] ERROR 2025-01-29 00:15:00 - ERROR - chat - Error building vertices: Error serializing vertex build response: Unable to serialize unknown type: chat.py:405
<class 'pydantic._internal._model_construction.ModelMetaclass'>
```
### Reproduction
1. langflow updated to v1.1.2 from v1.1.1
2. all previously created custom components throwing error:
[01/29/25 00:24:09] ERROR 2025-01-29 00:24:09 - ERROR - chat - Error building vertices: Error serializing vertex build response: Unable to serialize unknown type: chat.py:405
<class 'pydantic._internal._model_construction.ModelMetaclass'>
### Expected behavior
Langflow should build tool correctly, as on previous version.
Simplified failing code:
```python
from langflow.custom import Component
from langflow.io import Output
from langflow.schema import Data
from langflow.field_typing import Tool
from langchain.tools import StructuredTool
from pydantic import BaseModel, Field
class MinimalSchema(BaseModel):
    input_text: str = Field(..., description="Text Input")


class SimpleToolComponentMinimalSchema(Component):
    display_name = "Simple Tool Minimal Schema Test"
    description = "Component with StructuredTool and minimal schema"
    outputs = [Output(display_name="Tool", name="test_tool", method="build_tool")]

    class MinimalSchema(BaseModel):  # Define inner schema
        input_text: str = Field(..., description="Text Input")

    def build_tool(self) -> Tool:
        return StructuredTool.from_function(  # Return directly - simplified
            name="minimal_tool",
            description="Minimal tool for testing schema",
            func=self.run_tool,
            args_schema=SimpleToolComponentMinimalSchema.MinimalSchema,
        )

    def run_tool(self, input_text: str) -> str:
        return f"Tool received: {input_text}"
```
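A framework-free sketch of the failure mode (an assumption about the root cause, not traced through Langflow's serializer): the tool's `args_schema` is a pydantic model *class* — i.e. an instance of a metaclass — and class objects are not serializable by JSON-style encoders:

```python
import json

class Meta(type):
    """Stand-in for pydantic's ModelMetaclass."""

class MinimalSchema(metaclass=Meta):
    """Stand-in for the BaseModel subclass passed as args_schema."""

try:
    json.dumps(MinimalSchema)   # serializing the class itself, not an instance
except TypeError as exc:
    print(type(exc).__name__)   # TypeError
```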
### Who can help?
_No response_
### Operating System
wsl Ubuntu latest
### Langflow Version
1.1.2
### Python Version
3.12
### Screenshot
_No response_
### Flow File
_No response_ | null | null | null | {} | [
{
"Loc": [
40
],
"path": null
}
] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "3",
"info_type": "Code"
} | {
"code": [
null
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
langflow-ai | langflow | 19818db68b507332be71f30dd90d16bf4c7d6f83 | https://github.com/langflow-ai/langflow/issues/3718 | enhancement | Add pgVector in the building instructions for the PostgreSQL Docker image | ### Feature Request
Include the pgVector component with the Docker build instructions. This would provide the use with a fully functional PostgreSQL Vector DB, ready to be used inside LangFlow.
### Motivation
I am not a programmer, neither I do have proper knowledge of SQL, but I liked to play with some RAG ideas and LangFlow seems perfect.
So, after installing the Docker version for development of LangFlow, I noticed that the PostgreSQL server is missing the pgVector component, or at least that is what I understood from the error messages.
Perhaps it would be useful if pgVector could be included in the Docker container, so the user would just have to activate it on the SQL database. Anyway, I might be wrong, so in that case please forgive me.
### Your Contribution
After looking into the repository and searching around, with the help of AI (of course!), I found that the Docker instructions for the PostgreSQL server are defined inside the file \docker\cdk.Dockerfile (hope it's correct), and these might be the instructions to include pgVector:
```
FROM --platform=linux/amd64 python:3.10-slim
WORKDIR /app
# Install Poetry and build dependencies
RUN apt-get update && apt-get install -y \
gcc \
g++ \
curl \
build-essential \
git \
postgresql-server-dev-all \
&& rm -rf /var/lib/apt/lists/*
# Install Poetry
RUN curl -sSL https://install.python-poetry.org | python3 -
# Add Poetry to PATH
ENV PATH="${PATH}:/root/.local/bin"
# Copy the pyproject.toml and poetry.lock files
COPY poetry.lock pyproject.toml ./
# Copy the rest of the application codes
COPY ./ ./
# Install dependencies
RUN poetry config virtualenvs.create false && poetry install --no-interaction --no-ansi
# Install pgvector extension
RUN git clone https://github.com/pgvector/pgvector.git /tmp/pgvector && \
cd /tmp/pgvector && \
make && \
make install && \
rm -rf /tmp/pgvector
# Install additional dependencies
RUN poetry add botocore
RUN poetry add pymysql
# Command to run your application
CMD ["sh", "./container-cmd-cdk.sh"]
```
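One gap worth noting (hedged — based on pgvector's documented install flow, not on this repo's scripts): building and `make install`-ing pgvector only puts the extension files on disk; each database that should use it still has to activate the extension once:

```sql
-- Run once per database that should use pgvector:
CREATE EXTENSION IF NOT EXISTS vector;
```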
| null | null | null | {'base_commit': '19818db68b507332be71f30dd90d16bf4c7d6f83', 'files': [{'path': 'docker_example/docker-compose.yml', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "3\nor\n4",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Config\nCode"
} | {
"code": [],
"doc": [
"docker_example/docker-compose.yml"
],
"test": [],
"config": [],
"asset": []
} | null |
langflow-ai | langflow | 2ddd7735129b0f35fd617f2634d35a3690b06630 | https://github.com/langflow-ai/langflow/issues/4528 | bug | Can't access flow directly by link | ### Bug Description
When you try to access a flow using it's URL (ex. http://localhost:55650/flow/0b95342f-6ce4-43d0-9d60-c28bf66a3781), the page doesn't load and in the browser's console is shown the following message: ``Uncaught SyntaxError: Unexpected token '<' (at index-DK9323ab.js:1:1)``. I think that this problem is related to #1182 .
Navigating to this flow through the main page works fine. If I reload the page, it doesn't load, as described before.
### Reproduction
1. Run the Docker image langflowui/langflow
2. Open the langflow main page
3. Creates a new flow
4. Copy the flow link into a new tab or just reload the page
### Expected behavior
To open the flow editor page.
### Who can help?
_No response_
### Operating System
Docker image (langflowai/langflow) running in K8s
### Langflow Version
1.0.19
### Python Version
None
### Screenshot
Instead of loading the JS file, is loaded the HTML file as shown in the following picture:

All requests in this image loads the same HTML.
### Flow File
_No response_ | null | null | null | {'base_commit': '2ddd7735129b0f35fd617f2634d35a3690b06630', 'files': [{'path': 'Version', 'Loc': {}}, {'path': 'Version', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [],
"doc": [],
"test": [],
"config": [],
"asset": [
"Version"
]
} | null |
langflow-ai | langflow | ed53fcd3b042ecb5ed04c9c4562c459476bd6763 | https://github.com/langflow-ai/langflow/issues/3896 | bug | redis.exceptions.ResponseError: unknown command 'module' | ### Bug Description
redis.exceptions.ResponseError: unknown command 'module'
https://github.com/user-attachments/assets/32ea6046-d5f1-4d85-96b5-41d381776986
### Reproduction
Add a redis click run error, see the video
### Expected behavior
ResponseError: unknown command 'MODULE'
### Who can help?
_No response_
### Operating System
windows
### Langflow Version
1.0.18
### Python Version
3.11
### Screenshot
_No response_
### Flow File
_No response_ | null | null | null | {'base_commit': 'ed53fcd3b042ecb5ed04c9c4562c459476bd6763', 'files': [{'path': 'Version', 'Loc': {}}, {'path': 'Version', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "",
"info_type": "Code"
} | {
"code": [],
"doc": [],
"test": [],
"config": [],
"asset": [
"Version"
]
} | null |
langflow-ai | langflow | 7d400903644230a8842ce189ca904ea9f8048b07 | https://github.com/langflow-ai/langflow/issues/1239 | bug | cannot import name 'DEFAULT_CONNECTION_STRING' in v0.6.3a5 |
```
% git branch
* (HEAD detached at v0.6.3a5)
dev
% cd docker_example
% docker compose up
[+] Running 1/0
✔ Container docker_example-langflow-1 Created 0.0s
Attaching to langflow-1
langflow-1 | Traceback (most recent call last):
langflow-1 | File "/home/user/.local/bin/langflow", line 5, in <module>
langflow-1 | from langflow.__main__ import main
langflow-1 | File "/home/user/.local/lib/python3.10/site-packages/langflow/__init__.py", line 5, in <module>
langflow-1 | from langflow.processing.process import load_flow_from_json
langflow-1 | File "/home/user/.local/lib/python3.10/site-packages/langflow/processing/process.py", line 10, in <module>
langflow-1 | from langflow.graph import Graph
langflow-1 | File "/home/user/.local/lib/python3.10/site-packages/langflow/graph/__init__.py", line 2, in <module>
langflow-1 | from langflow.graph.graph.base import Graph
langflow-1 | File "/home/user/.local/lib/python3.10/site-packages/langflow/graph/graph/base.py", line 7, in <module>
langflow-1 | from langflow.graph.graph.constants import lazy_load_vertex_dict
langflow-1 | File "/home/user/.local/lib/python3.10/site-packages/langflow/graph/graph/constants.py", line 1, in <module>
langflow-1 | from langflow.graph.vertex import types
langflow-1 | File "/home/user/.local/lib/python3.10/site-packages/langflow/graph/vertex/types.py", line 5, in <module>
langflow-1 | from langflow.graph.vertex.base import Vertex
langflow-1 | File "/home/user/.local/lib/python3.10/site-packages/langflow/graph/vertex/base.py", line 9, in <module>
langflow-1 | from langflow.interface.initialize import loading
langflow-1 | File "/home/user/.local/lib/python3.10/site-packages/langflow/interface/initialize/loading.py", line 17, in <module>
langflow-1 | from langflow.interface.custom_lists import CUSTOM_NODES
langflow-1 | File "/home/user/.local/lib/python3.10/site-packages/langflow/interface/custom_lists.py", line 7, in <module>
langflow-1 | from langflow.interface.agents.custom import CUSTOM_AGENTS
langflow-1 | File "/home/user/.local/lib/python3.10/site-packages/langflow/interface/agents/__init__.py", line 1, in <module>
langflow-1 | from langflow.interface.agents.base import AgentCreator
langflow-1 | File "/home/user/.local/lib/python3.10/site-packages/langflow/interface/agents/base.py", line 5, in <module>
langflow-1 | from langflow.custom.customs import get_custom_nodes
langflow-1 | File "/home/user/.local/lib/python3.10/site-packages/langflow/custom/customs.py", line 1, in <module>
langflow-1 | from langflow.template import frontend_node
langflow-1 | File "/home/user/.local/lib/python3.10/site-packages/langflow/template/frontend_node/__init__.py", line 1, in <module>
langflow-1 | from langflow.template.frontend_node import (
langflow-1 | File "/home/user/.local/lib/python3.10/site-packages/langflow/template/frontend_node/memories.py", line 7, in <module>
langflow-1 | from langchain.memory.chat_message_histories.postgres import DEFAULT_CONNECTION_STRING
langflow-1 | ImportError: cannot import name 'DEFAULT_CONNECTION_STRING' from 'langchain.memory.chat_message_histories.postgres' (/home/user/.local/lib/python3.10/site-packages/langchain/memory/chat_message_histories/postgres.py)
langflow-1 exited with code 1
```
| null | null | null | {'base_commit': '7d400903644230a8842ce189ca904ea9f8048b07', 'files': [{'path': 'Version', 'Loc': {}}, {'path': 'Version', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [],
"doc": [],
"test": [],
"config": [],
"asset": [
"Version"
]
} | null |
langflow-ai | langflow | 12a46b6936e23829d9956d4d5f1fa51faff76137 | https://github.com/langflow-ai/langflow/issues/965 | stale | Method for Dynamically Manipulating Parameters of a Custom Component | ```python
class DynamicConfigCustomComponent(CustomComponent):
def build_config(self, prev_selection=None):
config = {
"param1": {"display_name": "Parameter 1"},
"param2": {
"display_name": "Parameter 2",
"options": [1, 2, 3],
"value": 1,
},
}
if prev_selection is not None:
if prev_selection == 2:
config["param3"] = {"display_name": "Parameter 3", "value": "hello"}
return config
```
When using a custom component, I want to dynamically change values depending on the type of component that is input or connected, as shown in the attached code. For example, in Langflow's prompt template, when you change the text, the keys entered into that component are dynamically displayed in the list. Is there any way to do this?
| null | null | null | {'base_commit': '12a46b6936e23829d9956d4d5f1fa51faff76137', 'files': [{'path': 'src/frontend/src/types/components/index.ts', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"src/frontend/src/types/components/index.ts"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
Significant-Gravitas | AutoGPT | ad7cefa10c0647feee85114d58559fcf83ba6743 | https://github.com/Significant-Gravitas/AutoGPT/issues/1902 | setup | Error with 'python -m autogpt' command. Please set your OpenAI API key in .env or as an environment variable. You can get your key from https://beta.openai.com/account/api-keys | ### Duplicates
- [X] I have searched the existing issues
### Steps to reproduce 🕹
I installed the 'stable' version of the program.
When I run the `python -m autogpt` command, it comes up with an error.

I have paid ChatGPT and OpenAI API accounts.
For ChatGPT I have access to GPT-4.
For the OpenAI API I do not have access to GPT-4; I am on the version before it.
### Current behavior 😯
Error message: 'Please set your OpenAI API key in .env or as an environment variable.
You can get your key from https://beta.openai.com/account/api-keys'
### Expected behavior 🤔
The program should load so that commands can be entered.
### Your prompt 📝
```yaml
python -m autogpt
```
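For reference, the fix the error message asks for is to put the key in a `.env` file (copied from Auto-GPT's `.env.template`) or export it as the `OPENAI_API_KEY` environment variable. Below is a hedged sketch of the file edit; the temp directory, template contents, and key value are all placeholders, not a real checkout:

```python
from pathlib import Path
import tempfile

# Sandbox directory so a real Auto-GPT checkout stays untouched.
workdir = Path(tempfile.mkdtemp())
template = workdir / ".env.template"  # stand-in for the repo's template file
template.write_text("OPENAI_API_KEY=your-openai-api-key\n")

# What the setup docs ask for: copy the template to .env and fill in your key.
env_file = workdir / ".env"
env_file.write_text(template.read_text().replace(
    "your-openai-api-key", "sk-REPLACE-ME"))  # placeholder, not a real key

print(env_file.read_text().strip())
```

Equivalently, `export OPENAI_API_KEY=sk-...` in the shell before launching would satisfy the same check.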
| null | null | null | {'base_commit': 'ad7cefa10c0647feee85114d58559fcf83ba6743', 'files': [{'path': 'run.sh', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "1\n0",
"info_type": "Other\n环境变量 /script shell等"
} | {
"code": [],
"doc": [],
"test": [],
"config": [],
"asset": [
"run.sh"
]
} | null |
Significant-Gravitas | AutoGPT | 90e6a55e378bc80352f01eb08122300b4d1a64ec | https://github.com/Significant-Gravitas/AutoGPT/issues/2428 | function: logging | Add logging of user input of the role and goals | ### Duplicates
- [X] I have searched the existing issues
### Summary 💡
Right now the logs reflect only GPT's responses, but I don't remember exactly what I input beforehand. Please log the user input the same way it appears in the console.
The current logging makes it a lot harder to debug.
### Examples 🌈
```
All packages are installed.
Welcome back! Would you like me to return to being sc3?
Continue with the last settings?
Name: sc3
Role: warhammer 40k writer
Goals: ['research the theme', 'do a 5000 symbols structurized explanation on wh40k lore', 'terminate']
Continue (y/n): n
Welcome to Auto-GPT! run with '--help' for more information.
Create an AI-Assistant: Enter the name of your AI and its role below. Entering nothing will load defaults.
Name your AI: For example, 'Entrepreneur-GPT'
AI Name: da23eads
da23eads here! I am at your service.
Describe your AI's role: For example, 'an AI designed to autonomously develop and run businesses with the sole goal of increasing your net worth.'
da23eads is: wh 40k writer
Enter up to 5 goals for your AI: For example: Increase net worth, Grow Twitter Account, Develop and manage multiple businesses autonomously'
Enter nothing to load defaults, enter nothing when finished.
Goal 1: research the theme
Goal 2: do a plot esplanation on warhammer 40k universe
Goal 3: terminate
Goal 4:
Using memory of type: LocalCache
Using Browser: chrome
- Thinking...
```
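A minimal sketch of the requested behavior: render the user's inputs in the same shape the console shows them, so the run log captures what was typed. The function name and field layout here are illustrative, not Auto-GPT's actual API (the real fields live in `ai_settings.yml`):

```python
def format_ai_settings(name: str, role: str, goals: list) -> str:
    """Render the user's inputs the way the console displays them,
    so the log file records what was typed before the run."""
    lines = [f"Name: {name}", f"Role: {role}"]
    lines += [f"Goal {i}: {goal}" for i, goal in enumerate(goals, start=1)]
    return "\n".join(lines)

# Example using the inputs quoted in this issue.
entry = format_ai_settings(
    "sc3",
    "warhammer 40k writer",
    ["research the theme",
     "do a 5000 symbols structurized explanation on wh40k lore",
     "terminate"],
)
print(entry)
```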
### Motivation 🔦
make the world better | null | null | null | {} | [] | [
"ai_settings.yml"
] | [] | {
"iss_type": "4",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Config"
} | {
"code": [],
"doc": [],
"test": [],
"config": [
"ai_settings.yml"
],
"asset": []
} | null |
Significant-Gravitas | AutoGPT | 16b7e7a91e7b6c73ddf3e7193cea53f1b45671fa | https://github.com/Significant-Gravitas/AutoGPT/issues/4218 | setup | AutoGPT v0.3.1 crashes immediately after task given | ### Which Operating System are you using?
Windows
### Which version of Auto-GPT are you using?
Latest Release v0.3.1
### GPT-3 or GPT-4?
GPT-3.5
### Steps to reproduce 🕹
Welcome to Auto-GPT! run with '--help' for more information.
Create an AI-Assistant: input '--manual' to enter manual mode.
Asking user via keyboard...
I want Auto-GPT to: Search Big Mac prices in EU countries
Unable to automatically generate AI Config based on user desire. Falling back to manual mode.
Create an AI-Assistant: Enter the name of your AI and its role below. Entering nothing will load defaults.
Name your AI: For example, 'Entrepreneur-GPT'
Asking user via keyboard...
AI Name: MacGPT
MacGPT here! I am at your service.
Describe your AI's role: For example, 'an AI designed to autonomously develop and run businesses with the sole goal of increasing your net worth.'
Asking user via keyboard...
MacGPT is: Search for Big Mc prices in EU countries
Enter up to 5 goals for your AI: For example: Increase net worth, Grow Twitter Account, Develop and manage multiple businesses autonomously'
Enter nothing to load defaults, enter nothing when finished.
Asking user via keyboard...
Goal 1: Conduct a thorough and accurate search of BigMc prices across EU countries
Asking user via keyboard...
Goal 2: Provide price per each EU capital
Asking user via keyboard...
Goal 3: Ensure that the information provided is up-to-date and accurate
Asking user via keyboard...
Goal 4: Continuously improve the search algorithm to increase the accuracy and efficiency of the search process.
Asking user via keyboard...
Goal 5: Do not crash ang give error - "openai.error.AuthenticationError: <empty message>"
Enter your budget for API calls: For example: $1.50
Enter nothing to let the AI run without monetary limit
Asking user via keyboard...
Budget: $1
MacGPT has been created with the following details:
Name: MacGPT
Role: Search for Big Mc prices in EU countries
Goals:
- Conduct a thorough and accurate search of BigMc prices across EU countries
- Provide price per each EU capital
- Ensure that the information provided is up-to-date and accurate
- Continuously improve the search algorithm to increase the accuracy and efficiency of the search process.
- Do not crash ang give error - "openai.error.AuthenticationError: <empty message>"
Using memory of type: LocalCache
Using Browser: chrome
Traceback (most recent call last):
File "C:\Users\makkolev\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "C:\agpt\autogpt\__main__.py", line 5, in <module>
autogpt.cli.main()
File "C:\agpt\autogpt_env\lib\site-packages\click\core.py", line 1130, in __call__
return self.main(*args, **kwargs)
File "C:\agpt\autogpt_env\lib\site-packages\click\core.py", line 1055, in main
rv = self.invoke(ctx)
File "C:\agpt\autogpt_env\lib\site-packages\click\core.py", line 1635, in invoke
rv = super().invoke(ctx)
File "C:\agpt\autogpt_env\lib\site-packages\click\core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "C:\agpt\autogpt_env\lib\site-packages\click\core.py", line 760, in invoke
return __callback(*args, **kwargs)
File "C:\agpt\autogpt_env\lib\site-packages\click\decorators.py", line 26, in new_func
return f(get_current_context(), *args, **kwargs)
File "C:\agpt\autogpt\cli.py", line 90, in main
run_auto_gpt(
File "C:\agpt\autogpt\main.py", line 186, in run_auto_gpt
agent.start_interaction_loop()
File "C:\agpt\autogpt\agent\agent.py", line 113, in start_interaction_loop
assistant_reply = chat_with_ai(
File "C:\agpt\autogpt\llm\chat.py", line 244, in chat_with_ai
assistant_reply = create_chat_completion(
File "C:\agpt\autogpt\llm\llm_utils.py", line 166, in create_chat_completion
response = api_manager.create_chat_completion(
File "C:\agpt\autogpt\llm\api_manager.py", line 55, in create_chat_completion
response = openai.ChatCompletion.create
File "C:\agpt\autogpt_env\lib\site-packages\openai\api_resources\chat_completion.py", line 25, in create
return super().create(*args, **kwargs)
File "C:\agpt\autogpt_env\lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
File "C:\agpt\autogpt_env\lib\site-packages\openai\api_requestor.py", line 226, in request
resp, got_stream = self._interpret_response(result, stream)
File "C:\agpt\autogpt_env\lib\site-packages\openai\api_requestor.py", line 619, in _interpret_response
self._interpret_response_line(
File "C:\agpt\autogpt_env\lib\site-packages\openai\api_requestor.py", line 682, in _interpret_response_line
raise self.handle_error_response(
openai.error.AuthenticationError: <empty message>
### Current behavior 😯
It crashes every time. The OpenAI API key has been provided, and I recreated the virtual environment a couple of times.
NB! I tried to start AutoGPT both the Windows Python 3.10 way and via Docker. In both cases it can't start the search and immediately receives the error below: openai.error.AuthenticationError: <empty message>
### Expected behavior 🤔
Starts correctly
### Your prompt 📝
```AI Name: MacGPT
MacGPT here! I am at your service.
Describe your AI's role: For example, 'an AI designed to autonomously develop and run businesses with the sole goal of increasing your net worth.'
Asking user via keyboard...
MacGPT is: Search for Big Mc prices in EU countries
Enter up to 5 goals for your AI: For example: Increase net worth, Grow Twitter Account, Develop and manage multiple businesses autonomously'
Enter nothing to load defaults, enter nothing when finished.
Asking user via keyboard...
Goal 1: Conduct a thorough and accurate search of BigMc prices across EU countries
Asking user via keyboard...
Goal 2: Provide price per each EU capital
Asking user via keyboard...
Goal 3: Ensure that the information provided is up-to-date and accurate
Asking user via keyboard...
Goal 4: Continuously improve the search algorithm to increase the accuracy and efficiency of the search process.
Asking user via keyboard...
Goal 5: Do not crash ang give error - "openai.error.AuthenticationError: <empty message>"
Enter your budget for API calls: For example: $1.50
Enter nothing to let the AI run without monetary limit
Asking user via keyboard...
Budget: $1
MacGPT has been created with the following details:
Name: MacGPT
Role: Search for Big Mc prices in EU countries
Goals:
- Conduct a thorough and accurate search of BigMc prices across EU countries
- Provide price per each EU capital
- Ensure that the information provided is up-to-date and accurate
- Continuously improve the search algorithm to increase the accuracy and efficiency of the search process.
- Do not crash ang give error - "openai.error.AuthenticationError: <empty message>"
Using memory of type: LocalCache
Using Browser: chrome
```
### Your Logs 📒
```log
Traceback (most recent call last):
File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\makkolev\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "C:\agpt\autogpt\__main__.py", line 5, in <module>
autogpt.cli.main()
File "C:\agpt\autogpt_env\lib\site-packages\click\core.py", line 1130, in __call__
return self.main(*args, **kwargs)
File "C:\agpt\autogpt_env\lib\site-packages\click\core.py", line 1055, in main
rv = self.invoke(ctx)
File "C:\agpt\autogpt_env\lib\site-packages\click\core.py", line 1635, in invoke
rv = super().invoke(ctx)
File "C:\agpt\autogpt_env\lib\site-packages\click\core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "C:\agpt\autogpt_env\lib\site-packages\click\core.py", line 760, in invoke
return __callback(*args, **kwargs)
File "C:\agpt\autogpt_env\lib\site-packages\click\decorators.py", line 26, in new_func
return f(get_current_context(), *args, **kwargs)
File "C:\agpt\autogpt\cli.py", line 90, in main
run_auto_gpt(
File "C:\agpt\autogpt\main.py", line 186, in run_auto_gpt
agent.start_interaction_loop()
File "C:\agpt\autogpt\agent\agent.py", line 113, in start_interaction_loop
assistant_reply = chat_with_ai(
File "C:\agpt\autogpt\llm\chat.py", line 244, in chat_with_ai
assistant_reply = create_chat_completion(
File "C:\agpt\autogpt\llm\llm_utils.py", line 166, in create_chat_completion
response = api_manager.create_chat_completion(
File "C:\agpt\autogpt\llm\api_manager.py", line 55, in create_chat_completion
response = openai.ChatCompletion.create(
File "C:\agpt\autogpt_env\lib\site-packages\openai\api_resources\chat_completion.py", line 25, in create
return super().create(*args, **kwargs)
File "C:\agpt\autogpt_env\lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
File "C:\agpt\autogpt_env\lib\site-packages\openai\api_requestor.py", line 226, in request
resp, got_stream = self._interpret_response(result, stream)
File "C:\agpt\autogpt_env\lib\site-packages\openai\api_requestor.py", line 619, in _interpret_response
self._interpret_response_line(
File "C:\agpt\autogpt_env\lib\site-packages\openai\api_requestor.py", line 682, in _interpret_response_line
raise self.handle_error_response(
openai.error.AuthenticationError: <empty message>
Press any key to continue . . .
```
| null | null | null | {} | [] | [
".env"
] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "1",
"info_type": "Code"
} | {
"code": [],
"doc": [],
"test": [],
"config": [
".env"
],
"asset": []
} | null |
fastapi | fastapi | c6aa28bea2f751a91078bd8d845133ff83f352bf | https://github.com/fastapi/fastapi/issues/5424 | question
answered
question-migrate | How to identify query params with keys only and no value | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the FastAPI documentation, with the integrated search.
- [X] I already searched in Google "How to X in FastAPI" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to FastAPI but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to FastAPI but to [Swagger UI](https://github.com/swagger-api/swagger-ui).
- [X] I already checked if it is not related to FastAPI but to [ReDoc](https://github.com/Redocly/redoc).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
@router.get("/events")
def get_alerts(request: Request):
    params = request.query_params
```
### Description
I want to handle a use case where, if a query param is passed but no value is set, I return a specific message, with different behavior when the param is not passed at all.
I tried using `request.query_params`, but it doesn't seem to pick up a bare, value-less key from the request.
Postman request looks like this:
<img width="805" alt="image" src="https://user-images.githubusercontent.com/104721284/192010955-160c2418-63f3-46ac-9f64-a416b92c03ae.png">
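For what it's worth, Starlette (which backs `request.query_params`) parses the query string with the stdlib's `keep_blank_values=True`, so a bare `?alertType` key should arrive with an empty-string value, distinguishable from an absent key. A hedged sketch of the three cases using the same stdlib parser; the `alertType` key name is only an example:

```python
from urllib.parse import parse_qsl

def classify_param(query_string: str, key: str) -> str:
    """Return 'absent', 'empty', or 'set' for one query-string key.

    keep_blank_values=True mirrors how Starlette builds QueryParams,
    so `?alertType` (key only, no value) still yields the key with ''.
    """
    params = dict(parse_qsl(query_string, keep_blank_values=True))
    if key not in params:
        return "absent"
    return "empty" if params[key] == "" else "set"

print(classify_param("alertType", "alertType"))       # key only, no value
print(classify_param("alertType=high", "alertType"))  # key with a value
print(classify_param("", "alertType"))                # key not passed at all
```

Inside an endpoint, the same check is `request.query_params.get(key)` compared against `None` (absent) versus `""` (passed with no value).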
### Operating System
macOS
### Operating System Details
_No response_
### FastAPI Version
0.70.0
### Python Version
3.9
### Additional Context
_No response_ | null | null | null | {} | [
{
"Loc": [
20
],
"path": null
}
] | [] | [] | {
"iss_type": "2",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "3",
"info_type": "Code"
} | {
"code": [
null
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
fastapi | fastapi | c6aa28bea2f751a91078bd8d845133ff83f352bf | https://github.com/fastapi/fastapi/issues/5425 | question
answered
question-migrate | Error while opening swagger docs while uploading file in APIRouter | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the FastAPI documentation, with the integrated search.
- [X] I already searched in Google "How to X in FastAPI" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to FastAPI but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to FastAPI but to [Swagger UI](https://github.com/swagger-api/swagger-ui).
- [X] I already checked if it is not related to FastAPI but to [ReDoc](https://github.com/Redocly/redoc).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
router = APIRouter(
prefix='/predict',
tags=["Prediction"],
responses={404: {"description": "Not Found"}}
)
@router.post("/")
async def predict(file: UploadFile = File(...)):
extension = file.filename.split(".")[-1] in ("jpg", "jpeg", "png")
if not extension:
raise HTTPException(status_code=400, detail="File Format Error : Uploaded file must be a JPG, JPEG or PNG file")
image = read_image_file(await file.read())
result = predict_pneumonia(image)
if result > 0.6:
return JSONResponse(content={"prediction": "pneumonia"})
return JSONResponse(content={"prediction": "no pneumonia"})
```
### Description
I am trying to create an ML prediction application with FastAPI that accepts image uploads. The Swagger docs don't load and show the error below, but the endpoint works perfectly when tried with Postman.

```
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "D:\Programming_Languages\Anaconda\envs\Medaignostic-Playground\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 404, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "D:\Programming_Languages\Anaconda\envs\Medaignostic-Playground\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 78, in __call__
return await self.app(scope, receive, send)
File "D:\Programming_Languages\Anaconda\envs\Medaignostic-Playground\lib\site-packages\fastapi\applications.py", line 270, in __call__
await super().__call__(scope, receive, send)
File "D:\Programming_Languages\Anaconda\envs\Medaignostic-Playground\lib\site-packages\starlette\applications.py", line 124, in __call__
await self.middleware_stack(scope, receive, send)
File "D:\Programming_Languages\Anaconda\envs\Medaignostic-Playground\lib\site-packages\starlette\middleware\errors.py", line 184, in __call__
raise exc
File "D:\Programming_Languages\Anaconda\envs\Medaignostic-Playground\lib\site-packages\starlette\middleware\errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "D:\Programming_Languages\Anaconda\envs\Medaignostic-Playground\lib\site-packages\starlette\middleware\exceptions.py", line 75, in __call__
raise exc
File "D:\Programming_Languages\Anaconda\envs\Medaignostic-Playground\lib\site-packages\starlette\middleware\exceptions.py", line 64, in __call__
await self.app(scope, receive, sender)
File "D:\Programming_Languages\Anaconda\envs\Medaignostic-Playground\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 21, in __call__
raise e
File "D:\Programming_Languages\Anaconda\envs\Medaignostic-Playground\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 18, in __call__
await self.app(scope, receive, send)
File "D:\Programming_Languages\Anaconda\envs\Medaignostic-Playground\lib\site-packages\starlette\routing.py", line 680, in __call__
await route.handle(scope, receive, send)
File "D:\Programming_Languages\Anaconda\envs\Medaignostic-Playground\lib\site-packages\starlette\routing.py", line 275, in handle
await self.app(scope, receive, send)
File "D:\Programming_Languages\Anaconda\envs\Medaignostic-Playground\lib\site-packages\starlette\routing.py", line 65, in app
response = await func(request)
File "D:\Programming_Languages\Anaconda\envs\Medaignostic-Playground\lib\site-packages\fastapi\applications.py", line 225, in openapi
return JSONResponse(self.openapi())
File "D:\Programming_Languages\Anaconda\envs\Medaignostic-Playground\lib\site-packages\fastapi\applications.py", line 200, in openapi
self.openapi_schema = get_openapi(
File "D:\Programming_Languages\Anaconda\envs\Medaignostic-Playground\lib\site-packages\fastapi\openapi\utils.py", line 423, in get_openapi
definitions = get_model_definitions(
File "D:\Programming_Languages\Anaconda\envs\Medaignostic-Playground\lib\site-packages\fastapi\utils.py", line 39, in get_model_definitions
model_name = model_name_map[model]
KeyError: <class 'pydantic.main.Body_predict_predict__post'>
```
### Operating System
Windows
### Operating System Details
_No response_
### FastAPI Version
0.85.0
### Python Version
3.9
### Additional Context
_No response_ | null | null | null | {'base_commit': 'c6aa28bea2f751a91078bd8d845133ff83f352bf', 'files': [{'path': 'fastapi/routing.py', 'Loc': {"('APIRouter', 'add_api_route', 513)": {'mod': [593]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"fastapi/routing.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
fastapi | fastapi | c6aa28bea2f751a91078bd8d845133ff83f352bf | https://github.com/fastapi/fastapi/issues/5422 | question
question-migrate | Unidirectional websocket connections where only the server pushes data to the clients | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the FastAPI documentation, with the integrated search.
- [X] I already searched in Google "How to X in FastAPI" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to FastAPI but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to FastAPI but to [Swagger UI](https://github.com/swagger-api/swagger-ui).
- [X] I already checked if it is not related to FastAPI but to [ReDoc](https://github.com/Redocly/redoc).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
@app.websocket("/ws")
async def websocket_endpoint(websocket: WebSocket):
await websocket.accept()
while True:
data = await websocket.receive_text()
await websocket.send_text(f"Message text was: {data}")
```
### Description
Hello,
Is there a way to send data to clients over a websocket without listening for data coming back? I'm trying to have a websocket endpoint where the server pushes data to the client unidirectionally, with no option for the client to send responses back. I couldn't find any code that supports this, since all the documentation seems to require the server to listen with `websocket.receive_text()`. Any help would be much appreciated, thanks.
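Nothing in the websocket protocol requires the `receive_text()` call: a handler can `accept()` and then only send. Below is a hedged, dependency-free asyncio sketch of the push-only loop; `FakeWebSocket` stands in for a FastAPI `WebSocket`, which exposes the same `send_text` coroutine, and a real handler should additionally catch `WebSocketDisconnect` around the send to notice when the client goes away:

```python
import asyncio

async def push_only(websocket, make_payload, interval=0.0, max_messages=None):
    """Server-push loop: never awaits receive_text(), only sends.

    In FastAPI this body would run right after `await websocket.accept()`.
    """
    sent = 0
    while max_messages is None or sent < max_messages:
        await websocket.send_text(make_payload())
        sent += 1
        await asyncio.sleep(interval)  # pace the pushes

class FakeWebSocket:
    """Stand-in for fastapi.WebSocket so the sketch runs anywhere."""
    def __init__(self):
        self.sent = []
    async def send_text(self, text):
        self.sent.append(text)

ws = FakeWebSocket()
asyncio.run(push_only(ws, lambda: "tick", max_messages=3))
print(ws.sent)  # -> ['tick', 'tick', 'tick']
```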
### Operating System
Linux
### Operating System Details
_No response_
### FastAPI Version
0.81.0
### Python Version
3.8.13
### Additional Context
_No response_ | null | null | null | {} | [
{
"Loc": [
23
],
"path": null
}
] | [] | [] | {
"iss_type": "3",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "3",
"info_type": "Code"
} | {
"code": [
null
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
fastapi | fastapi | 55afb70b3717969565499f5dcaef54b1f0acc7da | https://github.com/fastapi/fastapi/issues/891 | question
answered
question-migrate | SQL related tables and corresponding nested pydantic models in async | Really impressed with FastAPI so far... I have search docs github, tickets and googled the issue described below.
### Description
How best to work with related tables and corresponding nested pydantic models whilst persisting data in a relational database in an async application?
### Additional context
I have been attempting to extend the example in the docs
https://fastapi.tiangolo.com/advanced/async-sql-databases/
which relies on https://github.com/encode/databases
Using three test pydantic models as an example:
```
from pydantic import BaseModel, Field

class UserModel(BaseModel):
    id: int
    title: str = Field(..., min_length=2, max_length=50)
    firstname: str = Field(..., min_length=1, max_length=50)
    lastname: str = Field(..., min_length=1, max_length=50)  # the duplicated 'firstname' was presumably meant to be 'lastname'
    username: str = Field(..., min_length=3, max_length=50)
    email: str = Field(..., min_length=3, max_length=50)
    favourite_book: int = Field(...)

class FavouriteBook(BaseModel):
    id: int
    title: str = Field(...)
    author: str = Field(...)

class ExtendedUser(BaseModel):
    id: int
    title: str = Field(..., min_length=2, max_length=50)
    firstname: str = Field(..., min_length=1, max_length=50)
    lastname: str = Field(..., min_length=1, max_length=50)
    username: str = Field(..., min_length=3, max_length=50)
    email: str = Field(..., min_length=3, max_length=50)
    favourite_book: FavouriteBook
```
the route would ideally be along the lines of...
```
@router.get("/extended", response_model=List[ExtendedUser])
async def list():
    query = **sqlAlchemy/databases call that works**
    return await database.fetch_all(query=query)
```
How can a user create a route that returns the nested ExtendedUser from the database without resorting to two queries?
An SQL JOIN is the standard way to do this in a single query. However, that does not work naively with SQLAlchemy core, because the two tables both contain 'id' and 'title' columns.
It is possible to work with the SQLAlchemy ORM, but not in an async way as far as I know (async is my reason for using FastAPI). I could rename the columns to something unique, but renaming an 'id' column seems like poor database design to me.
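A single-query JOIN still works when column names collide: alias the colliding columns in the SELECT rather than renaming them in the schema (SQLAlchemy core offers the same thing via `column.label("user_id")` on the select list). A hedged stdlib `sqlite3` sketch with assumed table and column names, reshaping the flat joined row into the nested `ExtendedUser` shape:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row  # rows addressable by column alias
conn.executescript("""
    CREATE TABLE books (id INTEGER PRIMARY KEY, title TEXT, author TEXT);
    CREATE TABLE users (id INTEGER PRIMARY KEY, title TEXT, username TEXT,
                        favourite_book INTEGER REFERENCES books(id));
    INSERT INTO books VALUES (1, 'Dune', 'Frank Herbert');
    INSERT INTO users VALUES (1, 'Mr', 'paul', 1);
""")

# Alias the colliding `id` / `title` columns so both survive one JOIN.
row = conn.execute("""
    SELECT u.id AS user_id, u.title AS user_title, u.username,
           b.id AS book_id, b.title AS book_title, b.author
    FROM users u JOIN books b ON b.id = u.favourite_book
""").fetchone()

# Reshape the flat row into the nested ExtendedUser-style structure.
extended_user = {
    "id": row["user_id"],
    "title": row["user_title"],
    "username": row["username"],
    "favourite_book": {
        "id": row["book_id"],
        "title": row["book_title"],
        "author": row["author"],
    },
}
print(extended_user["favourite_book"]["title"])
```

With `encode/databases`, the same labeled select can be awaited via `database.fetch_all(...)` and the reshaping done per row before validation by the nested pydantic model.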
| null | null | null | {} | [
{
"Loc": [
31
],
"path": null
}
] | [] | [] | {
"iss_type": "3",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "3",
"info_type": "Code"
} | {
"code": [
null
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
fastapi | fastapi | 1760da0efa55585c19835d81afa8ca386036c325 | https://github.com/fastapi/fastapi/issues/3882 | question
question-migrate | Doing work after the HTTP response has been sent | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the FastAPI documentation, with the integrated search.
- [X] I already searched in Google "How to X in FastAPI" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to FastAPI but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to FastAPI but to [Swagger UI](https://github.com/swagger-api/swagger-ui).
- [X] I already checked if it is not related to FastAPI but to [ReDoc](https://github.com/Redocly/redoc).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
from fastapi import FastAPI, Request
app = FastAPI()
@app.middleware("http")
async def write_log(request: Request, call_next):
response = await call_next(request)
# write log
return response
```
### Description
I want to log data for each request, however since my application is latency sensitive, I would want to return as quickly as possible. Is there an equivalent to Symfony's "[terminate](https://symfony.com/doc/current/reference/events.html#kernel-terminate)" event (which I guess is the `request_finished` signal in Django)? The idea is to do the log writing after the HTTP response has been sent.
The above code is from the middleware documentation, but it basically means the code for writing the log will be executed before the response is sent.
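FastAPI's built-in answer to this is `BackgroundTasks`: tasks added to it run only after the response has gone out. The hedged, dependency-free sketch below shows just that ordering; the class and function names are illustrative, not FastAPI's internals:

```python
import asyncio

class TinyBackgroundTasks:
    """Collect callables during the request; run them after the response."""
    def __init__(self):
        self._tasks = []
    def add_task(self, func, *args):
        self._tasks.append((func, args))
    async def run_all(self):
        for func, args in self._tasks:
            func(*args)

log = []

async def endpoint(background: TinyBackgroundTasks):
    background.add_task(log.append, "request served")  # queued, not yet run
    return {"ok": True}

async def serve_once():
    background = TinyBackgroundTasks()
    response = await endpoint(background)
    # ...the response bytes would be written to the socket here...
    await background.run_all()  # the log write happens only after the send
    return response

resp = asyncio.run(serve_once())
print(resp, log)
```

In real FastAPI code, declare `background_tasks: BackgroundTasks` as an endpoint parameter and call `background_tasks.add_task(write_log, message)`; middleware-level equivalents exist via Starlette's `BackgroundTask` attached to the response.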
### Operating System
Linux
### Operating System Details
_No response_
### FastAPI Version
0.65.1
### Python Version
3.8.5
### Additional Context
_No response_ | null | null | null | {'base_commit': '1760da0efa55585c19835d81afa8ca386036c325', 'files': [{'path': 'fastapi/background.py', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"fastapi/background.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
fastapi | fastapi | a0e4d38bea74940de013e04a6d6f399d62f04280 | https://github.com/fastapi/fastapi/issues/1498 | question
reviewed
question-migrate | RedirectResponse from a POST request route to GET request route shows 405 Error code. | _Summary of the total issue is:_ **How to do a Post/Redirect/Get (PRG) in FastAPI?**
_This is not necessarily a bug, rather a question._
### Things i tried:
I want to redirect the response from the 2nd route to the 1st route. [Issue#199](https://github.com/tiangolo/fastapi/issues/199) explains **GET to GET** but not **POST to GET**. **N.B:** I have done this type of POST -> GET redirect in Flask, where it worked, but it doesn't work here. [Issue#863](https://github.com/tiangolo/fastapi/issues/863) has the same problem but doesn't really solve it. To reproduce the error, check the bottom.
```Python3
#1st route (GET request)
@admin_content_edit_router.get('/admin/edit_content/set_category')
async def set_category(request:Request):
return templates.TemplateResponse("admin/category_edit.html", {'request': request})
#2nd route (POST request)
@admin_content_edit_router.post('/admin/edit_content/add_category')
async def add_category(request:Request):
# here forms are getting processed
return RedirectResponse(app.url_path_for('set_category')) # from here to 1st route
```
But it shows :
```Python3
{"detail":"Method Not Allowed"}
```
Full traceback:
```Python3
INFO: 127.0.0.1:58415 - "POST /admin/edit_content/add_category HTTP/1.1" 307 Temporary Redirect
INFO: 127.0.0.1:58415 - "POST /admin/edit_content/set_category HTTP/1.1" 405 Method Not Allowed
ERROR: Exception in callback _SelectorSocketTransport._read_ready()
handle: <Handle _SelectorSocketTransport._read_ready()>
Traceback (most recent call last):
File "c:\users\aminp\appdata\local\programs\python\python36\lib\asyncio\events.py", line 145, in _run
self._callback(*self._args)
File "c:\users\aminp\appdata\local\programs\python\python36\lib\asyncio\selector_events.py", line 730, in _read_ready
self._protocol.data_received(data)
File "c:\users\aminp\appdata\local\programs\python\python36\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 162, in data_received
self.handle_events()
File "c:\users\aminp\appdata\local\programs\python\python36\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 247, in handle_events
self.transport.resume_reading()
File "c:\users\aminp\appdata\local\programs\python\python36\lib\asyncio\selector_events.py", line 711, in resume_reading
raise RuntimeError('Not paused')
RuntimeError: Not paused
```
But when I do a GET to GET redirect response it works without any issue, while a POST to GET blows things up! Am I completely missing something here? I did look up reverse route lookups in the Starlette docs, but nothing helps: [https://www.starlette.io/routing/#reverse-url-lookups](https://www.starlette.io/routing/#reverse-url-lookups)
Quick reproduction of the error:
```Python3
from fastapi import FastAPI
from starlette.responses import RedirectResponse
import os
from starlette.status import HTTP_302_FOUND, HTTP_303_SEE_OTHER

app = FastAPI()

@app.post("/")
async def login():
    # HTTP_302_FOUND, HTTP_303_SEE_OTHER : None is working :(
    return RedirectResponse(url="/ressource/1", status_code=HTTP_303_SEE_OTHER)

@app.get("/ressource/{r_id}")
async def get_ressource(r_id: str):
    return {"r_id": r_id}

if __name__ == '__main__':
    os.system("uvicorn tes:app --host 0.0.0.0 --port 80")
```
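For context on the 405: a 307 Temporary Redirect tells the client to repeat the *same* method (POST) at the new URL, which then hits a GET-only route, while a 303 See Other tells it to follow up with GET. The toy router below (plain Python, not the FastAPI API; names are illustrative) sketches exactly that difference:

```python
# Toy illustration of why a 307 redirect from a POST handler yields 405,
# while a 303 makes the client re-request with GET. All names invented.
ROUTES = {
    ("GET", "/resource"): lambda: ("200 OK", "resource page"),
    ("POST", "/login"): lambda: ("303 See Other", "/resource"),
}

def follow_redirect(method: str, status: str, location: str):
    # Per RFC 9110: 307/308 preserve the request method, 303 switches to GET.
    next_method = method if status.startswith(("307", "308")) else "GET"
    handler = ROUTES.get((next_method, location))
    return handler() if handler else ("405 Method Not Allowed", "")

status, location = ROUTES[("POST", "/login")]()
print(follow_redirect("POST", status, location))                    # 303 -> GET succeeds
print(follow_redirect("POST", "307 Temporary Redirect", location))  # 307 -> 405
```

In FastAPI terms, the usual PRG pattern is returning `RedirectResponse(url, status_code=303)` from the POST handler, as the snippet above attempts; the reporter notes it still failed on their FastAPI 0.63.0.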
| null | null | null | {'base_commit': 'a0e4d38bea74940de013e04a6d6f399d62f04280', 'files': [{'Loc': [58], 'path': None}]} | [
{
"Loc": [
58
],
"path": null
}
] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "3",
"info_type": "Code"
} | {
"code": [
null
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
fastapi | fastapi | b93f8a709ab3923d1268dbc845f41985c0302b33 | https://github.com/fastapi/fastapi/issues/4551 | question
question-migrate | Attribute not found while testing a Beanie Model inside fast api | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the FastAPI documentation, with the integrated search.
- [X] I already searched in Google "How to X in FastAPI" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to FastAPI but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [x] I already checked if it is not related to FastAPI but to [Swagger UI](https://github.com/swagger-api/swagger-ui).
- [X] I already checked if it is not related to FastAPI but to [ReDoc](https://github.com/Redocly/redoc).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
My Code:
My Route:
@router.post("/login")
async def internalLogin(request: Request,
                        email: str = Form(...),
                        password: str = Form(...)):
    try:
        res, token = await Controller.internalLogin(email=email, password=password)
        if res:
            return {"message": "Success"}
        else:
            return {"message": "Failure"}
    except DocumentNotFound as documentNotFoundException:
        return {"message": "Error"}
```
Controller:
```
@staticmethod
async def internalLogin(email: str, password: str) -> List[bool | str]:
    logger.info(message="Inside OpenApi Controller", fileName=__name__, functionName="OpenApiController")
    try:
        user = await internalUserDb(email=email)
        if user is not None and user.verifyPassword(password):
            print("Logged In")
            return [True, ""]
        else:
            print("Failed")
            return [False, ""]
    except DocumentNotFound as documentNotFound:
        raise documentNotFound
```
DB:
```
async def internalUserDb(email: str) -> InternalUserModel:
    try:
        user: InternalUserModel = await InternalUserModel.find_one(InternalUserModel.email == email)
        return user
    except DocumentNotFound as documentNotFound:
        raise documentNotFound
```
My TestCode:
```
@pytest.mark.anyio
async def testLogin():
    response = await asyncClient.post("/internalLogin",
                                      data={"email": "sample@mail.com", "password": "samplePass"})
    assert response.status_code == 303
```
My error while testing is:
```
FAILED Tests/TestLogin.py::testLogin[asyncio] - AttributeError: type object 'InternalUserModel' has no attribute 'email'
FAILED Tests/TestLogin.py::testLogin[trio] - AttributeError: type object 'InternalUserModel' has no attribute 'email'
```
### Description
Hello, I am new to FastAPI. I am trying to test the FastAPI app with pytest. Normal tests work perfectly fine, but I am using MongoDB as the backend to store my data. When I try to test a route that fetches data from the database, it shows an error like `attribute not inside the model`. I am using the Beanie ODM for MongoDB.
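A common cause of this exact AttributeError is that the test client never runs the app's startup event where `init_beanie` registers the document models; Beanie only attaches class-level query fields like `InternalUserModel.email` during that init step. A stdlib-only illustration of the failure mode (all names here are stand-ins, not Beanie internals):

```python
# ODM-style models get their class-level query fields installed by an
# explicit init step that normally runs in the app's startup event. A test
# that never triggers startup leaves the class bare.
class InternalUserModel:
    pass

def init_models():
    # Conceptually what init_beanie does for each registered document model.
    InternalUserModel.email = "query-field:email"

try:
    _ = InternalUserModel.email  # accessed before init -> AttributeError
except AttributeError as exc:
    print("before init:", exc)

init_models()
print("after init:", InternalUserModel.email)
```

In FastAPI tests this is typically addressed by making sure the app's lifespan/startup events actually run in the test (e.g. a lifespan-aware test client), so the ODM init executes before any query is built.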
### Operating System
macOS
### Operating System Details
_No response_
### FastAPI Version
0.73
### Python Version
3.10
### Additional Context
_No response_ | null | null | null | {'base_commit': 'b93f8a709ab3923d1268dbc845f41985c0302b33', 'files': [{'path': 'docs/en/docs/advanced/testing-events.md', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Doc"
} | {
"code": [],
"doc": [
"docs/en/docs/advanced/testing-events.md"
],
"test": [],
"config": [],
"asset": []
} | null |
fastapi | fastapi | 78b07cb809e97f400e196ff3d89862b9d5bd5dc2 | https://github.com/fastapi/fastapi/issues/4587 | question
question-migrate | Use the raw response in Reponse classes | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the FastAPI documentation, with the integrated search.
- [X] I already searched in Google "How to X in FastAPI" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to FastAPI but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to FastAPI but to [Swagger UI](https://github.com/swagger-api/swagger-ui).
- [X] I already checked if it is not related to FastAPI but to [ReDoc](https://github.com/Redocly/redoc).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
class CustomEncoder():
    def encode(self, dict_data):
        return dict_data


class PhotonJSONResponse(JSONResponse):
    def __init__(self, content: typing.Any = None, status_code: int = 200, headers: dict = None, media_type: str = None,
                 background: BackgroundTask = None) -> None:
        # Fetch the untouched response in the upper stacks
        current_frame = inspect.currentframe()
        self.raw_response = None
        while current_frame.f_back:
            if 'raw_response' in current_frame.f_locals:
                self.raw_response = current_frame.f_locals['raw_response']
                break
            current_frame = current_frame.f_back
        self._encoder = CustomEncoder()
        super().__init__(content, status_code, headers, media_type, background)

    def render(self, content: Any) -> bytes:
        dict_data = self._encoder.encode(self.raw_response)
        return super().render(dict_data)
```
### Description
I want to access the raw response, i.e. the one that hasn't been through the json_encoder, inside my response class, because I have custom types that are handled by a custom encoder. I have looked through the relevant FastAPI code and can't find a way to override the encoder for all requests either. As you can see in the example code, I currently use reflection to fetch `raw_response` from an upper stack frame, but this is not very reliable. I also can't seem to do this using an `APIRoute` implementation, because that would require re-implementing the route handler, which is messy; maybe it would be more relevant in there, though.
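As background for the custom-type problem: one reflection-free way to centralize encoding of custom types is the `default` hook of Python's stdlib `json` module, which a custom `render` could call on the unencoded content. A minimal sketch (stdlib only; `Photo` and `encode_custom` are invented names, not FastAPI API):

```python
import json
from dataclasses import dataclass, asdict, is_dataclass
from decimal import Decimal

def encode_custom(obj):
    # Fallback invoked by json for any value it cannot serialize natively.
    if is_dataclass(obj):
        return asdict(obj)
    if isinstance(obj, Decimal):
        return str(obj)
    raise TypeError(f"Unserializable type: {type(obj).__name__}")

@dataclass
class Photo:
    id: int
    price: Decimal

payload = {"photo": Photo(1, Decimal("9.99"))}
encoded = json.dumps(payload, default=encode_custom)
print(encoded)  # {"photo": {"id": 1, "price": "9.99"}}
```

A custom response's `render` could apply `json.dumps(content, default=...)` like this to the content it receives, instead of walking stack frames for the raw object.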
### Operating System
Windows
### Operating System Details
_No response_
### FastAPI Version
0.63.0
### Python Version
3.8.12
### Additional Context
_No response_ | null | null | null | {'base_commit': '78b07cb809e97f400e196ff3d89862b9d5bd5dc2', 'files': [{'path': 'fastapi/routing.py', 'Loc': {"('APIRoute', None, 300)": {'mod': []}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"fastapi/routing.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
oobabooga | text-generation-webui | ecd92d6a4e9a7c74d2bf436571f2c7e96beb9f92 | https://github.com/oobabooga/text-generation-webui/issues/3341 | bug | state isn't clearly understood how to incorporate for script.py | ### Describe the bug
I see that `output_modifier` and a few other functions require a `state` object, which is not defined in script.py; nor do any of the existing plugins (that I looked at) use a `state` object.
As a result, I am unable to use those functions; I get a message about needing to pass `state`.
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Reproduction
try to use this snippet
https://github.com/ChobPT/oobaboogas-webui-langchain_agent/blob/main/script.py#L185-L190
```
def input_modifier(string):
    if string[:3] == "/do Story":
        print('hi')
        string += ' Tell me a story.'
    else:
        output_modifier(string.split("###")[0].split("Human:")[0])
    return string.replace('/do ', '')
```
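For context: in the webui's extension API, the framework itself supplies `state` (the generation-parameters dict) when it calls hooks like `input_modifier`; the extension just declares the parameter and threads it through to any helper it calls, rather than referencing an undefined global. A stdlib-only sketch of that calling convention (the `state` contents and hook bodies are illustrative, not the real API surface):

```python
# Hooks that accept `state` declare it as a parameter; the caller (the
# framework in the real webui, plain calls here) supplies it. Helpers you
# invoke yourself must be handed the same `state` explicitly.
def output_modifier(string, state):
    if state.get("mode") == "chat":
        return string.strip()
    return string

def input_modifier(string, state):
    if string.startswith("/do "):
        string += " Tell me a story."
    else:
        # forward the state this hook received instead of a missing global
        string = output_modifier(string.split("###")[0], state)
    return string.replace("/do ", "")

print(input_modifier("/do Story", {"mode": "chat"}))
print(input_modifier("  hello  ###rest", {"mode": "chat"}))
```

The NameError in the logs comes from referencing `state` where it was never defined; declaring it as a hook parameter and passing it down resolves all three tracebacks above.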
### Screenshot
_No response_
### Logs
```shell
File "/home/user/oobabooga_linux/text-generation-webui/extensions/helloworld/script.py", line 144, in input_modifier
output_modifier(string.split("###")[0].split("Human:")[0],state_dict)
NameError: name 'state_dict' is not defined
```
```
File "/home/user/oobabooga_linux/text-generation-webui/extensions/helloworld/script.py", line 144, in input_modifier
output_modifier(string.split("###")[0].split("Human:")[0],state)
NameError: name 'state' is not defined
```
```
output_modifier(string.split("###")[0].split("Human:")[0])
TypeError: output_modifier() missing 1 required positional argument: 'state'
```
and if I removed state from output_modifier (as you see in my snippet above w print) I get no modified output nor print statement at console
Output generated in 1.99 seconds (9.06 tokens/s, 18 tokens, context 66, seed 123523724)
Traceback (most recent call last):
File "/home/user/oobabooga_linux/text-generation-webui/server.py", line 1181, in <module>
time.sleep(0.5)
```
### System Info
```shell
python 3.9 oracle linux 8.5
```
| null | null | null | {'base_commit': 'ecd92d6a4e9a7c74d2bf436571f2c7e96beb9f92', 'files': []} | [] | [] | [
{
"org": "ChobPT",
"pro": "oobaboogas-webui-langchain_agent",
"path": [
"script.py"
]
}
] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "3",
"info_type": "Code"
} | {
"code": [
"script.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
oobabooga | text-generation-webui | 8962bb173e9bdc36eb9cf28fe9e1952b2976e781 | https://github.com/oobabooga/text-generation-webui/issues/5337 | bug | Generation slows at max context, even when truncated | ### Describe the bug
### Issue Summary
When generating, if the context is near the maximum set via n_ctx (and the truncate value in Parameters is set to match it), generation will be quite slow. This does not occur if the context is more than approximately 300-500 below the set value. It still occurs even if the n_ctx and truncation numbers are reduced (though the slowdown becomes less severe).
### Observations
- Since speed is perfectly fine up until we near the context limit, then immediately drops, I suspect this has something to do with how the context is truncated; the actual act of truncating the input seems to cause the slowdown, despite the fact that this should be a simple operation.
- Increasing the limit back up after lowering also does not help; this makes sense, since it just pulls in as much of the conversation as will fit and hits the context limit again, requiring truncation.
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Reproduction
- Set your n_ctx to a given value. (In my case, 8192).
- Chat with the model, noting the speed. At this point, it should be fairly rapid. (In my case, 4.72 tokens/s up to context 7792).
- As soon as the context reaches approximately 7800, generation slows. (In my case, 0.87 tokens/s on the message immediately after the above, at context 7798).
- At this point, reducing n_ctx and reloading the model only partially helps. (In my case, reducing to 4092 produced 2.51 tokens/s at context 3641.)
### Screenshot
_No response_
### Logs
```shell
N/A
```
### System Info
```shell
- Model: TheBloke/Silicon-Maid-7B-GGUF, using the 5_K_M quant.
- Branch: dev
- Commit: 8962bb173e9bdc36eb9cf28fe9e1952b2976e781
- OS: Windows 11
```
| null | null | null | {'base_commit': '8962bb173e9bdc36eb9cf28fe9e1952b2976e781', 'files': [{'path': 'modules/ui_model_menu.py', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"modules/ui_model_menu.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
oobabooga | text-generation-webui | 564a8c507fffc8b25a056d8930035c63da71fc7b | https://github.com/oobabooga/text-generation-webui/issues/3042 | bug | ERROR:Task exception was never retrieved | ### Describe the bug
Right after installation I open the webui in the browser and I receive an error.
### Is there an existing issue for this?
- [x] I have searched the existing issues
### Reproduction
Right after installation I open the webui in the browser and I receive this error.
### Screenshot
_No response_
### Logs
```shell
2023-07-07 21:25:11 ERROR:Task exception was never retrieved
future: <Task finished name='3s4vbrhqz8a_103' coro=<Queue.process_events() done, defined at D:\oobabooga\oobabooga_windows\installer_files\env\lib\site-packages\gradio\queueing.py:343> exception=1 validation error for PredictBody
event_id
Field required [type=missing, input_value={'data': [], 'event_data'...on_hash': '3s4vbrhqz8a'}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.1.2/v/missing>
Traceback (most recent call last):
File "D:\oobabooga\oobabooga_windows\installer_files\env\lib\site-packages\gradio\queueing.py", line 347, in process_events
client_awake = await self.gather_event_data(event)
File "D:\oobabooga\oobabooga_windows\installer_files\env\lib\site-packages\gradio\queueing.py", line 220, in gather_event_data
data, client_awake = await self.get_message(event, timeout=receive_timeout)
File "D:\oobabooga\oobabooga_windows\installer_files\env\lib\site-packages\gradio\queueing.py", line 456, in get_message
return PredictBody(**data), True
File "D:\oobabooga\oobabooga_windows\installer_files\env\lib\site-packages\pydantic\main.py", line 150, in __init__
__pydantic_self__.__pydantic_validator__.validate_python(data, self_instance=__pydantic_self__)
pydantic_core._pydantic_core.ValidationError: 1 validation error for PredictBody
event_id
Field required [type=missing, input_value={'data': [], 'event_data'...on_hash': '3s4vbrhqz8a'}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.1.2/v/missing
```
### System Info
```shell
Windows 11
EVGA RTX3080
```
| null | null | null | {'base_commit': '564a8c507fffc8b25a056d8930035c63da71fc7b', 'files': [{'path': 'requirements.txt', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "comment",
"loc_scope": "2",
"info_type": "Code"
} | {
"code": [],
"doc": [],
"test": [],
"config": [
"requirements.txt"
],
"asset": []
} | null |
oobabooga | text-generation-webui | 07510a24149cbd6fd33df0c4a440d60b9783a18e | https://github.com/oobabooga/text-generation-webui/issues/2171 | enhancement
stale | support for fastest-inference-4bit branch of GPTQ-for-LLaMa | **Description**
There is a new branch of GPTQ-for-LLaMa, fastest-inference-4bit, that combines Triton and CUDA, and people say it's much faster. It would be nice if it were supported here. I tried to compile it myself, but it doesn't work with this webui because there is no llama_inference_offload.py in the new branch.
**Additional Context**
https://github.com/qwopqwop200/GPTQ-for-LLaMa/tree/fastest-inference-4bit
| null | null | null | {'base_commit': '07510a24149cbd6fd33df0c4a440d60b9783a18e', 'files': [{'path': 'modules/GPTQ_loader.py', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"modules/GPTQ_loader.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
oobabooga | text-generation-webui | 7ddf6147accfb5b95e7dbbd7f1822cf976054a2a | https://github.com/oobabooga/text-generation-webui/issues/446 | bug | Factual answer: ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ | ### Describe the bug
I get factual answers in ?? like this Factual answer: ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Reproduction
Common sense questions and answers
Question: Hi
Factual answer: ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇
### Screenshot
<img width="1535" alt="Screenshot 2023-03-20 at 12 43 35 AM" src="https://user-images.githubusercontent.com/25454015/226214371-e9424c75-6b81-4189-9865-70446b62235d.png">
### Logs
```shell
Loading LLaMA-7b...
Loading checkpoint shards: 100%|██████████████████████████████████████| 33/33 [00:06<00:00, 5.47it/s]
Loaded the model in 147.25 seconds.
Output generated in 12.96 seconds (4.71 tokens/s, 61 tokens)
Output generated in 13.20 seconds (0.61 tokens/s, 8 tokens)
```
### System Info
```shell
MacOS Ventura 13.2.1, Apple M1 Max
```
| null | null | null | {'base_commit': '7ddf6147accfb5b95e7dbbd7f1822cf976054a2a', 'files': [{'path': 'download-model.py', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "2\nStrange result",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"download-model.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
oobabooga | text-generation-webui | 3609ea69e4c4461a4f998bd12cc559d5a016f328 | https://github.com/oobabooga/text-generation-webui/issues/5761 | bug | api broke: AttributeError: 'NoneType' object has no attribute 'replace' | ### Describe the bug
api calls result in
AttributeError: 'NoneType' object has no attribute 'replace'
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Reproduction
install the no-avx2 requirements and llama-cpp-python from source, then try to run curl
curl http://192.168.3.17:5000/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{'messages': [{'role': 'system', 'content': 'You are a helpful assistant.'}, {'role': 'user', 'content': 'tell me a story.'}], 'max_new_tokens': 1024, 'preset': 'None', 'do_sample': False, 'temperature': 1.0, 'top_p': 0, 'typical_p': 1, 'epsilon_cutoff': 0, 'eta_cutoff': 0, 'tfs': 1, 'top_a': 0, 'repetition_penalty': 1.18, 'repetition_penalty_range': 0, 'top_k': 50, 'min_length': 0, 'no_repeat_ngram_size': 2, 'num_beams': 1, 'penalty_alpha': 0, 'length_penalty': 1, 'early_stopping': True, 'mirostat_mode': 0, 'mirostat_tau': 5, 'mirostat_eta': 0.1, 'seed': -1, 'add_bos_token': True, 'truncation_length': 1068, 'ban_eos_token': False, 'skip_special_tokens': True, 'stopping_strings': [], 'mode': 'instruct', 'instruction_template': 'Alpaca'}'
Exception in ASGI application
Traceback (most recent call last):
File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 411, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 69, in __call__
return await self.app(scope, receive, send)
File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/fastapi/applications.py", line 1054, in __call__
await super().__call__(scope, receive, send)
File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/applications.py", line 123, in __call__
await self.middleware_stack(scope, receive, send)
File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/middleware/errors.py", line 186, in __call__
raise exc
File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/middleware/errors.py", line 164, in __call__
await self.app(scope, receive, _send)
File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/middleware/cors.py", line 83, in __call__
await self.app(scope, receive, send)
File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/routing.py", line 758, in __call__
await self.middleware_stack(scope, receive, send)
File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/routing.py", line 778, in app
await route.handle(scope, receive, send)
File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/routing.py", line 299, in handle
await self.app(scope, receive, send)
File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/routing.py", line 79, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/routing.py", line 74, in app
response = await func(request)
File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/fastapi/routing.py", line 278, in app
raw_response = await run_endpoint_function(
File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/fastapi/routing.py", line 191, in run_endpoint_function
return await dependant.call(**values)
File "/data/text-generation-webui/extensions/openai/script.py", line 137, in openai_chat_completions
response = OAIcompletions.chat_completions(to_dict(request_data), is_legacy=is_legacy)
File "/data/text-generation-webui/extensions/openai/completions.py", line 536, in chat_completions
return deque(generator, maxlen=1).pop()
File "/data/text-generation-webui/extensions/openai/completions.py", line 315, in chat_completions_common
prompt = generate_chat_prompt(user_input, generate_params)
File "/data/text-generation-webui/modules/chat.py", line 97, in generate_chat_prompt
user_bio=replace_character_names(state['user_bio'], state['name1'], state['name2']),
File "/data/text-generation-webui/modules/chat.py", line 636, in replace_character_names
text = text.replace('{{user}}', name1).replace('{{char}}', name2)
AttributeError: 'NoneType' object has no attribute 'replace'
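The traceback ends in `replace_character_names` receiving `None` because the API request omits `user_bio`. A minimal sketch of that helper with a defensive guard (simplified from modules/chat.py; the guard is the assumed fix, not the repo's verbatim code):

```python
def replace_character_names(text, name1, name2):
    # Guard: API requests may omit fields like user_bio, which then arrive
    # as None instead of a string.
    if text is None:
        text = ""
    return text.replace("{{user}}", name1).replace("{{char}}", name2)

print(replace_character_names(None, "You", "Assistant"))            # ""
print(replace_character_names("Hi {{char}}!", "You", "Assistant"))  # "Hi Assistant!"
```

Equivalently, the caller could default the missing field (`state.get('user_bio') or ''`) before building the prompt.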
### Screenshot
_No response_
### Logs
```shell
install no avx2 requirements and llama-cpp-python by source then try to run curl
curl http://192.168.3.17:5000/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{'messages': [{'role': 'system', 'content': 'You are a helpful assistant.'}, {'role': 'user', 'content': 'tell me a story.'}], 'max_new_tokens': 1024, 'preset': 'None', 'do_sample': False, 'temperature': 1.0, 'top_p': 0, 'typical_p': 1, 'epsilon_cutoff': 0, 'eta_cutoff': 0, 'tfs': 1, 'top_a': 0, 'repetition_penalty': 1.18, 'repetition_penalty_range': 0, 'top_k': 50, 'min_length': 0, 'no_repeat_ngram_size': 2, 'num_beams': 1, 'penalty_alpha': 0, 'length_penalty': 1, 'early_stopping': True, 'mirostat_mode': 0, 'mirostat_tau': 5, 'mirostat_eta': 0.1, 'seed': -1, 'add_bos_token': True, 'truncation_length': 1068, 'ban_eos_token': False, 'skip_special_tokens': True, 'stopping_strings': [], 'mode': 'instruct', 'instruction_template': 'Alpaca'}'
Exception in ASGI application
Traceback (most recent call last):
File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 411, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 69, in __call__
return await self.app(scope, receive, send)
File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/fastapi/applications.py", line 1054, in __call__
await super().__call__(scope, receive, send)
File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/applications.py", line 123, in __call__
await self.middleware_stack(scope, receive, send)
File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/middleware/errors.py", line 186, in __call__
raise exc
File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/middleware/errors.py", line 164, in __call__
await self.app(scope, receive, _send)
File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/middleware/cors.py", line 83, in __call__
await self.app(scope, receive, send)
File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/routing.py", line 758, in __call__
await self.middleware_stack(scope, receive, send)
File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/routing.py", line 778, in app
await route.handle(scope, receive, send)
File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/routing.py", line 299, in handle
await self.app(scope, receive, send)
File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/routing.py", line 79, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/routing.py", line 74, in app
response = await func(request)
File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/fastapi/routing.py", line 278, in app
raw_response = await run_endpoint_function(
File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/fastapi/routing.py", line 191, in run_endpoint_function
return await dependant.call(**values)
File "/data/text-generation-webui/extensions/openai/script.py", line 137, in openai_chat_completions
response = OAIcompletions.chat_completions(to_dict(request_data), is_legacy=is_legacy)
File "/data/text-generation-webui/extensions/openai/completions.py", line 536, in chat_completions
return deque(generator, maxlen=1).pop()
File "/data/text-generation-webui/extensions/openai/completions.py", line 315, in chat_completions_common
prompt = generate_chat_prompt(user_input, generate_params)
File "/data/text-generation-webui/modules/chat.py", line 97, in generate_chat_prompt
user_bio=replace_character_names(state['user_bio'], state['name1'], state['name2']),
File "/data/text-generation-webui/modules/chat.py", line 636, in replace_character_names
text = text.replace('{{user}}', name1).replace('{{char}}', name2)
AttributeError: 'NoneType' object has no attribute 'replace'
```
### System Info
```shell
oracle linux 8, rocky linux 9
```
| null | null | null | {'base_commit': '3609ea69e4c4461a4f998bd12cc559d5a016f328', 'files': [{'path': 'modules/chat.py', 'Loc': {"(None, 'replace_character_names', 637)": {'mod': []}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"modules/chat.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
oobabooga | text-generation-webui | 1a7c027386f43b84f3ca3b0ff04ca48d861c2d7a | https://github.com/oobabooga/text-generation-webui/issues/5774 | bug | The checksum verification for miniconda_installer.exe has failed. | ### Describe the bug
The checksum verification for miniconda_installer.exe has failed.
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Reproduction
After I extracted the files, I clicked start_windows.bat.
### Screenshot
_No response_
### Logs
```shell
Downloading Miniconda from https://repo.anaconda.com/miniconda/Miniconda3-py310_23.3.1-0-Windows-x86_64.exe to D:\text-generation-webui\installer_files\miniconda_installer.exe
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 53.8M 100 53.8M 0 0 23.2M 0 0:00:02 0:00:02 --:--:-- 23.3M
find: '/i': No such file or directory
find: '/v': No such file or directory
find: ' ': No such file or directory
find: '/i': No such file or directory
find: '307194e1f12bbeb52b083634e89cc67db4f7980bd542254b43d3309eaf7cb358': No such file or directory
The checksum verification for miniconda_installer.exe has failed.
```
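The `find: '/i': No such file or directory` lines suggest the batch script's checksum pipeline invoked a Unix `find` (e.g. from Git/MSYS earlier on PATH) instead of the Windows one, so the SHA-256 comparison never actually ran. The intended check, sketched portably with Python's stdlib `hashlib` (the sample bytes and digest are illustrative of the mechanism, not the real installer's):

```python
import hashlib

def verify_sha256(file_bytes: bytes, expected_hex: str) -> bool:
    # Equivalent of the installer's checksum step: hash the downloaded
    # bytes and compare against the published digest.
    return hashlib.sha256(file_bytes).hexdigest() == expected_hex.lower()

data = b"example installer bytes"
digest = hashlib.sha256(data).hexdigest()
print(verify_sha256(data, digest))    # True
print(verify_sha256(data, "0" * 64))  # False
```

Removing the conflicting Unix tools from PATH (or running from a clean cmd.exe) typically lets the batch script's own check succeed.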
### System Info
```shell
windows11,CPU:i711800H,GPU:NVDIA RTXA2000Laptop
```
| null | null | null | {'base_commit': '1a7c027386f43b84f3ca3b0ff04ca48d861c2d7a', 'files': [{'path': 'start_windows.bat', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [],
"doc": [],
"test": [],
"config": [],
"asset": [
"start_windows.bat"
]
} | null |
oobabooga | text-generation-webui | c17624432726ab5743dfa21af807d559e4f4ff8c | https://github.com/oobabooga/text-generation-webui/issues/6209 | bug
stale | Oobabooga login not working through reverse proxy | ### Describe the bug
I have the latest text-generation-webui (just ran the update script) running on my home computer running Windows 11. I am running it on a LAN IP (192.168.1.102) and reverse-proxying it with Nginx so I can access it remotely over the Internet.
Some recent update to text-generation-webui appears to have broken the login code. When I'm logging in from the LAN, I see the normal login screen, and authentication works. When I'm logging in from the WAN, I get a bare-bones UI which refuses to accept my login creds.
I have been running this setup for months without change, so my assumption is that it's a recent change in the text-generation-webui codebase that's behind it.
My CMD_FLAGS.txt is:
--gradio-auth myusername:mypassword
--auto-devices
--listen
--listen-host 192.168.1.102
--listen-port 7860
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Reproduction
1. Start the webui on a WAN port.
2. Reverse-proxy to a publically-accessible IP.
3. Try to login.
### Screenshot

### Logs
```shell
I see repeated errors in the console: "WARNING: invalid HTTP request received", but no Python trace info.
```
### System Info
```shell
Windows 11, NVidia Founder RTX 2060 Super.
Reverse proxy is NGinx running on Debian. It uses Let's Encrypt so I can encrypt my remote connection.
```
| null | null | null | {'base_commit': 'c17624432726ab5743dfa21af807d559e4f4ff8c', 'files': [{'path': 'requirements/full/requirements.txt', 'Loc': {'(None, None, 7)': {'mod': [7]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Doc\n依赖声明"
} | {
"code": [],
"doc": [],
"test": [],
"config": [
"requirements/full/requirements.txt"
],
"asset": []
} | null |
hacksider | Deep-Live-Cam | 69d863b44ab5c7dad6eea04b7e3563f491c714a4 | https://github.com/hacksider/Deep-Live-Cam/issues/376 | Unable to select camera device through UI | It would be nice to have a way to select which camera to use. I am on Ubuntu 22.04 with a Linux laptop. Since I use an external camera and keep my laptop closed, the program is defaulting to the on-board camera.
I was unable to find a quick/easy way to change the default camera in Ubuntu, so it would be nice if the program was able to allow a selection in the UI. | null | null | null | {'base_commit': '69d863b44ab5c7dad6eea04b7e3563f491c714a4', 'files': [{'path': 'modules/ui.py', 'Loc': {"(None, 'webcam_preview', 252)": {'mod': [259]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"modules/ui.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
hacksider | Deep-Live-Cam | 080d6f5110d2e185e8ce4e10451ac96313079be2 | https://github.com/hacksider/Deep-Live-Cam/issues/315 | How to select the correct camera? | How to select the correct camera ?
Is there any method to improve the output resolution of the camera? | null | null | null | {'base_commit': '080d6f5110d2e185e8ce4e10451ac96313079be2', 'files': [{'path': 'modules/ui.py', 'Loc': {"(None, 'webcam_preview', 252)": {'mod': [259]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"modules/ui.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
hacksider | Deep-Live-Cam | 5bc3ada6324a28a8d8556da1176b546f2d2140f8 | https://github.com/hacksider/Deep-Live-Cam/issues/922 | ERROR: Cannot install -r requirements.txt (line 13), tensorflow and typing-extensions>=4.8.0 because these package versions have conflicting dependencies. | The conflict is caused by:
The user requested typing-extensions>=4.8.0
torch 2.5.1+cu121 depends on typing-extensions>=4.8.0
tensorflow-intel 2.12.1 depends on typing-extensions<4.6.0 and >=3.6.6 | null | null | null | {'base_commit': '5bc3ada6324a28a8d8556da1176b546f2d2140f8', 'files': [{'path': 'requirements.txt', 'Loc': {'(None, None, 19)': {'mod': [19]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Doc\n依赖声明"
} | {
"code": [],
"doc": [],
"test": [],
"config": [
"requirements.txt"
],
"asset": []
} | null | |
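A note on the resolver message in the record above: the conflict is that tensorflow-intel 2.12.1 pins `typing-extensions<4.6.0` while torch (and the user's own requirement) needs `typing-extensions>=4.8.0`, so no single version can satisfy both. The following stand-alone sketch reproduces the empty intersection; the candidate version list is illustrative, not pip's actual candidate set.

```python
# Why pip cannot satisfy both constraints: the two version ranges do not overlap.

def parse(v):
    """Turn a version string like '4.8.0' into a comparable tuple of ints."""
    return tuple(int(part) for part in v.split("."))

def satisfies(version, lower=None, upper=None):
    """Check lower-inclusive / upper-exclusive bounds, as in '>=X,<Y'."""
    v = parse(version)
    if lower is not None and v < parse(lower):
        return False
    if upper is not None and v >= parse(upper):
        return False
    return True

candidates = ["3.6.6", "4.5.0", "4.6.0", "4.8.0", "4.12.2"]  # illustrative only

# Versions acceptable to tensorflow-intel 2.12.1: >=3.6.6,<4.6.0
tf_ok = [v for v in candidates if satisfies(v, lower="3.6.6", upper="4.6.0")]
# Versions acceptable to the user / torch 2.5.1: >=4.8.0
torch_ok = [v for v in candidates if satisfies(v, lower="4.8.0")]

# The intersection is empty -- exactly the "conflicting dependencies" error.
both = set(tf_ok) & set(torch_ok)
```

Running this shows the two acceptable sets are disjoint, which is what forces pip to abort.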
hacksider | Deep-Live-Cam | 6b0cc749574d7307b2f7deedfa2a0dbb363329da | https://github.com/hacksider/Deep-Live-Cam/issues/243 | [experimental] doesn't show the camera I want.. | I'm using the `experimental` branch so I could choose the camera I wanted (OBS Virtual Camera) which is (2) but it only shows "Camera 0", so I made a test script and I was able to pull my OBS Virtual Camera using 'matplotlib',
```
(venv) (base) PS E:\deep-live-cam> python list.py
[ WARN:0@10.769] global cap_msmf.cpp:1769 CvCapture_MSMF::grabFrame videoio(MSMF): can't grab frame. Error: -2147483638
[ WARN:0@10.839] global cap.cpp:304 cv::VideoCapture::open VIDEOIO(DSHOW): raised OpenCV exception:
OpenCV(4.10.0) D:\a\opencv-python\opencv-python\opencv\modules\videoio\src\cap_dshow.cpp:2763: error: (-215:Assertion failed) pVih in function 'videoInput::start'
[ERROR:0@10.846] global obsensor_uvc_stream_channel.cpp:158 cv::obsensor::getStreamChannelGroup Camera index out of range
[ERROR:0@16.478] global obsensor_uvc_stream_channel.cpp:158 cv::obsensor::getStreamChannelGroup Camera index out of range
[ERROR:0@16.563] global obsensor_uvc_stream_channel.cpp:158 cv::obsensor::getStreamChannelGroup Camera index out of range
[ERROR:0@16.635] global obsensor_uvc_stream_channel.cpp:158 cv::obsensor::getStreamChannelGroup Camera index out of range
[ERROR:0@16.711] global obsensor_uvc_stream_channel.cpp:158 cv::obsensor::getStreamChannelGroup Camera index out of range
[ERROR:0@16.787] global obsensor_uvc_stream_channel.cpp:158 cv::obsensor::getStreamChannelGroup Camera index out of range
[ERROR:0@16.862] global obsensor_uvc_stream_channel.cpp:158 cv::obsensor::getStreamChannelGroup Camera index out of range
Available camera indices: [2]
Enter the camera index you want to use: 2
Camera 2 opened successfully. Press 'q' to quit.
Press 'q' and Enter to quit, or just Enter to continue: q
(venv) (base) PS E:\deep-live-cam>
```
It shows up like this:
<img width="419" alt="Screen Shot 2024-08-12 at 8 31 51 PM" src="https://github.com/user-attachments/assets/3f16b4f6-6ac7-492f-88a5-6abdc58e29b0">
So I know it's possible, is there a way to force 'deep-live-cam' to use "Camera (2)" ?
| null | null | null | {'base_commit': '6b0cc749574d7307b2f7deedfa2a0dbb363329da', 'files': [{'path': 'modules/ui.py', 'Loc': {"(None, 'webcam_preview', 307)": {'mod': [322]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"modules/ui.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
hacksider | Deep-Live-Cam | 513e41395687921d589fc10bbaf2f72ed579c84a | https://github.com/hacksider/Deep-Live-Cam/issues/915 | Subject: Missing ui.py file in modules directory - preventing project execution | Hi,
I'm trying to run the Deep-Live-Cam project, but I'm encountering a problem. The ui.py file is missing from the modules directory. I've tried the following:
* Cloning the repository using git clone: `git clone https://github.com/hacksider/Deep-Live-Cam.git`
* Cloning the repository using GitHub Desktop.
* Downloading the repository as a ZIP file.
In all cases, the ui.py file is not present. I've also checked the repository on GitHub.com directly in my browser, and the file is missing there as well.
The modules directory contains the following files: [List the files you see].
Could you please let me know how to obtain the ui.py file? Is it intentionally missing, or is there a separate download/generation step required?
Thanks for your help! | null | null | null | {'base_commit': '513e41395687921d589fc10bbaf2f72ed579c84a', 'files': [{'path': 'modules/ui.py', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "4",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"modules/ui.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
hacksider | Deep-Live-Cam | a49d3fc6e5a228a6ac92e25831c507996fdc0042 | https://github.com/hacksider/Deep-Live-Cam/issues/697 | [Solved] inswapper_128_fp16.onnx failed:Protobuf parsing failed | I have this error on macOS Apple Silicon.
`Exception in Tkinter callback
Traceback (most recent call last):
File "/opt/homebrew/Cellar/python@3.10/3.10.15/Frameworks/Python.framework/Versions/3.10/lib/python3.10/tkinter/__init__.py", line 1921, in __call__
return self.func(*args)
File "/Users//PycharmProjects/Deep-Live-Cam/.venv/lib/python3.10/site-packages/customtkinter/windows/widgets/ctk_button.py", line 554, in _clicked
self._command()
File "/Users/PycharmProjects/Deep-Live-Cam/modules/ui.py", line 242, in <lambda>
command=lambda: webcam_preview(
File "/Users/PycharmProjects/Deep-Live-Cam/modules/ui.py", line 649, in webcam_preview
create_webcam_preview(
File "/Users/PycharmProjects/Deep-Live-Cam/modules/ui.py", line 707, in create_webcam_preview
temp_frame = frame_processor.process_frame(source_image, temp_frame)
File "/Users/PycharmProjects/Deep-Live-Cam/modules/processors/frame/face_swapper.py", line 65, in process_frame
temp_frame = swap_face(source_face, target_face, temp_frame)
File "/Users/PycharmProjects/Deep-Live-Cam/modules/processors/frame/face_swapper.py", line 49, in swap_face
return get_face_swapper().get(temp_frame, target_face, source_face, paste_back=True)
File "/Users/PycharmProjects/Deep-Live-Cam/modules/processors/frame/face_swapper.py", line 44, in get_face_swapper
FACE_SWAPPER = insightface.model_zoo.get_model(model_path, providers=modules.globals.execution_providers)
File "/Users/PycharmProjects/Deep-Live-Cam/.venv/lib/python3.10/site-packages/insightface/model_zoo/model_zoo.py", line 96, in get_model
model = router.get_model(providers=providers, provider_options=provider_options)
File "/Users/PycharmProjects/Deep-Live-Cam/.venv/lib/python3.10/site-packages/insightface/model_zoo/model_zoo.py", line 40, in get_model
session = PickableInferenceSession(self.onnx_file, **kwargs)
File "/Users/PycharmProjects/Deep-Live-Cam/.venv/lib/python3.10/site-packages/insightface/model_zoo/model_zoo.py", line 25, in __init__
super().__init__(model_path, **kwargs)
File "/Users/PycharmProjects/Deep-Live-Cam/.venv/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 347, in __init__
self._create_inference_session(providers, provider_options, disabled_optimizers)
File "/Users/PycharmProjects/Deep-Live-Cam/.venv/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 384, in _create_inference_session
sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.InvalidProtobuf: [ONNXRuntimeError] : 7 : INVALID_PROTOBUF : Load model from /Users/PycharmProjects/Deep-Live-Cam/models/inswapper_128_fp16.onnx failed:Protobuf parsing failed.`
This https://github.com/hacksider/Deep-Live-Cam/issues/613 didn't help.
| null | null | null | {} | [] | [] | [
{
"org": "hacksider",
"pro": "deep-live-cam",
"path": [
"inswapper_128_fp16.onnx"
]
}
] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "2\n+\n0",
"info_type": "Code"
} | {
"code": [],
"doc": [],
"test": [],
"config": [],
"asset": [
"inswapper_128_fp16.onnx"
]
} | null | |
hacksider | Deep-Live-Cam | d4c8adc5d3b0ef5cb13492d3fac83bb4c6835d33 | https://github.com/hacksider/Deep-Live-Cam/issues/94 | Can't find onnxruntime-silicon==1.13.1 | Hi,
Currently on MacOS (Silicon, M2 Max), it seems not possible to download (with pip at least) the 1.13.1 version of onnxruntime.
`ERROR: Could not find a version that satisfies the requirement onnxruntime-silicon==1.13.1 (from versions: 1.14.1, 1.15.0, 1.16.0, 1.16.3)
ERROR: No matching distribution found for onnxruntime-silicon==1.13.1`
And, if I'm right, Deep-Live-Cam doesn't support more recent versions of onnxruntime, right ? So if that's the case, what could be a workaround ?
Thanks ! | null | null | null | {'base_commit': 'd4c8adc5d3b0ef5cb13492d3fac83bb4c6835d33', 'files': [{'path': 'requirements.txt', 'Loc': {'(None, None, 16)': {'mod': [16]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "install require"
} | {
"code": [],
"doc": [],
"test": [],
"config": [
"requirements.txt"
],
"asset": []
} | null | |
hacksider | Deep-Live-Cam | eab5ba7027db1a4d0ec97883aa7a61b55fb81dfa | https://github.com/hacksider/Deep-Live-Cam/issues/345 | Program crashes when processing with DirectML | I am using an AMD RX 6600 XT GPU with the latest drivers and attempting to run the program with DirectML. The program's UI turns white and then crashes. It works fine with CPU execution but fails with DirectML.
I already tried to reinstall onnxruntime-directml with no effect. Terminal:
(myenv) E:\Edesktop\deep-live\Deep-Live-Cam>python run.py --execution-provider dml
Applied providers: ['DmlExecutionProvider', 'CPUExecutionProvider'], with options: {'DmlExecutionProvider': {}, 'CPUExecutionProvider': {}}
find model: C:\Users\USER/.insightface\models\buffalo_l\1k3d68.onnx landmark_3d_68 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['DmlExecutionProvider', 'CPUExecutionProvider'], with options: {'DmlExecutionProvider': {}, 'CPUExecutionProvider': {}}
find model: C:\Users\USER/.insightface\models\buffalo_l\2d106det.onnx landmark_2d_106 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['DmlExecutionProvider', 'CPUExecutionProvider'], with options: {'DmlExecutionProvider': {}, 'CPUExecutionProvider': {}}
find model: C:\Users\USER/.insightface\models\buffalo_l\det_10g.onnx detection [1, 3, '?', '?'] 127.5 128.0
Applied providers: ['DmlExecutionProvider', 'CPUExecutionProvider'], with options: {'DmlExecutionProvider': {}, 'CPUExecutionProvider': {}}
find model: C:\Users\USER/.insightface\models\buffalo_l\genderage.onnx genderage ['None', 3, 96, 96] 0.0 1.0
Applied providers: ['DmlExecutionProvider', 'CPUExecutionProvider'], with options: {'DmlExecutionProvider': {}, 'CPUExecutionProvider': {}}
find model: C:\Users\USER/.insightface\models\buffalo_l\w600k_r50.onnx recognition ['None', 3, 112, 112] 127.5 127.5
set det-size: (640, 640)
100%|███████████████████████████████████████████████████████████████████████████████████████| 100/100 [00:01<00:00, 50.67it/s]
[DLC.CORE] Creating temp resources...
[DLC.CORE] Extracting frames...
[DLC.FACE-SWAPPER] Progressing...
Processing: 0%| | 0/125 [00:00<?, ?frame/s, execution_providers=['DmlExecutionProvider'], execution_threads=8, max_memory=16
(myenv) E:\Edesktop\deep-live\Deep-Live-Cam>
| null | null | null | {'base_commit': 'eab5ba7027db1a4d0ec97883aa7a61b55fb81dfa', 'files': [{'path': 'modules/ui.py', 'Loc': {"(None, 'create_root', 93)": {'mod': [139, 140, 141]}}, 'status': 'modified'}, {'path': 'modules/core.py', 'Loc': {"(None, 'parse_args', 47)": {'mod': [67, 71]}, '(None, None, None)': {'mod': [11]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"modules/ui.py",
"modules/core.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
Textualize | rich | 7e1928efee53da1ac7d156912df04aef83eefea5 | https://github.com/Textualize/rich/issues/1247 | Needs triage | [REQUEST] Extra caching for `get_character_cell_size` | **How would you improve Rich?**
Add a small `lru_cache` to https://github.com/willmcgugan/rich/blob/master/rich/cells.py#L28 , similar to the cache one layer down at https://github.com/willmcgugan/rich/blob/master/rich/cells.py#L46
Size `4096` was plenty for what I describe below.
**What problem does it solved for you?**
I'm working on some optimizations for a TUI application here https://github.com/JoshKarpel/spiel/pull/37
This was my first idea on how to improve rendering time, based on https://github.com/benfred/py-spy telling me that a lot of time was being spent in `get_character_cell_size`, and this was my first thought for a solution.
Adding the cache described above gives a ~30% speedup on the benchmarks I was using to work on that PR. In that application I'm repeatedly re-rendering the same content (in a `Live`), so adding a small cache to `get_character_cell_size` represents a significant speedup since the set of characters is usually the same from frame to frame. The benchmark is mostly printing colorized ASCII, with some unicode also drawn from a small set (box-drawing characters, block shapes, etc.).
I guess that since there's lots of `Layout` and `Padding` going on, the most common character is probably space... perhaps the ASCII set that there's already a shortcut for could just be pre-computed and stored in a set? There's probably a lot of good ways to approach this :) | null | null | null | {'base_commit': '7e1928efee53da1ac7d156912df04aef83eefea5', 'files': [{'path': 'rich/cells.py', 'Loc': {"(None, 'get_character_cell_size', 28)": {'mod': []}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"rich/cells.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
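The caching proposal in the record above can be sketched in a few lines. This is a hedged illustration of the pattern, not rich's actual implementation: the real `get_character_cell_size` consults a table of Unicode ranges, and the simplified width function below (based on `unicodedata.east_asian_width`) merely stands in for it to show where the `lru_cache` goes.

```python
from functools import lru_cache
import unicodedata

@lru_cache(maxsize=4096)  # 4096 was "plenty" per the issue author's benchmarks
def get_character_cell_size(character: str) -> int:
    """Return the number of terminal cells a single character occupies."""
    if character.isascii():          # fast path, mirroring rich's ASCII shortcut
        return 1 if character.isprintable() else 0
    # East-Asian Fullwidth/Wide characters occupy two cells; everything else one.
    return 2 if unicodedata.east_asian_width(character) in ("F", "W") else 1
```

Because a `Live` display re-renders largely the same characters frame after frame, repeat lookups become dictionary hits, which is where the reported ~30% speedup comes from.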
Textualize | rich | 5c9161d0c48254fb579827249a9ee7d88f4589b7 | https://github.com/Textualize/rich/issues/1489 | Needs triage | [REQUEST] current item of a progress | when creating progress bars for logical items (that are then supported with additional progress pars,
i would consider it helpful if it was possible to add a name/render able for the current item, and to push those in updates
i`m not yet sure how this is best expressed/implemented | null | null | null | {'base_commit': '5c9161d0c48254fb579827249a9ee7d88f4589b7', 'files': [{'path': 'rich/progress.py', 'Loc': {"('Progress', 'update', 739)": {'mod': []}}, 'status': 'modified'}, {'path': 'rich/progress.py', 'Loc': {"('Task', None, 437)": {'mod': [466]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"rich/progress.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
Textualize | rich | 0aa85606ad9a7ca6b28a5ae376e433b8e59f6e80 | https://github.com/Textualize/rich/issues/2457 | bug | [BUG] Console(no_color=True) does not work on Windows 10 | You may find a solution to your problem in the [docs](https://rich.readthedocs.io/en/latest/introduction.html) or [issues](https://github.com/willmcgugan/rich/issues).
**Describe the bug**
The "no_color=True" Console parameter does not seem to do anything on Windows 10. I tested on both Cmder and native cmd.exe terminals and got the same results. See screenshots below.
Cmder:

cmd.exe

for reference, this is what it looks like from my Ubuntu laptop:

Also happy to help fix this if you can point me in the right direction. Thank you!
**Platform**
<details>
<summary>Click to expand</summary>
OS: Windows 10
**Cmder:**
┌───────────────────────── <class 'rich.console.Console'> ─────────────────────────┐
│ A high level console interface. │
│ │
│ ┌──────────────────────────────────────────────────────────────────────────────┐ │
│ │ <console width=155 ColorSystem.WINDOWS> │ │
│ └──────────────────────────────────────────────────────────────────────────────┘ │
│ │
│ color_system = 'windows' │
│ encoding = 'utf-8' │
│ file = <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'> │
│ height = 83 │
│ is_alt_screen = False │
│ is_dumb_terminal = False │
│ is_interactive = True │
│ is_jupyter = False │
│ is_terminal = True │
│ legacy_windows = True │
│ no_color = False │
│ options = ConsoleOptions( │
│ size=ConsoleDimensions(width=155, height=83), │
│ legacy_windows=True, │
│ min_width=1, │
│ max_width=155, │
│ is_terminal=True, │
│ encoding='utf-8', │
│ max_height=83, │
│ justify=None, │
│ overflow=None, │
│ no_wrap=False, │
│ highlight=None, │
│ markup=None, │
│ height=None │
│ ) │
│ quiet = False │
│ record = False │
│ safe_box = True │
│ size = ConsoleDimensions(width=155, height=83) │
│ soft_wrap = False │
│ stderr = False │
│ style = None │
│ tab_size = 8 │
│ width = 155 │
└──────────────────────────────────────────────────────────────────────────────────┘
┌─── <class 'rich._windows.WindowsConsoleFeatures'> ────┐
│ Windows features available. │
│ │
│ ┌───────────────────────────────────────────────────┐ │
│ │ WindowsConsoleFeatures(vt=False, truecolor=False) │ │
│ └───────────────────────────────────────────────────┘ │
│ │
│ truecolor = False │
│ vt = False │
└───────────────────────────────────────────────────────┘
┌────── Environment Variables ───────┐
│ { │
│ 'TERM': 'cygwin', │
│ 'COLORTERM': None, │
│ 'CLICOLOR': None, │
│ 'NO_COLOR': None, │
│ 'TERM_PROGRAM': None, │
│ 'COLUMNS': '157', │
│ 'LINES': '83', │
│ 'JUPYTER_COLUMNS': None, │
│ 'JUPYTER_LINES': None, │
│ 'JPY_PARENT_PID': None, │
│ 'VSCODE_VERBOSE_LOGGING': None │
│ } │
└────────────────────────────────────┘
platform="Windows"
**cmd.exe**
┌───────────────────────── <class 'rich.console.Console'> ─────────────────────────┐ │ A high level console interface. │ │ │ │ ┌──────────────────────────────────────────────────────────────────────────────┐ │ │ │ <console width=119 ColorSystem.WINDOWS> │ │ │ └──────────────────────────────────────────────────────────────────────────────┘ │ │ │ │ color_system = 'windows' │ │ encoding = 'utf-8' │ │ file = <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'> │ │ height = 30 │ │ is_alt_screen = False │ │ is_dumb_terminal = False │ │ is_interactive = True │ │ is_jupyter = False │ │ is_terminal = True │ │ legacy_windows = True │ │ no_color = False │ │ options = ConsoleOptions( │ │ size=ConsoleDimensions(width=119, height=30), │ │ legacy_windows=True, │ │ min_width=1, │ │ max_width=119, │ │ is_terminal=True, │ │ encoding='utf-8', │ │ max_height=30, │ │ justify=None, │ │ overflow=None, │ │ no_wrap=False, │ │ highlight=None, │ │ markup=None, │ │ height=None │ │ ) │ │ quiet = False │ │ record = False │ │ safe_box = True │ │ size = ConsoleDimensions(width=119, height=30) │ │ soft_wrap = False │ │ stderr = False │ │ style = None │ │ tab_size = 8 │ │ width = 119 │ └──────────────────────────────────────────────────────────────────────────────────┘ ┌─── <class 'rich._windows.WindowsConsoleFeatures'> ────┐ │ Windows features available. 
│ │ │ │ ┌───────────────────────────────────────────────────┐ │ │ │ WindowsConsoleFeatures(vt=False, truecolor=False) │ │ │ └───────────────────────────────────────────────────┘ │ │ │ │ truecolor = False │ │ vt = False │ └───────────────────────────────────────────────────────┘ ┌────── Environment Variables ───────┐ │ { │ │ 'TERM': None, │ │ 'COLORTERM': None, │ │ 'CLICOLOR': None, │ │ 'NO_COLOR': None, │ │ 'TERM_PROGRAM': None, │ │ 'COLUMNS': None, │ │ 'LINES': None, │ │ 'JUPYTER_COLUMNS': None, │ │ 'JUPYTER_LINES': None, │ │ 'JPY_PARENT_PID': None, │ │ 'VSCODE_VERBOSE_LOGGING': None │ │ } │ └────────────────────────────────────┘ platform="Windows"
rich==12.5.1
</details>
| null | null | null | {'base_commit': '0aa85606ad9a7ca6b28a5ae376e433b8e59f6e80', 'files': [{'path': 'rich/console.py', 'Loc': {"('Console', None, 583)": {'mod': [612]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"rich/console.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
ytdl-org | youtube-dl | 427cc215310804127b55744fcc3664ede38a4a0d | https://github.com/ytdl-org/youtube-dl/issues/21363 | question | How does youtube-dl detect advertisements? | <!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- Look through the README (http://yt-dl.org/readme) and FAQ (http://yt-dl.org/faq) for similar questions
- Search the bugtracker for similar questions: http://yt-dl.org/search-issues
- Finally, put x into all relevant boxes (like this [x])
-->
- [x] I'm asking a question
- [x] I've looked through the README and FAQ for similar questions
- [x] I've searched the bugtracker for similar questions including closed ones
## Question
<!--
Ask your question in an arbitrary form. Please make sure it's worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient.
-->
Fox Sports Go recently changed their streaming service. Previously, I used to be able to record live streams and download event replays by passing headers into streamlink. However, recording live with streamlink "works" just fine, but because commercials have some kind of different codec than the actual content, I can't do anything with the resulting .ts file.
However, I can download replays from FOX.com just fine, using a youtube-dl command like this: `youtube-dl --hls-prefer-native -f 3750 https://content-auso1.uplynk.com/preplay2/6f324d0648b34576b36ce49160add428/391dec8c1a9a07b70d3062e4bf1a6e3c/4sQNPrWNbJWMzPMP2RXiNy2SFAhlIDUYbUwS2TJwN94h.m3u8?pbs=38dc148aad7c4a7f981a8dd57493a625`
The big problems with this are that a) I have to wait until a replay is posted; and b) FOX is very inconsistent as to which events get replays posted and which do not, meaning I'm SOL if I'm trying to save an event that just doesn't have a replay for some reason. If I could record live, this wouldn't be an issue, but again, the commercials are throwing things off.
One of the output lines from youtube-dl is `[hlsnative] Total fragments: 1815 (not including 504 ad)`.
So my question is: how does youtube-dl detect which segments are ads in the .m3u8 file? If I can figure that out, perhaps I can rig streamlink to ignore those segments when recording, saving me a lot of trouble.
Thanks!
| null | null | null | {'base_commit': '427cc215310804127b55744fcc3664ede38a4a0d', 'files': [{'path': 'youtube_dl/downloader/hls.py', 'Loc': {"('HlsFD', 'is_ad_fragment_start', 78)": {'mod': []}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "5",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"youtube_dl/downloader/hls.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
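To address the question in the record above: youtube-dl's native HLS downloader classifies fragments by scanning m3u8 comment lines for provider-specific markers (Uplynk and Anvato tags) that bracket ad breaks. The predicates below are an illustrative sketch of that logic; check `youtube_dl/downloader/hls.py` in your installed version for the exact strings. The sample playlist is fabricated to match what an Uplynk stream like the one in the issue would contain.

```python
# Sketch: skip ad fragments in an HLS playlist using Uplynk/Anvato markers.

def is_ad_fragment_start(line: str) -> bool:
    return ((line.startswith("#ANVATO-SEGMENT-INFO") and "type=ad" in line)
            or (line.startswith("#UPLYNK-SEGMENT") and line.endswith(",ad")))

def is_ad_fragment_end(line: str) -> bool:
    return ((line.startswith("#ANVATO-SEGMENT-INFO") and "type=master" in line)
            or (line.startswith("#UPLYNK-SEGMENT") and line.endswith(",segment")))

def content_segment_urls(m3u8_lines):
    """Yield only non-ad segment URIs, toggling state on the markers above."""
    in_ad = False
    for line in m3u8_lines:
        line = line.strip()
        if is_ad_fragment_start(line):
            in_ad = True
        elif is_ad_fragment_end(line):
            in_ad = False
        elif line and not line.startswith("#") and not in_ad:
            yield line

# Fabricated playlist excerpt, for demonstration only.
playlist = [
    "#EXTM3U",
    "#UPLYNK-SEGMENT:abc,segment",
    "content_0001.ts",
    "#UPLYNK-SEGMENT:abc,ad",
    "ad_0001.ts",
    "#UPLYNK-SEGMENT:abc,segment",
    "content_0002.ts",
]
kept = list(content_segment_urls(playlist))
```

The same state machine is what streamlink would need to replicate to drop ad segments while recording live.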
ytdl-org | youtube-dl | 8b7340a45eb0e3aeaa996896ff8690b6c3a32af6 | https://github.com/ytdl-org/youtube-dl/issues/15955 | use youtube-dl with cookies file in code not from command line | ## Please follow the guide below
- You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly
- Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`)
- Use the *Preview* tab to see what your issue will actually look like
---
### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.03.20*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [ ] I've **verified** and **I assure** that I'm running youtube-dl **2018.03.20**
### Before submitting an *issue* make sure you have:
- [ ] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [ ] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
- [ ] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser
### What is the purpose of your *issue*?
- [ ] Bug report (encountered problems with youtube-dl)
- [ ] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [X ] Question
- [ ] Other
---
### The following sections concretize particular purposed issues, you can erase any section (the contents between triple ---) not applicable to your *issue*
---
### If the purpose of this *issue* is a *bug report*, *site support request* or you are not completely sure provide the full verbose output as follows:
Add the `-v` flag to **your command line** you run youtube-dl with (`youtube-dl -v <your command line>`), copy the **whole** output and insert it here. It should look similar to one below (replace it with **your** log inserted between triple ```):
```
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKcj']
[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
[debug] youtube-dl version 2018.03.20
[debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2
[debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
[debug] Proxy map: {}
...
<end of log>
```
---
### If the purpose of this *issue* is a *site support request* please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**):
- Single video: https://www.youtube.com/watch?v=BaW_jenozKc
- Single video: https://youtu.be/BaW_jenozKc
- Playlist: https://www.youtube.com/playlist?list=PL4lCao7KL_QFVb7Iudeipvc2BCavECqzc
Note that **youtube-dl does not support sites dedicated to [copyright infringement](https://github.com/rg3/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free)**. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
---
### Description of your *issue*, suggested solution and other information
Explanation of your *issue* in arbitrary form goes here. Please make sure the [description is worded well enough to be understood](https://github.com/rg3/youtube-dl#is-the-description-of-the-issue-itself-sufficient). Provide as much context and examples as possible.
If work on your *issue* requires account credentials please provide them or explain how one can obtain them.
```
from __future__ import unicode_literals
import youtube_dl
ydl_opts = {}
with youtube_dl.YoutubeDL(ydl_opts) as ydl:
ydl.download(['https://www.youtube.com/watch?v=BaW_jenozKc'])
```
This is for downloading a simple YouTube video. I need to know how to add the cookies file so that I can download from my account on Lynda. I'm trying to create a small downloader to speed up the process. Any idea how to add a cookies file? | null | null | null | {'base_commit': '8b7340a45eb0e3aeaa996896ff8690b6c3a32af6', 'files': [{'path': 'youtube_dl/YoutubeDL.py', 'Loc': {"('YoutubeDL', None, 113)": {'mod': [208]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"youtube_dl/YoutubeDL.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
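For the question in the record above: when embedding youtube-dl, cookies are supplied through the options dict via the `cookiefile` key, which expects a path to a Netscape/Mozilla-format `cookies.txt` exported from a logged-in browser session. The path and URL below are placeholders; the import is deferred so the snippet's options can be inspected without youtube-dl installed.

```python
# Embedding sketch: pass a cookies file to YoutubeDL via the options dict.
ydl_opts = {
    "cookiefile": "cookies.txt",  # assumed path to a browser-exported cookies.txt
}

def download(urls, opts=ydl_opts):
    """Download the given URLs with the cookie-carrying options applied."""
    import youtube_dl  # imported lazily; requires youtube-dl to be installed
    with youtube_dl.YoutubeDL(opts) as ydl:
        ydl.download(urls)
```

Usage would then mirror the snippet in the issue, e.g. `download(['https://www.youtube.com/watch?v=BaW_jenozKc'])`.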
ytdl-org | youtube-dl | 267d81962a0709f15f82f96b7aadbb5473a06992 | https://github.com/ytdl-org/youtube-dl/issues/16870 | [bilibili]how can i download video on page2? | ### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.06.25*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2018.06.25**
### What is the purpose of your *issue*?
- [ ] Bug report (encountered problems with youtube-dl)
- [ ] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [x] Question
- [ ] Other
I tried to use youtube-dl to download a video on bilibili, like https://www.bilibili.com/video/av18178195
The video has 2 pages, but when I type **youtube-dl -f 1 https://www.bilibili.com/video/av18178195**
I just get the video on page 1. How can I get the video on page 2?
I have seen this page: https://github.com/rg3/youtube-dl/pull/16354
but when I use
**youtube-dl -f 1 https://www.bilibili.com/video/av18178195/index_2.html** or
**youtube-dl -f 1 https://www.bilibili.com/video/av18178195/?p=2**
it still gets the same video as page 1.
How can I solve this problem? Thank you.
Is this problem fixed? I use the standalone exe version. | null | null | null | {'base_commit': '267d81962a0709f15f82f96b7aadbb5473a06992', 'files': [{'path': 'youtube_dl/extractor/bilibili.py', 'Loc': {"('BiliBiliIE', None, 25)": {'mod': []}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"youtube_dl/extractor/bilibili.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
ytdl-org | youtube-dl | eca1f0d115e6a2712ff0d5f6b25e3ded5e52db71 | https://github.com/ytdl-org/youtube-dl/issues/16883 | [Feature request] Network retry, with configurability | I just ran some large youtube-dl scripts, and noticed at the end that a few videos were missing.
This was probably due to intermittent network downtimes, and apparently youtube-dl doesn't do any network retry at all (I may be wrong).
Thus, I suggest adding an option named for example `--network-retry`, related to `--socket-timeout`. The default would be 0 to keep the current youtube-dl behavior, and I could configure it to something like 5. | null | null | null | {'base_commit': 'eca1f0d115e6a2712ff0d5f6b25e3ded5e52db71', 'files': [{'path': 'youtube_dl/options.py', 'Loc': {"(None, 'parseOpts', 41)": {'mod': [458, 462]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"youtube_dl/options.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
ytdl-org | youtube-dl | 5014bd67c22b421207b2650d4dc874b95b36dda1 | https://github.com/ytdl-org/youtube-dl/issues/30539 | question | Limited download speed | <!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- Look through the README (http://yt-dl.org/readme) and FAQ (http://yt-dl.org/faq) for similar questions
- Search the bugtracker for similar questions: http://yt-dl.org/search-issues
- Finally, put x into all relevant boxes (like this [x])
-->
- [x] I'm asking a question
- [x] I've looked through the README and FAQ for similar questions
- [x] I've searched the bugtracker for similar questions including closed ones
## Question
<!--
Ask your question in an arbitrary form. Please make sure it's worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient.
-->
Hello. For the past few days I have been experiencing a drop in download speed from the YouTube site when using youtube-dl. Can you fix it? I tried downloading videos from other websites and they download at full speed; it only happens to me with the YouTube site. I think they made some change to their platform. | null | null | null | {'base_commit': '5014bd67c22b421207b2650d4dc874b95b36dda1', 'files': [{'path': 'youtube_dl/extractor/youtube.py', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"youtube_dl/extractor/youtube.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
ytdl-org | youtube-dl | e90d175436e61e207e0b0cae7f699494dcf15922 | https://github.com/ytdl-org/youtube-dl/issues/9104 | Chinese title was missing ! | ```
root@kangland:/var/www/ydy# youtube-dl -v w0dMz8RBG7g
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'-v', u'w0dMz8RBG7g']
[debug] Encodings: locale ANSI_X3.4-1968, fs ANSI_X3.4-1968, out ANSI_X3.4-1968, pref ANSI_X3.4-1968
[debug] youtube-dl version 2016.04.01
[debug] Python version 2.7.6 - Linux-2.6.32-042stab113.11-i686-with-Ubuntu-14.04-trusty
[debug] exe versions: none
[debug] Proxy map: {}
[youtube] w0dMz8RBG7g: Downloading webpage
[youtube] w0dMz8RBG7g: Downloading video info webpage
[youtube] w0dMz8RBG7g: Extracting video information
[youtube] {22} signature length 41.43, html5 player en_US-vfli5QvRo
[youtube] w0dMz8RBG7g: Downloading player https://s.ytimg.com/yts/jsbin/player-en_US-vfli5QvRo/base.js
[youtube] {43} signature length 41.43, html5 player en_US-vfli5QvRo
[youtube] {18} signature length 41.43, html5 player en_US-vfli5QvRo
[youtube] {5} signature length 41.43, html5 player en_US-vfli5QvRo
[youtube] {36} signature length 41.43, html5 player en_US-vfli5QvRo
[youtube] {17} signature length 41.43, html5 player en_US-vfli5QvRo
[youtube] {136} signature length 41.43, html5 player en_US-vfli5QvRo
[youtube] {247} signature length 41.43, html5 player en_US-vfli5QvRo
[youtube] {135} signature length 41.43, html5 player en_US-vfli5QvRo
[youtube] {244} signature length 41.43, html5 player en_US-vfli5QvRo
[youtube] {134} signature length 41.43, html5 player en_US-vfli5QvRo
[youtube] {243} signature length 41.43, html5 player en_US-vfli5QvRo
[youtube] {133} signature length 41.43, html5 player en_US-vfli5QvRo
[youtube] {242} signature length 41.43, html5 player en_US-vfli5QvRo
[youtube] {160} signature length 41.43, html5 player en_US-vfli5QvRo
[youtube] {278} signature length 41.43, html5 player en_US-vfli5QvRo
[youtube] {140} signature length 41.43, html5 player en_US-vfli5QvRo
[youtube] {171} signature length 41.43, html5 player en_US-vfli5QvRo
[youtube] {249} signature length 41.43, html5 player en_US-vfli5QvRo
[youtube] {250} signature length 41.43, html5 player en_US-vfli5QvRo
[youtube] {251} signature length 41.43, html5 player en_US-vfli5QvRo
[debug] Invoking downloader on u'https://r2---sn-a8au-vgqe.googlevideo.com/videoplayback?ms=au&mt=1460039622&pl=40&mv=m&key=yt6&pte=yes&mm=31&mn=sn-a8au-vgqe&sver=3&fexp=9407059%2C9416126%2C9416891%2C9420452%2C9422596%2C9423662%2C9426926%2C9427902%2C9428398%2C9432364&ratebypass=yes&ipbits=0&initcwndbps=26957500&expire=1460061513&upn=NhCteH8M5OA&mime=video%2Fmp4&axtags=tx%3D9417362&id=o-AEE-ylzEiNeRWF2HIs5_rsDGUftXqgxkV7V0eUSq7oZ4&dur=214.111&source=youtube&ip=2602%3Aff62%3A104%3Ae6%3A%3A&sparams=axtags%2Cdur%2Cid%2Cinitcwndbps%2Cip%2Cipbits%2Citag%2Clmt%2Cmime%2Cmm%2Cmn%2Cms%2Cmv%2Cpl%2Cpte%2Cratebypass%2Crequiressl%2Csource%2Cupn%2Cexpire&requiressl=yes&lmt=1458219184364643&itag=22&signature=B1E1AF27412C916392FF49F1D60F0771145BE274.DA5587721204D947940DB57A584188E732C36433'
[download] Destination: Wanting - (You Exist In My Song) [Trad. Chinese] [Official Music Video]-w0dMz8RBG7g.mp4
[download] 100% of 32.20MiB in 00:00
```
```
root@kangland:/var/www/ydy# locale
LANG=
LANGUAGE=
LC_CTYPE="POSIX"
LC_NUMERIC="POSIX"
LC_TIME="POSIX"
LC_COLLATE="POSIX"
LC_MONETARY="POSIX"
LC_MESSAGES="POSIX"
LC_PAPER="POSIX"
LC_NAME="POSIX"
LC_ADDRESS="POSIX"
LC_TELEPHONE="POSIX"
LC_MEASUREMENT="POSIX"
LC_IDENTIFICATION="POSIX"
LC_ALL=
```
```
root@kangland:/var/www/ydy# locale -a
C
C.UTF-8
POSIX
zh_CN.utf8
zh_HK.utf8
zh_TW.utf8
```
**Run:** `youtube-dl -f 'best[height=360]' --restrict-filenames -i -o '%(playlist)s/%(playlist_index)s - %(title)s.%(ext)s' PL1OKxDwI_y_AO1Lb-zO57wYdpWqhk7MUs`
**Result:** [download] _/01 - _.mp4
How can I fix the Chinese title?
Thank you so much!
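The `_` placeholders in the output filename are consistent with `--restrict-filenames`, which limits filenames to ASCII, and the POSIX locale shown above compounds this by making Python treat the filesystem encoding as ASCII. A rough approximation of that restricted sanitization (not youtube-dl's actual `sanitize_filename` implementation):

```python
def restrict_to_ascii(title, placeholder="_"):
    """Replace non-ASCII characters the way a restricted filename mode might."""
    return "".join(ch if ord(ch) < 128 else placeholder for ch in title)
```

Dropping `--restrict-filenames` and exporting a UTF-8 locale (e.g. `LC_ALL=zh_CN.utf8`, which `locale -a` shows is available here) is the usual way to keep CJK titles.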
| null | null | null | {'base_commit': 'e90d175436e61e207e0b0cae7f699494dcf15922', 'files': [{'path': 'youtube_dl/options.py', 'Loc': {"(None, 'parseOpts', 22)": {'mod': [447]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"youtube_dl/options.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
localstack | localstack | 3794f1e20a56f3b7bcd23f82a006e266f2a57a05 | https://github.com/localstack/localstack/issues/2511 | type: usage | Cannot connect to DynamoDB from lambda | <!-- Love localstack? Please consider supporting our collective:
👉 https://opencollective.com/localstack/donate -->
# Type of request: This is a ...
- [x] bug report
- [ ] feature request
# Detailed description
I'm using localstack for local development. I have a DynamoDB table named `readings` and I'd like
to insert items from a lambda function.
I have a simple function in python runtime:
```python
import os
import boto3
def lambda_handler(events, context):
    DYNAMODB_ENDPOINT_URL = os.environ.get("DYNAMODB_ENDPOINT_URL")
    # the table name comes from the environment as well; the table is named "readings"
    DYNAMODB_READINGS_TABLE_NAME = os.environ.get("DYNAMODB_READINGS_TABLE_NAME", "readings")
    dynamodb = boto3.resource("dynamodb", endpoint_url=DYNAMODB_ENDPOINT_URL)
    readings_table = dynamodb.Table(DYNAMODB_READINGS_TABLE_NAME)
    readings_table.put_item(Item={"reading_id": "10", "other": "test"})
```
I cannot figure out the proper endpoint URL for my local DynamoDB.
I have tried different combinations of `localhost`, `localstack`, and ports `4566`, `4569`; each time I get an `EndpointConnectionError`.
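From inside the Lambda container, `localhost` points at the Lambda container itself rather than at LocalStack, which is why those combinations fail. A minimal sketch, assuming LocalStack injects `LOCALSTACK_HOSTNAME` into the function's environment (it does in many versions, but verify for yours):

```python
import os


def localstack_endpoint(default_host="localhost", port=4566):
    """Build the LocalStack edge endpoint URL from the injected hostname."""
    host = os.environ.get("LOCALSTACK_HOSTNAME", default_host)
    return "http://%s:%d" % (host, port)
```

`boto3.resource("dynamodb", endpoint_url=localstack_endpoint())` would then resolve to the LocalStack container from inside the Lambda container, while still working from the host machine.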
## Expected behavior
Items are inserted in the table.
## Actual behavior
Lambda cannot connect to dynamodb and error `[ERROR] EndpointConnectionError: Could not connect to the endpoint URL: "http://localstack:4569/"` is raised.
# Steps to reproduce
Run localstack image with docker-compose, set `LOCALSTACK_HOSTNAME=localstack` and try to access dynamodb resource from lambda.
## Command used to start LocalStack
docker-compose service I'm using:
```yml
localstack:
  image: localstack/localstack:0.11.2
  ports:
    - 4566:4566
    - 8080:8080
  environment:
    SERVICES: "dynamodb,sqs,lambda,iam"
    DATA_DIR: "/tmp/localstack/data"
    PORT_WEB_UI: "8080"
    LOCALSTACK_HOSTNAME: localstack
    LAMBDA_EXECUTOR: docker
    AWS_ACCESS_KEY_ID: "test"
    AWS_SECRET_ACCESS_KEY: "test"
    AWS_DEFAULT_REGION: "us-east-1"
  volumes:
    - localstack_volume:/tmp/localstack/data
    - /var/run/docker.sock:/var/run/docker.sock
    # When a container is started for the first time, it will execute files with extensions .sh that are found in /docker-entrypoint-initaws.d.
    # Files will be executed in alphabetical order. You can easily create aws resources on localstack using `awslocal` (or `aws`) cli tool in the initialization scripts.
    # Here I run creating dynamodb tables, roles, etc.
    - ./localstack-startup-scripts/:/docker-entrypoint-initaws.d/
```
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
...
| null | null | null | {} | [
{
"Loc": [
19
],
"path": null
}
] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "3",
"info_type": "Code"
} | {
"code": [
null
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
localstack | localstack | 1c5f2e9650155a839cc842a9cd07faf3e76ed5d2 | https://github.com/localstack/localstack/issues/1078 | Connect to localhost:4568 [localhost/127.0.0.1] failed: Connection refused (Connection refused) | Hi there, I am having trouble connecting to Kinesis on localstack. Everything runs fine when I run it locally; the error happens inside our Jenkins pipeline.
Here is the Dockerfile I am using:
```
FROM hseeberger/scala-sbt:8u181_2.12.7_1.2.6
USER root
RUN apt-get update
RUN apt-get -y install curl
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash -
RUN apt-get -y install nodejs
RUN apt-get install npm
RUN curl -L "https://github.com/docker/compose/releases/download/1.23.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
RUN chmod +x /usr/local/bin/docker-compose
```
And here is my docker-compose.yml:
```
version: '3.6'
services:
  # AWS services in docker env
  localstack:
    image: localstack/localstack:latest
    environment:
      - SERVICES=kinesis,dynamodb,s3,cloudwatch
      - HOSTNAME_EXTERNAL=localstack
      - DATA_DIR=/tmp/localstack/data
    volumes:
      - "/tmp:/tmp"
    ports:
      - "4568:4568"
      - "4569:4569"
      - "4572:4572"
      - "4582:4582"
  postgres:
    image: "postgres:9.6"
    restart: always
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: dev
      POSTGRES_PASSWORD: *******
      POSTGRES_DB: table
      PGPASSWORD: *******
    volumes:
      - ./docker/postgres-init:/docker-entrypoint-initdb.d
  mocks:
    image: "jordimartin/mmock"
    volumes:
      - "./docker/mocks:/config"
    ports:
      - "8082:8082"
      - "8083:8083"
      - "8084:8084"
  aws-create-stream:
    image: "ivonet/aws-cli:1.0.0"
    links:
      - localstack
    volumes:
      - ${HOME}/.aws:/root/.aws:ro
    command: --endpoint-url=http://localstack:4568 kinesis create-stream --stream-name RawScanPipe --shard-count 1
    environment:
      - AWS_DEFAULT_REGION=us-east-1
  # PGAdmin gives a nice gui on the PostgreSQL DB
  pgadmin:
    image: dpage/pgadmin4
    links:
      - postgres
    depends_on:
      - postgres
    ports:
      - "8888:80"
    volumes:
      - ./docker/pgadmin:/var/lib/pgadmin
    environment:
      PGADMIN_DEFAULT_EMAIL: *********
      PGADMIN_DEFAULT_PASSWORD: *********
```
In case it matters, here is the segment in our Jenkins file where this gets called:
```
def sbtInside() {
    return "-u root -v /usr/bin/docker:/usr/bin/docker " +
        "-v /usr/local/bin/aws:/usr/local/bin/aws " +
        "-v /var/run/docker.sock:/var/run/docker.sock " +
        "-v /usr/lib/x86_64-linux-gnu/libltdl.so.7:/usr/lib/libltdl.so.7 " +
        "-v $HOME/.ivy2:/root/.ivy2 " +
        "-v $HOME/.sbt:/root/.sbt"
}

stage("Unit/Functional Tests & Create Dockerfile") {
    app.inside(sbtInside()) {
        try {
            echo "Starting unit tests..."
            sh "TARGET=LOCAL sbt clean test"
            echo "Starting up test stack..."
            sh "docker-compose -f docker-compose.yml up -d"
            echo "Starting functional tests..."
            sh "TARGET=LOCAL " +
                "PRODUCT_ENABLED=true " +
                "sbt clean functional/test"
        } finally {
            echo "Tests complete!"
            sh "docker-compose -f docker-compose.yml down -v"
            sh "sbt docker"
        }
    }
}
```
I am sure I am missing something simple, I just can't figure out what it is! | null | null | null | {'base_commit': '1c5f2e9650155a839cc842a9cd07faf3e76ed5d2', 'files': [{'path': 'docker-compose.yml', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Config\nCode"
} | {
"code": [],
"doc": [
"docker-compose.yml"
],
"test": [],
"config": [],
"asset": []
} | null | |
localstack | localstack | 1c5f2e9650155a839cc842a9cd07faf3e76ed5d2 | https://github.com/localstack/localstack/issues/1095 | Healthcheck when running in docker | I'm running localstack with docker-compose as a dependency for a service that I'm developing. The problem is that my service calls localstack before it's fully initialized. The only solution I could find so far is a hard `sleep <seconds>` at start-up, but that only works on my specific system and produces unexpected results for other developers. Can localstack expose a healthcheck, so I can have docker-compose start my service after localstack is "healthy"?
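As a stopgap until such a healthcheck exists, a small readiness poller can gate the dependent service's startup instead of a hard `sleep`. The endpoint path and payload shape below are assumptions — both have varied across LocalStack versions, so check yours:

```python
import json
import time
import urllib.request


def services_ready(payload, wanted):
    """Return True when every wanted service reports an available status."""
    statuses = payload.get("services", {})
    return all(statuses.get(name) in ("available", "running") for name in wanted)


def wait_for_localstack(wanted, url="http://localhost:4566/_localstack/health", timeout=60):
    """Poll the health endpoint until the wanted services are up or the timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if services_ready(json.load(resp), wanted):
                    return True
        except OSError:
            pass
        time.sleep(2)
    return False
```

The same polling loop works as a shell one-liner in an entrypoint script, which would make the start order deterministic across developer machines.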
A trimmed down version of my docker-compose.yml looks something like this:
```yaml
myservice:
  command: "sh -c 'sleep 10 && npm run start'" # grrrrr
  depends_on:
    - aws
    # aws:
    #   condition: service_healthy
aws:
  image: localstack/localstack
  environment:
    SERVICES: s3:81,sqs:82,ses:83
    HOSTNAME_EXTERNAL: aws
``` | null | null | null | {'base_commit': '1c5f2e9650155a839cc842a9cd07faf3e76ed5d2', 'files': [{'path': 'docker-compose.yml', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Config"
} | {
"code": [],
"doc": [
"docker-compose.yml"
],
"test": [],
"config": [],
"asset": []
} | null | |
localstack | localstack | 5d11af78ae1d19560f696a9e1abb707bd115c390 | https://github.com/localstack/localstack/issues/4970 | type: bug
status: triage needed
area: configuration
aws:cloudformation
area: networking | Lambda invocation exception | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
Creating and/or updating Lambda functions in docker does not work after updating LocalStack image to the latest version with the following error in LocalStack logs:
```
2021-11-20T03:33:32.357:DEBUG:localstack.services.awslambda.lambda_executors: Lambda arn:aws:lambda:us-east-2:000000000000:function:lambda-socket-local-custom-resource-apigw-cw-role result / log output:
> standard_init_linux.go:228: exec user process caused: exec format error
2021-11-20T03:33:32.814:INFO:localstack.services.awslambda.lambda_api: Error executing Lambda function arn:aws:lambda:us-east-2:000000000000:function:lambda-socket-local-custom-resource-apigw-cw-role: Lambda process returned with error. Result: . Output:
standard_init_linux.go:228: exec user process caused: exec format error Traceback (most recent call last):
File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 608, in run_lambda_executor
result, log_output = self.execute_in_container(
File "/opt/code/localstack/.venv/lib/python3.8/site-packages/localstack_ext/services/awslambda/lambda_launcher.py.enc", line 272, in docker_separate_execute_in_container
File "/opt/code/localstack/localstack/utils/docker_utils.py", line 1335, in start_container
raise ContainerException(
localstack.utils.docker_utils.ContainerException: Docker container returned with exit code 1
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/code/localstack/localstack/services/awslambda/lambda_api.py", line 809, in run_lambda
result = LAMBDA_EXECUTOR.execute(
File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 441, in execute
return do_execute()
File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 431, in do_execute
return _run(func_arn=func_arn)
File "/opt/code/localstack/localstack/utils/cloudwatch/cloudwatch_util.py", line 158, in wrapped
raise e
File "/opt/code/localstack/localstack/utils/cloudwatch/cloudwatch_util.py", line 154, in wrapped
result = func(*args, **kwargs)
File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 418, in _run
raise e
File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 414, in _run
result = self._execute(lambda_function, inv_context)
File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 726, in _execute
result = self.run_lambda_executor(lambda_function=lambda_function, inv_context=inv_context)
File "/opt/code/localstack/.venv/lib/python3.8/site-packages/localstack_ext/services/awslambda/lambda_extended.py.enc", line 548, in run_lambda_executor
File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 649, in run_lambda_executor
raise InvocationException(
localstack.services.awslambda.lambda_executors.InvocationException: Lambda process returned with error. Result: . Output:
standard_init_linux.go:228: exec user process caused: exec format error
2021-11-20T03:33:55.187:INFO:localstack_ext.services.cloudformation.service_models: Unable to fetch CF custom resource result from s3://localstack-cf-custom-resources-results/62c433d4 . Existing keys: []
2021-11-20T03:33:55.189:DEBUG:localstack.utils.cloudformation.template_deployer: Error applying changes for CloudFormation stack "lambda-socket-local": An error occurred (NoSuchKey) when calling the GetObject operation: The specified key does not exist. Traceback (most recent call last):
File "/opt/code/localstack/localstack/utils/cloudformation/template_deployer.py", line 1482, in _run
self.do_apply_changes_in_loop(changes, stack, stack_name)
File "/opt/code/localstack/localstack/utils/cloudformation/template_deployer.py", line 1554, in do_apply_changes_in_loop
self.apply_change(change, stack, new_resources, stack_name=stack_name)
File "/opt/code/localstack/localstack/utils/cloudformation/template_deployer.py", line 1619, in apply_change
result = deploy_resource(resource_id, new_resources, stack_name)
File "/opt/code/localstack/localstack/utils/cloudformation/template_deployer.py", line 778, in deploy_resource
result = execute_resource_action(resource_id, resources, stack_name, ACTION_CREATE)
File "/opt/code/localstack/localstack/utils/cloudformation/template_deployer.py", line 829, in execute_resource_action
result = func["function"](resource_id, resources, resource_type, func, stack_name)
File "/opt/code/localstack/.venv/lib/python3.8/site-packages/localstack_ext/services/cloudformation/models/custom.py", line 61, in create_custom_resource
result=retry(fetch_result,retries=KIGak(CUSTOM_RESOURCES_RESULT_POLL_TIMEOUT/2),sleep=2)
File "/opt/code/localstack/localstack/utils/common.py", line 812, in retry
raise raise_error
File "/opt/code/localstack/localstack/utils/common.py", line 808, in retry
return function(**kwargs)
File "/opt/code/localstack/.venv/lib/python3.8/site-packages/localstack_ext/services/cloudformation/models/custom.py", line 58, in fetch_result
return aws_utils.download_s3_object(CUSTOM_RESOURCES_RESULTS_BUCKET,result_key)
File "/opt/code/localstack/.venv/lib/python3.8/site-packages/localstack_ext/utils/aws/aws_utils.py.enc", line 31, in download_s3_object
File "/opt/code/localstack/.venv/lib/python3.8/site-packages/botocore/client.py", line 391, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/opt/code/localstack/.venv/lib/python3.8/site-packages/botocore/client.py", line 719, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.errorfactory.NoSuchKey: An error occurred (NoSuchKey) when calling the GetObject operation: The specified key does not exist.
```
### Expected Behavior
Lambda create and/or update operations should pass successfully all the way to the end without any errors.
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
```yml
services:
  localstack:
    container_name: localstack
    image: localstack/localstack
    ports:
      - 443:443
      - 4510-4530:4510-4530
      - 4566:4566
      - 4571:4571
    environment:
      - LOCALSTACK_API_KEY=${LOCALSTACK_LICENSE}
      - USE_LIGHT_IMAGE=1
      - IMAGE_NAME=localstack/localstack
      - MAIN_CONTAINER_NAME=localstack
      - SERVICES=cloudformation,cloudfront,apigateway,apigatewayv2,iam,secretsmanager,lambda,s3,sqs,sts,ec2,kafka,elb,elbv2
      - DEFAULT_REGION=us-east-1
      - AWS_ACCESS_KEY_ID=test
      - AWS_SECRET_ACCESS_KEY=test
      - EAGER_SERVICE_LOADING=1
      - S3_SKIP_SIGNATURE_VALIDATION=1
      - DEBUG=1
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    network_mode: bridge
```
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
A test case available at [GitHub](https://github.com/abbaseya/localstack-msk-lambda-test) - test command `./socket.sh`
### Environment
```markdown
- OS: macOS 12.0.1
- LocalStack: latest
- AWS CLI: 2.2.35
```
### Anything else?
#4932 | null | null | null | {} | [
{
"Loc": [
96
],
"path": null
}
] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "3",
"info_type": "Code"
} | {
"code": [
null
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
localstack | localstack | c07094dbf52c947e77d952825eb4daabf409655d | https://github.com/localstack/localstack/issues/5516 | type: bug
status: triage needed
status: response required
aws:cognito | bug: JWT ID Token issued by cognito-idp can not be verified in v0.14.0 but can in 0.11.5 | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
JWT tokens issued by cognito cannot be verified.
### Expected Behavior
JWT tokens issued by cognito should be verifiable.
### How are you starting LocalStack?
With the `localstack` script
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
`LOCALSTACK_API_KEY={MY_KEY} SERVICES=cognito-idp,iam,lambda,cloudformation,s3,s3api,sts DISABLE_CORS_CHECKS=1 localstack start`
`LocalStack CLI 0.14.0.1`
`LocalStack version: 0.14.0`
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
Create the following files in some directory:
`package.json` file:
```json
{
  "name": "localstack-jwt",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "dependencies": {
    "jsonwebtoken": "^8.5.1",
    "jwk-to-pem": "^2.0.5",
    "node-fetch": "^2.6.7"
  }
}
```
`create-user-pool.json` file:
```json
{
  "PoolName": "test",
  "Policies": {
    "PasswordPolicy": {
      "MinimumLength": 6,
      "RequireUppercase": false,
      "RequireLowercase": false,
      "RequireNumbers": false,
      "RequireSymbols": false,
      "TemporaryPasswordValidityDays": 5
    }
  },
  "AdminCreateUserConfig": {
    "AllowAdminCreateUserOnly": false,
    "UnusedAccountValidityDays": 5
  }
}
```
`localstack.js` file:
```javascript
const jwkToPem = require('jwk-to-pem');
const jwt = require('jsonwebtoken');
const ps = require('process');
const fetch = require('node-fetch');

(async () => {
  const token = ps.argv[2];
  console.log('<== TOKEN:', token);
  console.log('==> http://localhost:4566/userpool/.well-known/jwks.json');
  const jwksResponse = await fetch('http://localhost:4566/userpool/.well-known/jwks.json');
  const jwks = await jwksResponse.json();
  console.log('<==', jwks);
  let decodedToken = jwt.decode(token, { complete: true });
  console.log('DECODED TOKEN:', decodedToken);
  const publicKey = jwkToPem(jwks.keys[0]);
  console.log('PUBLIC KEY:', publicKey);
  try {
    const decoded = jwt.verify(token, publicKey);
    console.log('!!! JWT is valid');
  } catch (err) {
    console.error('!!! ERROR:', err.message);
  }
})();
```
`setup.sh` file:
```bash
#!/bin/bash

echo "Creating User Pool"
USERNAME=user1
PASSWORD=password1
USER_POOL_ID=$( aws --endpoint-url=http://localhost:4566 cognito-idp create-user-pool \
    --pool-name test \
    --cli-input-json file://create-user-pool.json | jq -r '.UserPool.Id' )
echo "User Pool Created: ${USER_POOL_ID}"

echo "Creating User Pool Client"
CLIENT_ID=$( aws --endpoint-url=http://localhost:4566 cognito-idp create-user-pool-client \
    --user-pool-id "$USER_POOL_ID" \
    --client-name client \
    --explicit-auth-flows ALLOW_USER_PASSWORD_AUTH | jq -r '.UserPoolClient.ClientId')
echo "User Pool Client Created: ${CLIENT_ID}"

echo "Sign Up User: user1/password1"
aws --endpoint-url=http://localhost:4566 cognito-idp sign-up \
    --client-id "$CLIENT_ID" \
    --username "$USERNAME" \
    --password "$PASSWORD" && echo "Sign Up Success" || echo "Failed to Sign Up"

echo "Please enter confirmation code printed in terminal with 'localstack start' and hit Enter:"
read CONFIRMATION_CODE
aws --endpoint-url=http://localhost:4566 cognito-idp confirm-sign-up \
    --client-id "$CLIENT_ID" \
    --username "$USERNAME" \
    --confirmation-code "$CONFIRMATION_CODE" && echo "User Confirmed" || echo "Unable to confirm"

echo "Authenticating User"
ID_TOKEN=$( aws --endpoint-url=http://localhost:4566 cognito-idp initiate-auth \
    --auth-flow USER_PASSWORD_AUTH \
    --client-id "$CLIENT_ID" \
    --auth-parameters USERNAME="$USERNAME",PASSWORD="$PASSWORD" | jq -r '.AuthenticationResult.IdToken' )

echo "Validating ID TOKEN"
node localstack.js "$ID_TOKEN"
```
## Run
* `npm install`
* start localstack `LOCALSTACK_API_KEY={MY_KEY} SERVICES=cognito-idp,iam,lambda,cloudformation,s3,s3api,sts DISABLE_CORS_CHECKS=1 localstack start`
* run `./setup.sh`
* script will ask for confirmation code printed in localstack console
* finally script will output `!!! ERROR: invalid signature`
## Try the same with 0.11.5
* `./setup.sh` will print `!!! JWT is valid`
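One thing worth checking while reproducing: `localstack.js` always verifies against `jwks.keys[0]`, while the standard JWKS approach is to select the entry whose `kid` matches the token header's `kid`. A minimal key-selection sketch (illustrative only, not a confirmed fix for this issue):

```python
def select_jwk(jwks, token_kid):
    """Pick the JWKS entry whose kid matches the token header's kid, or None."""
    for key in jwks.get("keys", []):
        if key.get("kid") == token_kid:
            return key
    return None
```

If the 0.14.0 user pool publishes more than one key, verifying blindly against the first entry would produce exactly the `invalid signature` error shown above.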
### Environment
```markdown
- OS: MacOS Monterey 12.2.1
- LocalStack: 0.14.0
```
### Anything else?
Repository with scripts you can use to reproduce issue: https://github.com/poul-kg/localstack-jwt | null | null | null | {} | [
{
"Loc": [
82
],
"path": null
}
] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "3",
"info_type": "Code"
} | {
"code": [
null
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
OpenInterpreter | open-interpreter | dee41b6932a0d9b5569b1abf9144b7ffd8c3c7ad | https://github.com/OpenInterpreter/open-interpreter/issues/499 | Bug | raise Exception("`interpreter.chat()` requires a display. Set `display=True` or pass a message into `interpreter.chat(message)`.") | ### Describe the bug
Fresh install on Ubuntu 22.
I'm using interpreter in the terminal.
After sending a prompt, at some point during the answer the program crashes.
```
Traceback (most recent call last):
  File "/home/fauxprophet/Documents/Ops/openai/bin/interpreter", line 8, in <module>
    sys.exit(cli())
  File "/home/fauxprophet/Documents/Ops/openai/lib/python3.10/site-packages/interpreter/core/core.py", line 21, in cli
    cli(self)
  File "/home/fauxprophet/Documents/Ops/openai/lib/python3.10/site-packages/interpreter/cli/cli.py", line 145, in cli
    interpreter.chat()
  File "/home/fauxprophet/Documents/Ops/openai/lib/python3.10/site-packages/interpreter/core/core.py", line 65, in chat
    for _ in self._streaming_chat(message=message, display=display):
  File "/home/fauxprophet/Documents/Ops/openai/lib/python3.10/site-packages/interpreter/core/core.py", line 86, in _streaming_chat
    yield from terminal_interface(self, message)
  File "/home/fauxprophet/Documents/Ops/openai/lib/python3.10/site-packages/interpreter/terminal_interface/terminal_interface.py", line 50, in terminal_interface
    for chunk in interpreter.chat(message, display=False, stream=True):
  File "/home/fauxprophet/Documents/Ops/openai/lib/python3.10/site-packages/interpreter/core/core.py", line 106, in _streaming_chat
    raise Exception("`interpreter.chat()` requires a display. Set `display=True` or pass a message into `interpreter.chat(message)`.")
Exception: `interpreter.chat()` requires a display. Set `display=True` or pass a message into `interpreter.chat(message)`.
```
### Reproduce
1. open terminal
2. run cmd : "interpreter"
3. ask something like "can you change the color of my terminal? provide me with a few different options, and let me choose using a keystroke (1,2,3)?"
4. Wait for answers
5. While answering, the program crashes
### Expected behavior
It should not crash.
### Screenshots
_No response_
### Open Interpreter version
0.1.5
### Python version
3.10.12
### Operating System name and version
Ubuntu 22
### Additional context
_No response_ | null | null | null | {'base_commit': 'dee41b6932a0d9b5569b1abf9144b7ffd8c3c7ad', 'files': [{'path': 'interpreter/core/core.py', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"interpreter/core/core.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
OpenInterpreter | open-interpreter | 1bb7b19eeb4264f0d7b6410409af6f1cdbf31f3d | https://github.com/OpenInterpreter/open-interpreter/issues/15 | Error: cannot import name 'cli' from 'interpreter' | ```console
╰─$ uname -a
Linux lab 6.2.0-26-generic #26~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Jul 13 16:27:29 UTC 2 x86_64 x86_64 x86_64 GNU/Linux
╰─$ pip --version 1 ↵
pip 22.0.2 from /usr/lib/python3/dist-packages/pip (python 3.10)
╰─$ interpreter
Traceback (most recent call last):
  File "/usr/local/bin/interpreter", line 5, in <module>
    from interpreter import cli
ImportError: cannot import name 'cli' from 'interpreter' (unknown location)
``` | null | null | null | {'base_commit': '1bb7b19eeb4264f0d7b6410409af6f1cdbf31f3d', 'files': [{'path': 'interpreter/interpreter.py', 'Loc': {'(None, None, None)': {'mod': [1]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"interpreter/interpreter.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
OpenInterpreter | open-interpreter | 36ec07125efec86594c91e990f68e0ab214e7edf | https://github.com/OpenInterpreter/open-interpreter/issues/1548 | run interpreter --model ollama/qwen2.5:3b error | ### Bug Description
When executing the command `interpreter --model ollama/qwen2.5:3b`, an error occurs with the specific error message:
```
json.decoder.JSONDecodeError: Unterminated string starting at: line 1 column 2 (char 1)
```
This error indicates that there is an unterminated string while trying to parse a JSON string, which usually happens when the response data is incomplete or improperly formatted.
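The traceback below shows `json.loads` being called on a streaming chunk, which fails whenever a chunk ends mid-string. A common remedy is to buffer the stream and only parse complete newline-delimited lines — a sketch of the idea, not litellm's actual fix:

```python
import json


def iter_json_lines(chunks):
    """Yield parsed objects from a stream of text chunks, buffering partial lines."""
    buf = ""
    for chunk in chunks:
        buf += chunk
        while "\n" in buf:
            line, buf = buf.split("\n", 1)
            if line.strip():
                yield json.loads(line)
    if buf.strip():  # trailing object without a final newline
        yield json.loads(buf)
```

Feeding this the chunk boundary from the error (`'{"'` split mid-string) would simply hold the fragment in the buffer until the rest arrives, instead of raising `JSONDecodeError`.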
### Error Log
```plaintext
C:\Users\unsia>interpreter --model ollama/qwen2.5:3b
▌ Model set to ollama/qwen2.5:3b
Loading qwen2.5:3b...
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "C:\Users\unsia\AppData\Local\Programs\Python\Python311\Scripts\interpreter.exe\__main__.py", line 7, in <module>
File "C:\Users\unsia\AppData\Local\Programs\Python\Python311\Lib\site-packages\interpreter\terminal_interface\start_terminal_interface.py", line 612, in main
start_terminal_interface(interpreter)
File "C:\Users\unsia\AppData\Local\Programs\Python\Python311\Lib\site-packages\interpreter\terminal_interface\start_terminal_interface.py", line 560, in start_terminal_interface
validate_llm_settings(
File "C:\Users\unsia\AppData\Local\Programs\Python\Python311\Lib\site-packages\interpreter\terminal_interface\validate_llm_settings.py", line 109, in validate_llm_settings
interpreter.llm.load()
File "C:\Users\unsia\AppData\Local\Programs\Python\Python311\Lib\site-packages\interpreter\core\llm\llm.py", line 397, in load
self.interpreter.computer.ai.chat("ping")
File "C:\Users\unsia\AppData\Local\Programs\Python\Python311\Lib\site-packages\interpreter\core\computer\ai\ai.py", line 134, in chat
for chunk in self.computer.interpreter.llm.run(messages):
File "C:\Users\unsia\AppData\Local\Programs\Python\Python311\Lib\site-packages\interpreter\core\llm\llm.py", line 322, in run
yield from run_tool_calling_llm(self, params)
File "C:\Users\unsia\AppData\Local\Programs\Python\Python311\Lib\site-packages\interpreter\core\llm\run_tool_calling_llm.py", line 178, in run_tool_calling_llm
for chunk in llm.completions(**request_params):
File "C:\Users\unsia\AppData\Local\Programs\Python\Python311\Lib\site-packages\interpreter\core\llm\llm.py", line 466, in fixed_litellm_completions
raise first_error # If all attempts fail, raise the first error
^^^^^^^^^^^^^^^^^
File "C:\Users\unsia\AppData\Local\Programs\Python\Python311\Lib\site-packages\interpreter\core\llm\llm.py", line 443, in fixed_litellm_completions
yield from litellm.completion(**params)
File "C:\Users\unsia\AppData\Local\Programs\Python\Python311\Lib\site-packages\litellm\llms\ollama.py", line 455, in ollama_completion_stream
raise e
File "C:\Users\unsia\AppData\Local\Programs\Python\Python311\Lib\site-packages\litellm\llms\ollama.py", line 433, in ollama_completion_stream
function_call = json.loads(response_content)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\unsia\AppData\Local\Programs\Python\Python311\Lib\json\__init__.py", line 346, in loads
return _default_decoder.decode(s)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\unsia\AppData\Local\Programs\Python\Python311\Lib\json\decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\unsia\AppData\Local\Programs\Python\Python311\Lib\json\decoder.py", line 353, in raw_decode
obj, end = self.scan_once(s, idx)
^^^^^^^^^^^^^^^^^^^^^^
json.decoder.JSONDecodeError: Unterminated string starting at: line 1 column 2 (char 1)
```
### Analysis Process
- **Call Stack**: The error occurs in the file `litellm/llms/ollama.py` when attempting to parse the model's response using `json.loads(response_content)`.
- **Potential Causes**:
- The format of the data returned by the model may not meet expectations.
- It might be due to network issues, server-side problems, or the model's response format being non-compliant, leading to empty or partial responses from the model.
### Suggested Solutions
1. **Check the Model's Response**: Ensure that the API response from the model is complete and properly formatted as JSON. Debugging can be facilitated by printing out `response_content`.
2. **Catch Errors and Print More Information**: Before calling `json.loads()`, add checks to ensure that `response_content` is indeed a valid JSON string.
Example Code:
```python
import json

if response_content:
try:
parsed_data = json.loads(response_content)
except json.JSONDecodeError as e:
print(f"JSON Decode Error: {e}")
print(f"Response content: {response_content}")
else:
print("Empty response content")
```
### Steps to Reproduce
To be filled with specific steps to reproduce this issue.
### Expected Behavior
To be filled with the expected behavior from the user's perspective.
### Environment Information
- **Open Interpreter Version**: Open Interpreter 0.4.3 Developer Preview
- **Python Version**: Python 3.11.0
- **Operating System**: Windows 11
| null | null | null | {'base_commit': '36ec07125efec86594c91e990f68e0ab214e7edf', 'files': [{'path': 'docs/usage/terminal/arguments.mdx', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Doc\n"
} | {
"code": [],
"doc": [
"docs/usage/terminal/arguments.mdx"
],
"test": [],
"config": [],
"asset": []
} | null | |
OpenInterpreter | open-interpreter | 8fb4668dc7451ac58ac57ba587ed77194469f739 | https://github.com/OpenInterpreter/open-interpreter/issues/1175 | Error when importing interpreter | ### Describe the bug
I have the following error when I try to import interpreter:
```
Traceback (most recent call last):
File "/home/seba/workspace/AutoProgrammer/interpreter.py", line 1, in <module>
from interpreter import interpreter
File "/home/seba/workspace/AutoProgrammer/interpreter.py", line 1, in <module>
from interpreter import interpreter
ImportError: cannot import name 'interpreter' from partially initialized module 'interpreter' (most likely due to a circular import
```
I'm not a Python expert, but I can't figure out what I did wrong. I installed open-interpreter with pip, with pip in a venv, and with conda, but nothing helps. Other libs like crewai have no problem with imports.
### Reproduce
1. install open-interpreter
2. import into a .py file: `from interpreter import interpreter`
3. run file
### Expected behavior
Import works
### Screenshots
_No response_
### Open Interpreter version
0.2.4
### Python version
3.11.8
### Operating System name and version
Fedora
### Additional context
Tested with open-interpreter `0.2.0` and `0.2.4`, python `3.10` and `3.11` | null | null | null | {} | [
{
"path": "/home/seba/workspace/AutoProgrammer/interpreter.py"
}
] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "3",
"info_type": "Code"
} | {
"code": [
"/home/seba/workspace/AutoProgrammer/interpreter.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
abi | screenshot-to-code | 3bc25680529cdb6b5d407c8332e820aeb2e0b948 | https://github.com/abi/screenshot-to-code/issues/66 | WebSocket error code |
"Your demonstration website has the same error; please take a look."
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Config"
} | {
"code": [],
"doc": [
"docker-compose.yml"
],
"test": [],
"config": [],
"asset": []
} | null | |
abi | screenshot-to-code | 2f88cf9b2568163954ecc7c20ef9879263bfc9ba | https://github.com/abi/screenshot-to-code/issues/476 | Error generating code. Please contact support. | I have already started the project (both frontend and backend), but when placing the image I get the following error: "Error generating code. Please contact support." Could you help me with this problem?

| null | null | null | {} | [] | [
".env"
] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "1",
"info_type": "Other\nEnvironment variable\nA misunderstanding of a loc in the documentation"
} | {
"code": [],
"doc": [],
"test": [],
"config": [
".env"
],
"asset": []
} | null | |
abi | screenshot-to-code | 4e30b207c1ee9ddad05a37c31a11ac5a182490b7 | https://github.com/abi/screenshot-to-code/issues/270 | Error configuring ANTHROPIC API KEY in .env file | I added "ANTHROPIC_API_KEY=s****" to the .env file
"No Anthropic API key found. Please add the environment variable ANTHROPIC_API_KEY to backend/.env"
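For illustration only, a sketch of the `KEY=VALUE` format a `.env` file such as `backend/.env` is expected to contain. This is a hypothetical stdlib-only loader written for this example, not the project's actual `backend/config.py` logic:

```python
import os
from pathlib import Path

def load_env(path: str) -> None:
    """Minimal .env loader: KEY=VALUE lines; comments and blanks are skipped."""
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        # setdefault: a variable already set in the environment wins
        os.environ.setdefault(key.strip(), value.strip())

# Demo file standing in for backend/.env
Path("demo.env").write_text("# backend/.env\nANTHROPIC_API_KEY=sk-demo\n")
load_env("demo.env")
print(os.environ.get("ANTHROPIC_API_KEY"))
```

If the key still reports as missing, a common cause is running the backend from a directory where the `.env` file is not found, or a stray space around the `=` sign.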
| null | null | null | {'base_commit': '4e30b207c1ee9ddad05a37c31a11ac5a182490b7', 'files': [{'path': 'backend/config.py', 'Loc': {'(None, None, None)': {'mod': [6]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "comment",
"loc_scope": "",
"info_type": "Code"
} | {
"code": [
"backend/config.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
abi | screenshot-to-code | 226af5bf4183539c97c7bab825cb9324b8c570c0 | https://github.com/abi/screenshot-to-code/issues/136 | error generating code | Error generating code. Check the Developer Console AND the backend logs for details. Feel free to open a Github issue.
While hitting the URL and pasting the screenshot, it shows the below error. Am I doing it correctly?
<img width="940" alt="Screenshot 2023-11-30 212304" src="https://github.com/abi/screenshot-to-code/assets/152517537/38d9b1af-125b-45d4-9c4a-cbb600f5ec7d">
<img width="940" alt="Screenshot 2023-11-30 212304" src="https://github.com/abi/screenshot-to-code/assets/152517537/9c5bf85b-8109-44f7-842d-ec69dd2c49d0">
| null | null | null | {'base_commit': '226af5bf4183539c97c7bab825cb9324b8c570c0', 'files': [{'path': 'Troubleshooting.md', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "3",
"info_type": "Code"
} | {
"code": [],
"doc": [
"Troubleshooting.md"
],
"test": [],
"config": [],
"asset": []
} | null | |
abi | screenshot-to-code | b9076120dc7f610ae4c9d0fdb2b3fbea39f371f1 | https://github.com/abi/screenshot-to-code/issues/452 | build failed | **Describe the bug**
Docker container Exited for `screenshot-to-code-main-frontend-1`
**To Reproduce**
OS: Ubuntu 22.04.4 LTS
Docker Compose version v2.28.1
Build version: (commit id) b9076120dc7f610ae4c9d0fdb2b3fbea39f371f1
**Screenshots of backend AND frontend terminal logs**
Nginx conf
```
location /screenshot {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_send_timeout 1000;
proxy_read_timeout 1000;
send_timeout 1000;
client_max_body_size 5M;
proxy_pass http://127.0.0.1:5173;
}
```
```
~ docker logs --tail 444 screenshot-to-code-main-frontend-1
yarn run v1.22.22
$ vite --host 0.0.0.0
VITE v4.5.0 ready in 1390 ms
➜ Local: http://localhost:5173/
➜ Network: http://172.20.0.3:5173/
ERROR
[TypeScript] Found 0 errors. Watching for file changes.
WARN Browserslist: caniuse-lite is outdated. Please run:
npx update-browserslist-db@latest
Why you should do it regularly: https://github.com/browserslist/update-db#readme
file:///app/tailwind.config.js:2
module.exports = {
^
ReferenceError: module is not defined
at file:///app/tailwind.config.js:2:1
at ModuleJobSync.runSync (node:internal/modules/esm/module_job:395:35)
at ModuleLoader.importSyncForRequire (node:internal/modules/esm/loader:329:47)
at loadESMFromCJS (node:internal/modules/cjs/loader:1414:24)
at Module._compile (node:internal/modules/cjs/loader:1547:5)
at Object..js (node:internal/modules/cjs/loader:1677:16)
at Module.load (node:internal/modules/cjs/loader:1318:32)
at Function._load (node:internal/modules/cjs/loader:1128:12)
at TracingChannel.traceSync (node:diagnostics_channel:322:14)
at wrapModuleLoad (node:internal/modules/cjs/loader:219:24)
at Module.require (node:internal/modules/cjs/loader:1340:12)
at require (node:internal/modules/helpers:138:16)
at /app/node_modules/tailwindcss/lib/lib/load-config.js:35:27
at loadConfig (/app/node_modules/tailwindcss/lib/lib/load-config.js:39:6)
at getTailwindConfig (/app/node_modules/tailwindcss/lib/lib/setupTrackingContext.js:71:116)
at /app/node_modules/tailwindcss/lib/lib/setupTrackingContext.js:100:92
at /app/node_modules/tailwindcss/lib/processTailwindFeatures.js:48:11
at plugins (/app/node_modules/tailwindcss/lib/plugin.js:38:69)
at LazyResult.runOnRoot (/app/node_modules/postcss/lib/lazy-result.js:329:16)
at LazyResult.runAsync (/app/node_modules/postcss/lib/lazy-result.js:258:26)
at LazyResult.async (/app/node_modules/postcss/lib/lazy-result.js:160:30)
at LazyResult.then (/app/node_modules/postcss/lib/lazy-result.js:404:17)
Node.js v22.12.0
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
```

| null | null | null | {'base_commit': 'b9076120dc7f610ae4c9d0fdb2b3fbea39f371f1', 'files': [{'path': 'frontend/tailwind.config.js', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"frontend/tailwind.config.js"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
abi | screenshot-to-code | 214163b0e02176333b5543740cf6262e5da99602 | https://github.com/abi/screenshot-to-code/issues/268 | model evaluation method | How can one evaluate the performance of the model on generalized data, for example by comparing the original screenshots with the generated results? Are there any metrics?
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Doc"
} | {
"code": [],
"doc": [
"blog/evaluating-claude.md"
],
"test": [],
"config": [],
"asset": []
} | null | |
abi | screenshot-to-code | b9076120dc7f610ae4c9d0fdb2b3fbea39f371f1 | https://github.com/abi/screenshot-to-code/issues/443 | ReferenceError: module is not defined | When running the frontend yarn dev command, I get the error below.
Steps to reproduce the behavior:
1. Go to frontend folder
2. execute: `yarn`
3. execute: `yarn dev`
Immediately after executing the yarn dev command, I get a message that says:
```
ERROR 16:31:02
[TypeScript] Found 0 errors. Watching for file changes.
```
Then when I navigate to http://localhost:5173/, it crashes with the following output:
```
(base) user@192 frontend % yarn dev
yarn run v1.22.22
warning ../../../package.json: No license field
$ vite
16:31:00
VITE v4.5.0 ready in 544 ms
➜ Local: http://localhost:5173/ 16:31:00
➜ Network: use --host to expose 16:31:00
➜ press h to show help 16:31:00
ERROR 16:31:02
[TypeScript] Found 0 errors. Watching for file changes.
WARN Browserslist: caniuse-lite is outdated. Please run: 16:31:37
npx update-browserslist-db@latest
Why you should do it regularly: https://github.com/browserslist/update-db#readme
ERROR (node:91140) ExperimentalWarning: CommonJS module /Users/user/Desktop/screenshot-to-code/frontend/node_modules/tailwindcss/lib/lib/load-config.js is loading ES Module /Users/user/Desktop/screenshot-to-code/frontend/tailwind.config.js using require().
Support for loading ES Module in require() is an experimental feature and might change at any time
(Use `node --trace-warnings ...` to show where the warning was created)
file:///Users/user/Desktop/screenshot-to-code/frontend/tailwind.config.js:2
module.exports = {
^
ReferenceError: module is not defined
at file:///Users/user/Desktop/screenshot-to-code/frontend/tailwind.config.js:2:1
at ModuleJobSync.runSync (node:internal/modules/esm/module_job:395:35)
at ModuleLoader.importSyncForRequire (node:internal/modules/esm/loader:329:47)
at loadESMFromCJS (node:internal/modules/cjs/loader:1376:24)
at Module._compile (node:internal/modules/cjs/loader:1528:5)
at Object..js (node:internal/modules/cjs/loader:1698:10)
at Module.load (node:internal/modules/cjs/loader:1303:32)
at Function._load (node:internal/modules/cjs/loader:1117:12)
at TracingChannel.traceSync (node:diagnostics_channel:322:14)
at wrapModuleLoad (node:internal/modules/cjs/loader:218:24)
at Module.require (node:internal/modules/cjs/loader:1325:12)
at require (node:internal/modules/helpers:136:16)
at /Users/user/Desktop/screenshot-to-code/frontend/node_modules/tailwindcss/lib/lib/load-config.js:35:27
at loadConfig (/Users/user/Desktop/screenshot-to-code/frontend/node_modules/tailwindcss/lib/lib/load-config.js:39:6)
at getTailwindConfig (/Users/user/Desktop/screenshot-to-code/frontend/node_modules/tailwindcss/lib/lib/setupTrackingContext.js:71:116)
at /Users/user/Desktop/screenshot-to-code/frontend/node_modules/tailwindcss/lib/lib/setupTrackingContext.js:100:92
at /Users/user/Desktop/screenshot-to-code/frontend/node_modules/tailwindcss/lib/processTailwindFeatures.js:48:11
at plugins (/Users/user/Desktop/screenshot-to-code/frontend/node_modules/tailwindcss/lib/plugin.js:38:69)
at LazyResult.runOnRoot (/Users/user/Desktop/screenshot-to-code/frontend/node_modules/postcss/lib/lazy-result.js:329:16)
at LazyResult.runAsync (/Users/user/Desktop/screenshot-to-code/frontend/node_modules/postcss/lib/lazy-result.js:258:26)
at LazyResult.async (/Users/user/Desktop/screenshot-to-code/frontend/node_modules/postcss/lib/lazy-result.js:160:30)
at LazyResult.then (/Users/user/Desktop/screenshot-to-code/frontend/node_modules/postcss/lib/lazy-result.js:404:17)
Node.js v23.3.0
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
```
Edit: I am running MacOS 15.1 M2 chip.
Edit 2: I only set OpenAI key, I do not intend to use both APIs. | null | null | null | {'base_commit': 'b9076120dc7f610ae4c9d0fdb2b3fbea39f371f1', 'files': [{'path': 'frontend/tailwind.config.js', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"frontend/tailwind.config.js"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
abi | screenshot-to-code | 1f08d71d4dbc614b6b2eaaddb6f8d5858ca6aa5b | https://github.com/abi/screenshot-to-code/issues/132 | Why Connection closed 1006 | 


| null | null | null | {'base_commit': '1f08d71d4dbc614b6b2eaaddb6f8d5858ca6aa5b', 'files': [{'path': 'backend/main.py', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"backend/main.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
abi | screenshot-to-code | 689783eabd552151fa511e44cba90c14f3ee4dcd | https://github.com/abi/screenshot-to-code/issues/83 | code error | Hi, I tried the [online version](https://picoapps.xyz/free-tools/screenshot-to-code) of your tool with my API key, but I got the error shown in the following screenshot

which returns this in the console:
```JS
WebSocket error code CloseEvent {isTrusted: true, wasClean: false, code: 1006, reason: '', type: 'close', …}isTrusted: truebubbles: falsecancelBubble: falsecancelable: falsecode: 1006composed: falsecurrentTarget: WebSocket {url: 'wss://backend-screenshot-to-code.onrender.com/generate-code', readyState: 3, bufferedAmount: 0, onopen: null, onerror: null, …}defaultPrevented: falseeventPhase: 0reason: ""returnValue: truesrcElement: WebSocket {url: 'wss://backend-screenshot-to-code.onrender.com/generate-code', readyState: 3, bufferedAmount: 0, onopen: null, onerror: null, …}target: WebSocket {url: 'wss://backend-screenshot-to-code.onrender.com/generate-code', readyState: 3, bufferedAmount: 0, onopen: null, onerror: null, …}timeStamp: 70399.80000001192type: "close"wasClean: false[[Prototype]]: CloseEventcode: (...)reason: (...)wasClean: (...)constructor: ƒ CloseEvent()Symbol(Symbol.toStringTag): "CloseEvent"bubbles: (...)cancelBubble: (...)cancelable: (...)composed: (...)currentTarget: (...)defaultPrevented: (...)eventPhase: (...)returnValue: (...)srcElement: (...)target: (...)timeStamp: (...)type: (...)get code: ƒ code()get reason: ƒ reason()get wasClean: ƒ wasClean()[[Prototype]]: Event
(anonymous) @ index-9af3e78e.js:225
```
<img width="946" alt="image" src="https://github.com/abi/screenshot-to-code/assets/482210/b8403fbe-fc6b-479d-92ea-5f70610b3d6c">
Any idea on this topic?
david
| null | null | null | {'base_commit': '689783eabd552151fa511e44cba90c14f3ee4dcd', 'files': [{'path': 'README.md', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Doc"
} | {
"code": [],
"doc": [
"README.md"
],
"test": [],
"config": [],
"asset": []
} | null | |
abi | screenshot-to-code | 7d6fde2deafa014dc1a90c3b1dcb2ed88680a2ff | https://github.com/abi/screenshot-to-code/issues/1 | Error: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte | Hello, thank you for your contribution, I am having the above problem, can you help me?
` File "<frozen codecs>", line 322, in decode
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte` | null | null | null | {} | [] | [
".env"
] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "1",
"info_type": "Other\nEnvironment variable"
} | {
"code": [],
"doc": [],
"test": [],
"config": [
".env"
],
"asset": []
} | null | |
abi | screenshot-to-code | fcd305d0d26e7ef7b93dd605cbd5ed0e1a5a5e9c | https://github.com/abi/screenshot-to-code/issues/150 | Error generating code. Check the Developer Console AND the backend logs for details | My ChatGPT has access to GPT-Vision, and the web app loads well, but when I upload an image it returns this error: 'Error generating code. Check the Developer Console AND the backend logs for details'
<img width="466" alt="error" src="https://github.com/abi/screenshot-to-code/assets/100529823/97c337b7-de54-45f9-8def-f984ade50a6d">
| null | null | null | {'base_commit': 'fcd305d0d26e7ef7b93dd605cbd5ed0e1a5a5e9c', 'files': [{'path': 'docker-compose.yml', 'Loc': {'(None, None, 20)': {'mod': [20]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [],
"doc": [
"docker-compose.yml"
],
"test": [],
"config": [],
"asset": []
} | null | |
pytorch | pytorch | 4622b3395276b37e10141fab43ffea33941ca0c2 | https://github.com/pytorch/pytorch/issues/2384 | How the grad is transferred between layer | Consider a simple example:
```python
import torch
from torch.autograd import Variable
input = Variable(torch.randn(20, 3, 28, 28), requires_grad=True)
m = torch.nn.Conv2d(3, 16, 5)
output = m(input)
loss = torch.sum(output)# define loss to perform backprop
m.zero_grad()
loss.backward()
print(type(input))
print(input.grad.size())
print(type(output))
print(output.grad)
```
the output is:
```
<class 'torch.autograd.variable.Variable'>
torch.Size([20, 3, 28, 28])
<class 'torch.autograd.variable.Variable'>
None
```
I find that `output.grad` is `None`. I don't know how `input.grad` is calculated without `output.grad`,
and I want to know how to get the values of `output.grad`.
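For reference, a minimal sketch of the usual answer, written against a modern PyTorch where `Variable` is merged into `Tensor`: gradients of non-leaf tensors are discarded by default, and calling `retain_grad()` on the intermediate tensor before `backward()` asks autograd to keep them.

```python
import torch

x = torch.randn(20, 3, 28, 28, requires_grad=True)
m = torch.nn.Conv2d(3, 16, 5)
out = m(x)
out.retain_grad()   # keep the gradient of this non-leaf tensor
loss = out.sum()
loss.backward()
print(out.grad.shape)  # populated after backward, same shape as `out`
```

Internally, autograd passes the gradient of the loss w.r.t. `out` to the convolution's backward function to compute `input.grad`; it simply does not store that intermediate gradient on `out` unless asked to.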
thanks! | null | null | null | {'base_commit': '4622b3395276b37e10141fab43ffea33941ca0c2', 'files': [{'path': 'torch/autograd/variable.py', 'Loc': {"('Variable', 'retain_grad', 236)": {'mod': []}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"torch/autograd/variable.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
pytorch | pytorch | 2abcafcfd8beb4f6a22e08532d58f9f09c490f0f | https://github.com/pytorch/pytorch/issues/96983 | module: binaries
triaged
module: arm | PyTorch 2.0 aarch64 wheels are missing the mkldnn+acl backend support | ### 🐛 Describe the bug
PyTorch 2.0 aarch64 wheels are missing the mkldnn+acl backend support, whereas PyTorch 1.13.0 had support.
Solution:
the wheels need to be built with the `--enable-mkldnn` option while building them from the pytorch/builder repo.
example command for pytorch wheel builder script:
`./build_aarch64_wheel.py --python-version 3.8 --use-docker --keep-running --os ubuntu20_04 --enable-mkldnn --branch release/2.0`
To reproduce the issue, create a c6g or c7g instance on AWS EC2, and in the below output look for `USE_MKLDNN=`; this was ON for PyTorch 1.13.0 but OFF for PyTorch 2.0.0.
Non-working scenario:
```
pip install torch==2.0.0
time python3 -c "import torch; torch.set_num_threads(8); print(torch.__version__, torch.__config__.show(), torch.get_num_threads());a=torch.rand(100, 100, 100); b=torch.rand(100,100, 100); [torch.bmm(a,b).sum() for i in range(1000)]"
2.0.0 PyTorch built with:
- GCC 10.2
- C++ Version: 201703
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- LAPACK is enabled (usually provided by MKL)
- NNPACK is enabled
- CPU capability usage: NO AVX
- Build settings: BLAS_INFO=open, BUILD_TYPE=Release, CXX_COMPILER=/opt/rh/devtoolset-10/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DLIBKINETO_NOROCTRACER -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=open, TORCH_DISABLE_GPU_ASSERTS=ON, TORCH_VERSION=2.0.0, USE_CUDA=OFF, USE_CUDNN=OFF, USE_EIGEN_FOR_BLAS=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=OFF, USE_MKLDNN=OFF, USE_MPI=OFF, USE_NCCL=OFF, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF,
```
Working scenario:
```
pip3 install torch==1.13.0
time python3 -c "import torch; torch.set_num_threads(8); print(torch.__version__, torch.__config__.show(), torch.get_num_threads());a=torch.rand(100, 100, 100); b=torch.rand(100,100, 100); [torch.bmm(a,b).sum() for i in range(1000)]"
1.13.0 PyTorch built with:
- GCC 10.2
- C++ Version: 201402
- Intel(R) MKL-DNN v2.6.0 (Git Hash 52b5f107dd9cf10910aaa19cb47f3abf9b349815)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- LAPACK is enabled (usually provided by MKL)
- NNPACK is enabled
- CPU capability usage: NO AVX
- Build settings: BLAS_INFO=open, BUILD_TYPE=Release, CXX_COMPILER=/opt/rh/devtoolset-10/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=open, TORCH_VERSION=1.13.0, USE_CUDA=OFF, USE_CUDNN=OFF, USE_EIGEN_FOR_BLAS=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=OFF, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=OFF, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF,
```
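A shorter check than parsing the full `torch.__config__.show()` output is the public `torch.backends` API, which reports whether the installed wheel was compiled with the oneDNN (mkldnn) backend:

```python
import torch

# True only for wheels built with USE_MKLDNN=ON (e.g. via --enable-mkldnn)
print(torch.backends.mkldnn.is_available())
```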
### Versions
```
Collecting environment information...
PyTorch version: 2.0.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.5 LTS (aarch64)
GCC version: (Ubuntu 10.3.0-1ubuntu1~20.04) 10.3.0
Clang version: Could not collect
CMake version: version 3.25.2
Libc version: glibc-2.31
Python version: 3.8.10 (default, Nov 14 2022, 12:59:47) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-1028-aws-aarch64-with-glibc2.29
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 1
Core(s) per socket: 16
Socket(s): 1
NUMA node(s): 1
Vendor ID: ARM
Model: 1
Stepping: r1p1
BogoMIPS: 2100.00
L1d cache: 1 MiB
L1i cache: 1 MiB
L2 cache: 16 MiB
L3 cache: 32 MiB
NUMA node0 CPU(s): 0-15
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Mitigation; CSV2, BHB
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm ssbs paca pacg dcpodp svei8mm svebf16 i8mm bf16 dgh rng
Versions of relevant libraries:
[pip3] numpy==1.24.2
[pip3] torch==2.0.0
[pip3] torchvision==0.14.1
[conda] Could not collect
```
cc @ezyang @seemethere @malfet | null | null | null | {'base_commit': '2abcafcfd8beb4f6a22e08532d58f9f09c490f0f', 'files': [{'path': '.ci/aarch64_linux/build_aarch64_wheel.py', 'Loc': {'(None, None, None)': {'mod': [8]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
".ci/aarch64_linux/build_aarch64_wheel.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
pytorch | pytorch | 2dff0b3e918530719f7667cb31541f036a25e3f2 | https://github.com/pytorch/pytorch/issues/48435 | AttributeError: module 'torch.cuda' has no attribute 'comm' | ## ❓ Questions and Help
I'm using torch 1.7.0 and get this kind of error.
My torch is installed via:
pip install torch==1.7.0+cu101 torchvision==0.8.1+cu101 torchaudio===0.7.0 -f https://download.pytorch.org/whl/torch_stable.html
My OS is Win10. | null | null | https://github.com/facebookresearch/InterHand2.6M/commit/874eb9f740ef54c275433d1bd27f8fb8f6a8f17d | {} | [] | [] | [
{
"org": "facebookresearch",
"pro": "InterHand2.6M",
"path": [
"{'base_commit': '874eb9f740ef54c275433d1bd27f8fb8f6a8f17d', 'files': [{'path': 'common/nets/module.py', 'status': 'modified', 'Loc': {\"('PoseNet', 'soft_argmax_1d', 41)\": {'mod': [43]}}}]}"
]
}
] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "commit",
"loc_scope": "2",
"info_type": "Code"
} | {
"code": [
"common/nets/module.py"
],
"doc": [],
"test": [],
"config": [],
"asset": [
"InterHand2.6M"
]
} | null | |
xtekky | gpt4free | e8f6013d0349229fd8f7d298952cfe56fc4b8761 | https://github.com/xtekky/gpt4free/issues/2070 | bug
stale | Liaobots and You don't work | Liaobots and You do not work; they give the following errors:
```
Liaobots: ResponseStatusError: Response 500: Error
```
```
You: ResponseStatusError: Response 401: {"status_code":401,"request_id":"request-id-live-183191e7-adc1-4838-8e29-6e0c5c3ca048","error_type":"endpoint_not_authorized_for_sdk","error_message":"The project owner has not authorized the SDK to call this endpoint. Please enable it in the dashboard to continue: https://stytch.com/dashboard/sdk-configuration.","error_url":"https://stytch.com/docs/api/errors/401#endpoint_not_authorized_for_sdk"}
```
@xtekky @hlohaus | null | null | null | {'base_commit': 'e8f6013d0349229fd8f7d298952cfe56fc4b8761', 'files': [{'path': 'g4f/Provider/Liaobots.py', 'Loc': {"('Liaobots', 'create_async_generator', 111)": {'mod': [149]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"g4f/Provider/Liaobots.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
xtekky | gpt4free | fa2d608822540c9b73350bfa036e8822ade4e23f | https://github.com/xtekky/gpt4free/issues/2305 | stale | ValueError: Unknown model: dall-e-3 | ```
C:\Users\MAX\Desktop>pip install -U g4f[all]
Defaulting to user installation because normal site-packages is not writeable
Requirement already satisfied: g4f[all] in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (0.3.3.2)
Requirement already satisfied: requests in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from g4f[all]) (2.32.3)
Requirement already satisfied: aiohttp in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from g4f[all]) (3.9.3)
Requirement already satisfied: brotli in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from g4f[all]) (1.1.0)
Requirement already satisfied: pycryptodome in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from g4f[all]) (3.20.0)
Requirement already satisfied: curl-cffi>=0.6.2 in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from g4f[all]) (0.7.3)
Requirement already satisfied: cloudscraper in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from g4f[all]) (1.2.71)
Requirement already satisfied: certifi in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from g4f[all]) (2024.8.30)
Requirement already satisfied: browser-cookie3 in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from g4f[all]) (0.19.1)
Requirement already satisfied: PyExecJS in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from g4f[all]) (1.5.1)
Requirement already satisfied: duckduckgo-search>=5.0 in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from g4f[all]) (6.3.2)
Requirement already satisfied: beautifulsoup4 in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from g4f[all]) (4.12.3)
Requirement already satisfied: pywebview in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from g4f[all]) (5.2)
Requirement already satisfied: platformdirs in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from g4f[all]) (4.2.2)
Requirement already satisfied: plyer in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from g4f[all]) (2.1.0)
Requirement already satisfied: cryptography in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from g4f[all]) (43.0.0)
Requirement already satisfied: aiohttp-socks in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from g4f[all]) (0.8.4)
Requirement already satisfied: pillow in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from g4f[all]) (10.2.0)
Requirement already satisfied: cairosvg in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from g4f[all]) (2.7.1)
Requirement already satisfied: werkzeug in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from g4f[all]) (3.0.1)
Requirement already satisfied: flask in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from g4f[all]) (3.0.2)
Requirement already satisfied: loguru in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from g4f[all]) (0.7.2)
Requirement already satisfied: fastapi in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from g4f[all]) (0.109.2)
Requirement already satisfied: uvicorn in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from g4f[all]) (0.27.0.post1)
Requirement already satisfied: nest-asyncio in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from g4f[all]) (1.6.0)
Requirement already satisfied: cffi>=1.12.0 in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from curl-cffi>=0.6.2->g4f[all]) (1.17.0)
Requirement already satisfied: typing-extensions in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from curl-cffi>=0.6.2->g4f[all]) (4.12.2)
Requirement already satisfied: click>=8.1.7 in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from duckduckgo-search>=5.0->g4f[all]) (8.1.7)
Requirement already satisfied: primp>=0.6.4 in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from duckduckgo-search>=5.0->g4f[all]) (0.6.4)
Requirement already satisfied: aiosignal>=1.1.2 in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from aiohttp->g4f[all]) (1.3.1)
Requirement already satisfied: attrs>=17.3.0 in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from aiohttp->g4f[all]) (23.2.0)
Requirement already satisfied: frozenlist>=1.1.1 in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from aiohttp->g4f[all]) (1.4.1)
Requirement already satisfied: multidict<7.0,>=4.5 in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from aiohttp->g4f[all]) (6.0.5)
Requirement already satisfied: yarl<2.0,>=1.0 in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from aiohttp->g4f[all]) (1.9.4)
Requirement already satisfied: python-socks<3.0.0,>=2.4.3 in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from python-socks[asyncio]<3.0.0,>=2.4.3->aiohttp-socks->g4f[all]) (2.4.4)
Requirement already satisfied: soupsieve>1.2 in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from beautifulsoup4->g4f[all]) (2.5)
Requirement already satisfied: lz4 in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from browser-cookie3->g4f[all]) (4.3.3)
Requirement already satisfied: pycryptodomex in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from browser-cookie3->g4f[all]) (3.20.0)
Requirement already satisfied: cairocffi in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from cairosvg->g4f[all]) (1.6.1)
Requirement already satisfied: cssselect2 in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from cairosvg->g4f[all]) (0.7.0)
Requirement already satisfied: defusedxml in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from cairosvg->g4f[all]) (0.7.1)
Requirement already satisfied: tinycss2 in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from cairosvg->g4f[all]) (1.2.1)
Requirement already satisfied: pyparsing>=2.4.7 in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from cloudscraper->g4f[all]) (3.1.2)
Requirement already satisfied: requests-toolbelt>=0.9.1 in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from cloudscraper->g4f[all]) (1.0.0)
Requirement already satisfied: charset-normalizer<4,>=2 in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from requests->g4f[all]) (3.3.2)
Requirement already satisfied: idna<4,>=2.5 in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from requests->g4f[all]) (3.6)
Requirement already satisfied: urllib3<3,>=1.21.1 in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from requests->g4f[all]) (2.1.0)
Requirement already satisfied: pydantic!=1.8,!=1.8.1,!=2.0.0,!=2.0.1,!=2.1.0,<3.0.0,>=1.7.4 in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from fastapi->g4f[all]) (2.6.1)
Requirement already satisfied: starlette<0.37.0,>=0.36.3 in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from fastapi->g4f[all]) (0.36.3)
Requirement already satisfied: Jinja2>=3.1.2 in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from flask->g4f[all]) (3.1.3)
Requirement already satisfied: itsdangerous>=2.1.2 in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from flask->g4f[all]) (2.1.2)
Requirement already satisfied: blinker>=1.6.2 in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from flask->g4f[all]) (1.7.0)
Requirement already satisfied: MarkupSafe>=2.1.1 in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from werkzeug->g4f[all]) (2.1.5)
Requirement already satisfied: colorama>=0.3.4 in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from loguru->g4f[all]) (0.4.6)
Requirement already satisfied: win32-setctime>=1.0.0 in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from loguru->g4f[all]) (1.1.0)
Requirement already satisfied: six>=1.10.0 in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from PyExecJS->g4f[all]) (1.16.0)
Requirement already satisfied: proxy-tools in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from pywebview->g4f[all]) (0.1.0)
Requirement already satisfied: bottle in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from pywebview->g4f[all]) (0.13.1)
Requirement already satisfied: pythonnet in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from pywebview->g4f[all]) (3.0.3)
Requirement already satisfied: h11>=0.8 in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from uvicorn->g4f[all]) (0.14.0)
Requirement already satisfied: pycparser in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from cffi>=1.12.0->curl-cffi>=0.6.2->g4f[all]) (2.22)
Requirement already satisfied: annotated-types>=0.4.0 in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from pydantic!=1.8,!=1.8.1,!=2.0.0,!=2.0.1,!=2.1.0,<3.0.0,>=1.7.4->fastapi->g4f[all]) (0.6.0)
Requirement already satisfied: pydantic-core==2.16.2 in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from pydantic!=1.8,!=1.8.1,!=2.0.0,!=2.0.1,!=2.1.0,<3.0.0,>=1.7.4->fastapi->g4f[all]) (2.16.2)
Requirement already satisfied: async-timeout>=3.0.1 in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from python-socks[asyncio]<3.0.0,>=2.4.3->aiohttp-socks->g4f[all]) (4.0.3)
Requirement already satisfied: anyio<5,>=3.4.0 in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from starlette<0.37.0,>=0.36.3->fastapi->g4f[all]) (4.2.0)
Requirement already satisfied: webencodings in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from cssselect2->cairosvg->g4f[all]) (0.5.1)
Requirement already satisfied: clr-loader<0.3.0,>=0.2.6 in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from pythonnet->pywebview->g4f[all]) (0.2.6)
Requirement already satisfied: sniffio>=1.1 in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from anyio<5,>=3.4.0->starlette<0.37.0,>=0.36.3->fastapi->g4f[all]) (1.3.0)
C:\Users\MAX\Desktop>
Traceback (most recent call last):
File "C:\Users\MAX\Desktop\gptimg.py", line 4, in <module>
response = client.images.generate(
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\MAX\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\g4f\client\client.py", line 421, in generate
return asyncio.run(self.async_generate(prompt, model, response_format=response_format, **kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.12_3.12.2032.0_x64__qbz5n2kfra8p0\Lib\asyncio\runners.py", line 194, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.12_3.12.2032.0_x64__qbz5n2kfra8p0\Lib\asyncio\runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.12_3.12.2032.0_x64__qbz5n2kfra8p0\Lib\asyncio\base_events.py", line 687, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "C:\Users\MAX\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\g4f\client\client.py", line 426, in async_generate
raise ValueError(f"Unknown model: {model}")
ValueError: Unknown model: dall-e-3
``` | null | null | null | {'base_commit': 'fa2d608822540c9b73350bfa036e8822ade4e23f', 'files': [{'path': 'g4f/models.py', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"g4f/models.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
xtekky | gpt4free | 1ade1d959cbc9aea7cf653bbe5b6c414ba486c97 | https://github.com/xtekky/gpt4free/issues/1292 | bug
stale | RecursionError: maximum recursion depth exceeded while calling a Python object | Ubuntu 22, g4f-0.1.9.0, pip installation method, python3.10
**Bug description**
G4F API has these errors after 5-10 requests. I have to restart constantly. It is very uncomfortable. This problem did not exist in the previous version.
**Errors**
```
RecursionError: maximum recursion depth exceeded in comparison
RecursionError: maximum recursion depth exceeded while calling a Python object
RuntimeError: RetryProvider failed:
You: RecursionError: maximum recursion depth exceeded
Chatgpt4Online: RecursionError: maximum recursion depth exceeded in comparison
ChatAnywhere: RecursionError: maximum recursion depth exceeded while encoding a JSON object
ChatgptX: RecursionError: maximum recursion depth exceeded in comparison
GptForLove: RuntimeUnavailableError: Could not find an available JavaScript runtime.
ChatBase: RecursionError: maximum recursion depth exceeded while encoding a JSON object
GptGo: RecursionError: maximum recursion depth exceeded while calling a Python object
```
**Traceback**
```
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/g4f/api/__init__.py", line 85, in chat_completions
response = g4f.ChatCompletion.create(
File "/usr/local/lib/python3.10/dist-packages/g4f/__init__.py", line 76, in create
return result if stream else ''.join(result)
File "/usr/local/lib/python3.10/dist-packages/g4f/Provider/retry_provider.py", line 59, in create_completion
self.raise_exceptions()
File "/usr/local/lib/python3.10/dist-packages/g4f/Provider/retry_provider.py", line 87, in raise_exceptions
raise RuntimeError("\n".join(["RetryProvider failed:"] + [
RuntimeError: RetryProvider failed:
ChatAnywhere: RecursionError: maximum recursion depth exceeded
ChatBase: RecursionError: maximum recursion depth exceeded
ChatgptX: RecursionError: maximum recursion depth exceeded
You: RecursionError: maximum recursion depth exceeded while calling a Python object
GptGo: RecursionError: maximum recursion depth exceeded
Chatgpt4Online: RecursionError: maximum recursion depth exceeded
GptForLove: RecursionError: maximum recursion depth exceeded
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/uvicorn/protocols/http/h11_impl.py", line 408, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "/usr/local/lib/python3.10/dist-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
return await self.app(scope, receive, send)
File "/usr/local/lib/python3.10/dist-packages/fastapi/applications.py", line 1106, in __call__
await super().__call__(scope, receive, send)
File "/usr/local/lib/python3.10/dist-packages/starlette/applications.py", line 122, in __call__
await self.middleware_stack(scope, receive, send)
File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/errors.py", line 184, in __call__
raise exc
File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/exceptions.py", line 79, in __call__
raise exc
File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "/usr/local/lib/python3.10/dist-packages/fastapi/middleware/asyncexitstack.py", line 20, in __call__
raise e
File "/usr/local/lib/python3.10/dist-packages/fastapi/middleware/asyncexitstack.py", line 17, in __call__
await self.app(scope, receive, send)
File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 718, in __call__
await route.handle(scope, receive, send)
File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 276, in handle
await self.app(scope, receive, send)
File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 66, in app
response = await func(request)
File "/usr/local/lib/python3.10/dist-packages/fastapi/routing.py", line 274, in app
raw_response = await run_endpoint_function(
File "/usr/local/lib/python3.10/dist-packages/fastapi/routing.py", line 191, in run_endpoint_function
return await dependant.call(**values)
File "/usr/local/lib/python3.10/dist-packages/g4f/api/__init__.py", line 91, in chat_completions
logging.exception(e)
File "/usr/lib/python3.10/logging/__init__.py", line 2113, in exception
error(msg, *args, exc_info=exc_info, **kwargs)
File "/usr/lib/python3.10/logging/__init__.py", line 2105, in error
root.error(msg, *args, **kwargs)
File "/usr/lib/python3.10/logging/__init__.py", line 1506, in error
self._log(ERROR, msg, args, **kwargs)
File "/usr/lib/python3.10/logging/__init__.py", line 1624, in _log
self.handle(record)
File "/usr/lib/python3.10/logging/__init__.py", line 1634, in handle
self.callHandlers(record)
File "/usr/lib/python3.10/logging/__init__.py", line 1696, in callHandlers
hdlr.handle(record)
File "/usr/lib/python3.10/logging/__init__.py", line 968, in handle
self.emit(record)
File "/usr/lib/python3.10/logging/__init__.py", line 1100, in emit
msg = self.format(record)
File "/usr/lib/python3.10/logging/__init__.py", line 943, in format
return fmt.format(record)
File "/usr/lib/python3.10/logging/__init__.py", line 686, in format
record.exc_text = self.formatException(record.exc_info)
File "/usr/lib/python3.10/logging/__init__.py", line 636, in formatException
traceback.print_exception(ei[0], ei[1], tb, None, sio)
File "/usr/lib/python3.10/traceback.py", line 120, in print_exception
for line in te.format(chain=chain):
File "/usr/local/lib/python3.10/dist-packages/exceptiongroup/_formatting.py", line 248, in format
yield from _ctx.emit(exc.format_exception_only())
File "/usr/local/lib/python3.10/dist-packages/exceptiongroup/_formatting.py", line 64, in emit
for text in text_gen:
File "/usr/local/lib/python3.10/dist-packages/exceptiongroup/_formatting.py", line 335, in format_exception_only
if isinstance(self.__notes__, collections.abc.Sequence):
File "/usr/lib/python3.10/abc.py", line 119, in __instancecheck__
return _abc_instancecheck(cls, instance)
RecursionError: maximum recursion depth exceeded in comparison
```
| null | null | null | {'base_commit': '1ade1d959cbc9aea7cf653bbe5b6c414ba486c97', 'files': [{'path': 'g4f/cli.py', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"g4f/cli.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
xtekky | gpt4free | c159eebd494b1aef06340429b7b62cdfb84f783d | https://github.com/xtekky/gpt4free/issues/2556 | bug | Errors when generating images in the following models: | Hi!
errors when generating images in the following models:
Response 404: The page could not be found
sdxl, playground-v2.5, sd-3
dall-e-3: Missing "_U" cookie
midjourney: Cannot connect to host image.pollinations.ai:443 ssl:True [SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:997)')] | null | null | null | {'base_commit': 'c159eebd494b1aef06340429b7b62cdfb84f783d', 'files': [{'path': 'projects/windows/main.py', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"projects/windows/main.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
xtekky | gpt4free | b7eee50930dbd782d7c068d1d29cd270b97bc741 | https://github.com/xtekky/gpt4free/issues/1710 | bug
stale | AttributeError: module 'g4f' has no attribute 'client' | **Bug description**
When trying to run script from Quickstart, i get this error.
Traceback (most recent call last):
File "C:/Users/samso/AppData/Local/Programs/Python/Python311/test.py", line 3, in <module>
engine = g4f.client.Client()
AttributeError: module 'g4f' has no attribute 'client'
**Environment**
Python version: 3.11.7 | null | null | null | {'base_commit': 'b7eee50930dbd782d7c068d1d29cd270b97bc741', 'files': [{'path': 'g4f/client/__init__.py', 'Loc': {}}, {'path': 'C:/Users/samso/AppData/Local/Programs/Python/Python311/test.py'}]} | [
{
"path": "C:/Users/samso/AppData/Local/Programs/Python/Python311/test.py"
}
] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "3",
"info_type": "Code"
} | {
"code": [
"g4f/client/__init__.py"
],
"doc": [],
"test": [
"C:/Users/samso/AppData/Local/Programs/Python/Python311/test.py"
],
"config": [],
"asset": []
} | null |
xtekky | gpt4free | 2a54c36043b9d87b96c4b7699ce194f8523479b8 | https://github.com/xtekky/gpt4free/issues/552 | bug | Unable to fetch the response, Please try again. | 
| null | null | null | {'base_commit': '2a54c36043b9d87b96c4b7699ce194f8523479b8', 'files': [{'path': 'gpt4free/you/__init__.py', 'Loc': {"('Completion', 'create', 22)": {'mod': [41]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"gpt4free/you/__init__.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
xtekky | gpt4free | c29487cdb522a2655ccff45bdfc33895ed4daf84 | https://github.com/xtekky/gpt4free/issues/2078 | bug | HuggingChat provider is not working - ResponseStatusError: Response 500 | ### Bug description
When I try to use the HuggingChat provider, having added a cookies/har file, I always get the same error: `An error occurred: HuggingChat: ResponseStatusError: Response 500:`
```
Using HuggingChat provider and CohereForAI/c4ai-command-r-plus model
INFO:werkzeug:192.168.80.1 - - [22/Jun/2024 16:31:48] "POST /backend-api/v2/conversation HTTP/1.1" 200 -
ERROR:root:Response 500:
Traceback (most recent call last):
File "/app/g4f/gui/server/api.py", line 177, in _create_response_stream
for chunk in ChatCompletion.create(**kwargs):
File "/app/g4f/providers/base_provider.py", line 223, in create_completion
yield loop.run_until_complete(await_callback(gen.__anext__))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/app/g4f/providers/base_provider.py", line 52, in await_callback
return await callback()
^^^^^^^^^^^^^^^^
File "/app/g4f/Provider/HuggingChat.py", line 99, in create_async_generator
await raise_for_status(response)
File "/app/g4f/requests/raise_for_status.py", line 28, in raise_for_status_async
raise ResponseStatusError(f"Response {response.status}: {message}")
g4f.errors.ResponseStatusError: Response 500:
```
### Steps to reproduce
1. Put your cookies json file / har file for `huggingface.co` in the `har_and_cookies` directory
2. Run gpt4free in Docker using docker compose
3. Open g4f web ui (using OpenAI compatible API (port `1337`) gives the same error, though)
4. Select this provider: `HuggingChat (Auth)`
5. Select any model, for example `CohereForAI/c4ai-command-r-plus`
6. Send any message to the LLM
7. See the error
### Screenshot

### Environment
- gpt4free version 0.3.2.0 (this git repository, commit `e8f6013d`)
- docker compose
- Ubuntu 22.04.4 LTS x86_64
-----
duplicates https://github.com/xtekky/gpt4free/issues/2053 which is closed | null | null | null | {'base_commit': 'c29487cdb522a2655ccff45bdfc33895ed4daf84', 'files': [{'path': 'g4f/Provider/HuggingChat.py', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"g4f/Provider/HuggingChat.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
Z4nzu | hackingtool | c81c08c1e9b847b9d1dcdc5b0a90d5de92d7b75e | https://github.com/Z4nzu/hackingtool/issues/68 | question | default username and password of social fish | hay man the tool works fine but what is the default username and password of social fish | null | null | null | {'base_commit': 'c81c08c1e9b847b9d1dcdc5b0a90d5de92d7b75e', 'files': [{'path': 'README.md', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Doc"
} | {
"code": [],
"doc": [
"README.md"
],
"test": [],
"config": [],
"asset": []
} | null |
scikit-learn | scikit-learn | f7026b04f5e5909aa15848b25de2becd675871a9 | https://github.com/scikit-learn/scikit-learn/issues/2475 | Multinomial Naive Bayes: Scikit and Weka have different results | Hi All,
I used the sklearn.naive_bayes.MultinomialNB on a toy example.
Comparing the results with WEKA, I've noticed a quite different AUC.
Scikit (0.579) - Weka (0.664)
| null | null | null | {'base_commit': 'f7026b04f5e5909aa15848b25de2becd675871a9', 'files': [{'path': 'sklearn/cross_validation.py', 'Loc': {"(None, 'cross_val_score', 1075)": {'mod': []}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"sklearn/cross_validation.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
scikit-learn | scikit-learn | 0ab5c678bba02888b62b777b4c757e367b3458d5 | https://github.com/scikit-learn/scikit-learn/issues/8470 | How to let gbdt = GradientBoostingRegressor(), gbdt.fit(X_feature, X_label) know whether the feature of input X is categorical or numerical? | null | null | null | {'base_commit': '0ab5c678bba02888b62b777b4c757e367b3458d5', 'files': [{'path': 'sklearn/preprocessing/_encoders.py', 'Loc': {"('OneHotEncoder', None, 151)": {'mod': []}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"sklearn/preprocessing/_encoders.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | ||
pandas-dev | pandas | 184f2dba255f279697cb1d7567428b3e6403c2d0 | https://github.com/pandas-dev/pandas/issues/3209 | BUG: read_csv: dtype={'id' : np.str}: Datatype not understood | I have a CSV with several columns. The first of which is a field called `id` with entries of the type `0001`, `0002`, etc.
When loading this file, the following works:
``` python
pd.read_csv(my_path, dtype={'id' : np.int})
```
but the following doesn't:
``` python
pd.read_csv(my_path, dtype={'id' : np.str})
```
nor does this either:
``` python
pd.read_csv(my_path, dtype={'id' : str})
```
I get: `Datatype not understood`
This is with `pandas-0.10.1`
| null | null | null | {} | [
{
"Loc": [
12,
18
],
"path": null
}
] | [] | [] | {
"iss_type": "2",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "3\nand\n2",
"info_type": "Code"
} | {
"code": [
null
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
meta-llama | llama | 53011c3d7946dadb8274a4c5c7586ab54edf792d | https://github.com/meta-llama/llama/issues/48 | How to run 13B model on 4*16G V100? | RuntimeError: CUDA out of memory. Tried to allocate 160.00 MiB (GPU 0; 15.78 GiB total capacity; 14.26 GiB already allocated; 121.19 MiB free; 14.69 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 143) of binary: /opt/conda/envs/torch1.12/bin/python | null | null | null | {} | [] | [] | [
{
"org": "fabawi",
"pro": "wrapyfi"
},
{
"org": "modular-ml",
"pro": "wrapyfi-examples_llama"
}
] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "2",
"info_type": "Code"
} | {
"code": [],
"doc": [],
"test": [],
"config": [],
"asset": [
"wrapyfi",
"wrapyfi-examples_llama"
]
} | null |