organization string | repo_name string | base_commit string | iss_html_url string | iss_label string | title string | body string | code null | pr_html_url string | commit_html_url string | file_loc string | own_code_loc list | ass_file_loc list | other_rep_loc list | analysis dict | loctype dict | iss_has_pr int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
deepfakes | faceswap | 629c02a61e1ad5f769f8f7388a091d5ce9aa8160 | https://github.com/deepfakes/faceswap/issues/1254 | Can't Open GUI on Windows | **Describe the bug**
Whenever I try to open the GUI of Faceswap, I get an error and it doesn't open. I am on Windows, and I have uninstalled and reinstalled multiple times, including redoing the conda environment. CLI functions work, but the main GUI does not open, either from the shortcut or a manual terminal run. I have also tried running with and without admin privileges.
**To Reproduce**
Steps to reproduce the behavior:
1. Uninstall old Faceswap versions
2. Install the latest windows version
3. Run the Faceswap program in GUI mode
4. See error
**Expected behavior**
I want the Faceswap GUI to open. It doesn't.
**Desktop:**
- OS: [Windows 11]
- Python Version [3.9.12]
- Conda Version [4.13.0]
- Commit ID [6b2aac6]
**Crash Report**
[crash_report.2022.08.07.224753577271.log](https://github.com/deepfakes/faceswap/files/9278810/crash_report.2022.08.07.224753577271.log) | null | null | null | {'base_commit': '629c02a61e1ad5f769f8f7388a091d5ce9aa8160', 'files': [{'path': 'requirements/_requirements_base.txt', 'Loc': {'(None, None, 15)': {'mod': [15]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [],
"doc": [],
"test": [],
"config": [
"requirements/_requirements_base.txt"
],
"asset": []
} | null | |
deepfakes | faceswap | 9696b5606fd0963814fc0c3644565aa60face69d | https://github.com/deepfakes/faceswap/issues/462 | Modify extractor to focus on mouth | I'd like to modify the extractor script to focus on the lower half of the face - specifically the mouth area.
I'm experimenting with changing people's mouth movements, and I want to train a higher resolution "mouth only" network, so I can create new speech patterns that are re-composited onto the original footage.
Is there a way to modify which facial landmarks the extractor looks at so it just takes the mouth?
| null | null | null | {'base_commit': '9696b5606fd0963814fc0c3644565aa60face69d', 'files': [{'path': 'lib/aligner.py', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"lib/aligner.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
deepfakes | faceswap | 9fb70f13552927bea1bf65fe35f4866f99171eaf | https://github.com/deepfakes/faceswap/issues/656 | Not showing graph in gui | in log gui:
```
Exception in Tkinter callback
Traceback (most recent call last):
  File "/usr/lib/python3.6/tkinter/__init__.py", line 1705, in __call__
    return self.func(*args)
  File "/home/telecast/Documents/faceswap/lib/gui/command.py", line 461, in <lambda>
    command=lambda cmd=action: cmd(self.command))
  File "/home/telecast/Documents/faceswap/lib/gui/utils.py", line 550, in load
    self.add_to_recent(cfgfile.name, command)
  File "/home/telecast/Documents/faceswap/lib/gui/utils.py", line 596, in add_to_recent
    recent_files = self.serializer.unmarshal(inp.read().decode("utf-8"))
  File "/home/telecast/Documents/faceswap/lib/Serializer.py", line 61, in unmarshal
    return json.loads(input_string)
  File "/usr/lib/python3.6/json/__init__.py", line 354, in loads
    return _default_decoder.decode(s)
  File "/usr/lib/python3.6/json/decoder.py", line 339, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/lib/python3.6/json/decoder.py", line 357, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
```
faceswap.log:
```
03/10/2019 00:02:36 MainProcess training_0 train training INFO Loading data, this may take a while...
03/10/2019 00:02:36 MainProcess training_0 plugin_loader _import INFO Loading Model from Villain plugin...
03/10/2019 00:02:40 MainProcess training_0 config load_config INFO Loading config: '/home/telecast/Documents/faceswap/config/train.ini'
03/10/2019 00:02:40 MainProcess training_0 _base replace_config INFO Using configuration saved in state file
03/10/2019 00:02:40 MainProcess training_0 deprecation new_func WARNING From /home/telecast/Documents/faceswap_env/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nColocations handled automatically by placer.
03/10/2019 00:02:49 MainProcess training_0 _base load WARNING Failed loading existing training data. Generating new models
03/10/2019 00:02:52 MainProcess training_0 plugin_loader _import INFO Loading Trainer from Original plugin...
03/10/2019 00:02:54 MainProcess training_0 _base set_tensorboard INFO Enabled TensorBoard Logging
03/10/2019 00:02:54 MainProcess training_0 deprecation new_func WARNING From /home/telecast/Documents/faceswap_env/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse tf.cast instead.
03/10/2019 00:03:35 MainProcess training_0 _base save_models INFO saved models
03/10/2019 00:04:29 MainProcess MainThread train end_thread INFO Exit requested! The trainer will complete its current cycle, save the models and quit (it can take up a couple of seconds depending on your training speed). If you want to kill it now, press Ctrl + c
03/10/2019 00:04:31 MainProcess training_0 _base save_models INFO saved models
```
```
$ cat /etc/*release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.10
$ pip3 list
Package Version
----------------------- --------
absl-py 0.7.0
astor 0.7.1
Click 7.0
cloudpickle 0.8.0
cmake 3.13.3
cycler 0.10.0
dask 1.1.3
decorator 4.3.2
dlib 19.16.0
face-recognition 1.2.3
face-recognition-models 0.3.0
ffmpy 0.2.2
gast 0.2.2
grpcio 1.19.0
h5py 2.9.0
Keras 2.2.4
Keras-Applications 1.0.7
Keras-Preprocessing 1.0.9
kiwisolver 1.0.1
Markdown 3.0.1
matplotlib 2.2.2
mock 2.0.0
networkx 2.2
numpy 1.15.4
nvidia-ml-py3 7.352.0
opencv-python 4.0.0.21
pathlib 1.0.1
pbr 5.1.3
Pillow 5.4.1
pip 19.0.3
protobuf 3.7.0
psutil 5.6.0
pyparsing 2.3.1
python-dateutil 2.8.0
pytz 2018.9
PyWavelets 1.0.2
PyYAML 3.13
scikit-image 0.14.2
scikit-learn 0.20.3
scipy 1.2.1
setuptools 40.8.0
six 1.12.0
tensorboard 1.13.1
tensorflow-estimator 1.13.0
tensorflow-gpu 1.13.1
termcolor 1.1.0
toolz 0.9.0
tqdm 4.31.1
Werkzeug 0.14.1
wheel 0.33.1
$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Sat_Aug_25_21:08:01_CDT_2018
Cuda compilation tools, release 10.0, V10.0.130
cudnn v7.4.1.5
```
| null | null | null | {'base_commit': '9fb70f13552927bea1bf65fe35f4866f99171eaf', 'files': [{'path': 'Version', 'Loc': {}}, {'path': 'Version', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "comment",
"loc_scope": "2",
"info_type": "Config"
} | {
"code": [],
"doc": [],
"test": [],
"config": [],
"asset": [
"Version"
]
} | null | |
deepfakes | faceswap | e518206c8ef935ebc1b1ff64ae2901cc8ef05f94 | https://github.com/deepfakes/faceswap/issues/57 | Cannot install tensorflow-gpu requirement |
Tried installing the requirements-gpu.txt and get this error:
```
Collecting tensorflow-gpu==1.4.0 (from -r requirements-gpu.txt (line 6))
  Cache entry deserialization failed, entry ignored
Could not find a version that satisfies the requirement tensorflow-gpu==1.4.0 (from -r requirements-gpu.txt (line 6)) (from versions: )
No matching distribution found for tensorflow-gpu==1.4.0 (from -r requirements-gpu.txt (line 6))
```
I went here to troubleshoot the issue: https://github.com/tensorflow/tensorflow/issues/8251
Installed Python 64bit. Opened new command prompt window and typed in: `pip3 install --upgrade tensorflow-gpu`

```
Successfully uninstalled setuptools-28.8.0
Successfully installed bleach-1.5.0 enum34-1.1.6 html5lib-0.9999999 markdown-2.6.11 numpy-1.13.3 protobuf-3.5.1 setuptools-38.4.0 six-1.11.0 tensorflow-gpu-1.4.0 tensorflow-tensorboard-0.4.0rc3 werkzeug-0.14.1 wheel-0.30.0
```
Went back to my faceswap env to enter the requirements-gpu.txt and still get the same error:
```
(faceswap) C:\faceswap>pip install -r requirements-gpu.txt
Collecting tensorflow-gpu==1.4.0 (from -r requirements-gpu.txt (line 6))
  Could not find a version that satisfies the requirement tensorflow-gpu==1.4.0 (from -r requirements-gpu.txt (line 6)) (from versions: )
No matching distribution found for tensorflow-gpu==1.4.0 (from -r requirements-gpu.txt (line 6))
```
## Other relevant information
- **Operating system and version:** Windows 10
Python 3.6.4 (v3.6.4:d48eceb, Dec 19 2017, 06:54:40) [MSC v.1900 64 bit (AMD64)] on win32
- **Faceswap version:** 1/5/2018
- **Faceswap method:** CPU/GPU "CPU method only works"
| null | null | null | {'base_commit': 'e518206c8ef935ebc1b1ff64ae2901cc8ef05f94', 'files': [{'path': 'requirements-gpu.txt', 'Loc': {'(None, None, 6)': {'mod': [6]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Doc\n依赖声明"
} | {
"code": [],
"doc": [],
"test": [],
"config": [
"requirements-gpu.txt"
],
"asset": []
} | null | |
deepfakes | faceswap | 51f1993d93e0ffb581d44416f327f0cf731c34e8 | https://github.com/deepfakes/faceswap/issues/209 | doesn't work on 2GB GTX 960 even with LowMem model (what params could be reduced?) | LowMem differs from the common model in 2 lines:

```python
ENCODER_DIM = 512  # instead of 1024
# x = self.conv(1024)(x)  <- commented out
```
But it's still not enough to run under Ubuntu 16.04, cuda8, 1.7Gb of free video RAM.
It fails with OOM on any batch size, even with bs=1 and bs=2.
What about having some configurable params here? Like reducing the number of filters, or ENCODER_DIM, or something else?
Also, it would be great to have some doc which describes a few main params and their influence on quality, etc. For example, fakeapp allows selecting the number of layers, nodes, etc.
P.S. I managed to run it with ENCODER_DIM = 64 and bs=16, but results are not so good (after 15 hours).
| null | null | null | {'base_commit': '51f1993d93e0ffb581d44416f327f0cf731c34e8', 'files': [{'path': 'faceswap.py', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"faceswap.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
deepfakes | faceswap | a62a85c0215c1d791dd5ca705ba5a3fef08f0ffd | https://github.com/deepfakes/faceswap/issues/1361 | Bounding boxes coordinates | I have been working on this for 2 weeks but cannot find the solution.
I want the bounding boxes on the original image, of the result that is produced by the "Extract" process of faceswap code.
"Extract" writes the faces extracted from the input image(s). I just want the coordinates from which this face is extracted (from original image).
If you could help me. I would be very grateful and would also help other people searching for the same problem.
Thank you. | null | null | null | {'base_commit': 'a62a85c0215c1d791dd5ca705ba5a3fef08f0ffd', 'files': [{'path': 'lib/align/detected_face.py', 'Loc': {"('DetectedFace', '__init__', 82)": {'mod': [84, 85, 86, 87]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"lib/align/detected_face.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
3b1b | manim | 49582c35919097585699598ad0ca49fe3f2117b5 | https://github.com/3b1b/manim/issues/659 | Problem with FadeOutAndShift | t3 text is not going through FadeOutAndShift.
Also tell me how I can FadeOutAndShift t1 and t3 together
```python
# python -m manim try3.py test1 -pm
from manimlib.imports import *

class test1(Scene):
    def construct(self):
        t1 = TextMobject("Hi!")
        t2 = TextMobject("My name is")
        t3 = TextMobject("Girish")
        t1.set_color(RED)
        t3.set_color(BLUE)
        self.play(Write(t1), run_time=2)
        self.play(ApplyMethod(t1.shift, 1*UP))
        self.play(FadeIn(t2))
        self.play(Transform(t2, t3), run_time=2)
        self.wait(2)
        self.play(FadeOutAndShift(t1))
        self.play(FadeOutAndShift(t3))
```
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"manimlib/scene/scene.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
3b1b | manim | ce06e58505dff26cccd497a9bd43969f74ae0da9 | https://github.com/3b1b/manim/issues/274 | ImportError: No module named animation | I've installed manim on Win10. After running `python extract_scene.py -s example_scenes.py`,
the next error is shown in the python interactive interpretor:
```
Traceback (most recent call last):
  File "extract_scene.py", line 15, in <module>
    from scene.scene import Scene
  File "G:\python\manim\scene\scene.py", line 16, in <module>
    from animation.transform import MoveToTarget
  File "G:\python\manim\animation\transform.py", line 8, in <module>
    from animation.animation import Animation
ImportError: No module named animation
```
What I can do? I'm looking forward to get help to solve this problem. | null | null | null | {'base_commit': 'ce06e58505dff26cccd497a9bd43969f74ae0da9', 'files': [{'path': 'animation/transform.py', 'Loc': {'(None, None, None)': {'mod': [8]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"animation/transform.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
3b1b | manim | 55ece141e898577ce44e71d718212a1ee816ed74 | https://github.com/3b1b/manim/issues/658 | How to add sound to video? | null | null | null | {'base_commit': '55ece141e898577ce44e71d718212a1ee816ed74', 'files': [{'path': 'manimlib/scene/scene.py', 'Loc': {"('Scene', 'add_sound', 543)": {'mod': []}}, 'status': 'modified'}, {'path': 'old_projects/clacks/solution2/simple_scenes.py', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"old_projects/clacks/solution2/simple_scenes.py",
"manimlib/scene/scene.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | ||
3b1b | manim | 97a0a707d759e0235450ea8c20f55a2529bd2973 | https://github.com/3b1b/manim/issues/878 | Swedish characters not working |
I am new to manim and want to include swedish characters in a text, but it gives an error message when rendering.
Code:
```python
class Swe(Scene):
    def construct(self):
        text = TextMobject(r"$\"o$")
        self.add(text)
        self.wait()
```
Error message:
```
Traceback (most recent call last):
  File "C:\Manim\manim\manim2020\manimlib\extract_scene.py", line 153, in main
    scene = SceneClass(**scene_kwargs)
  File "C:\Manim\manim\manim2020\manimlib\scene\scene.py", line 54, in __init__
    self.construct()
  File "Geony.py", line 115, in construct
    text = TextMobject(r"$\"o$")
  File "C:\Manim\manim\manim2020\manimlib\mobject\svg\tex_mobject.py", line 144, in __init__
    self, self.arg_separator.join(tex_strings), **kwargs
  File "C:\Manim\manim\manim2020\manimlib\mobject\svg\tex_mobject.py", line 45, in __init__
    self.template_tex_file_body
  File "C:\Manim\manim\manim2020\manimlib\utils\tex_file_writing.py", line 19, in tex_to_svg_file
    dvi_file = tex_to_dvi(tex_file)
  File "C:\Manim\manim\manim2020\manimlib\utils\tex_file_writing.py", line 67, in tex_to_dvi
    "See log output above or the log file: %s" % log_file)
Exception: Latex error converting to dvi. See log output above or the log file: C:\Manim\manim\manim2020\manimlib\files\Tex\a26fbd67dc90adbc.log
```
I am running python 3.7 (64 bit) and MikTex 2.9. All other features of manim are working fine.
Any help would be much appreciated. Also, please keep in mind that I am new to manim and programming in general. | null | null | null | {} | [
{
"Loc": [
12
],
"path": null
}
] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "3",
"info_type": "Code"
} | {
"code": [
null
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
3b1b | manim | 6880ebcbc2525b2f3c0731439bef7ff981b4b5b4 | https://github.com/3b1b/manim/issues/924 | Reconsidering TEX_USE_CTEX / using XeLaTeX | I worked on manim back in 2018. I added the function for using CTeX (XeLaTeX package for Chinese) and XeLaTeX instead of LaTeX using the flag `TEX_USE_CTEX` in constants.py (#315).
I have stopped working on manim since 2019, but over the months there are apparently more and more people who want to use LaTeX rendering in non-English languages, and even on very old issues I still occasionally see people asking how to do that... Looking back at my change I really should have **decoupled using CTeX (TeX template) from XeLaTeX (rendering tool)**. This has caused a *lot* of confusions and made weird hacks/fixes necessary for only using XeLaTeX, especially for a language that is not Chinese or English, with the most recent #858 and #840. It really should have been a flag `TEX_USE_XELATEX` and another flag `TEMPLATE_TEX_NAME`, and the flag `TEX_USE_CTEX` is such that when it is `True`, `TEX_USE_XELATEX` is `True` and `TEMPLATE_TEX_NAME` is `"ctex_template.tex"`; otherwise `TEX_USE_XELATEX` is `False` and `TEMPLATE_TEX_NAME` is `"tex_template.tex"`. Then set `TEMPLATE_TEX_FILE` to `os.path.join(os.path.dirname(os.path.realpath(__file__)), TEMPLATE_TEX_NAME)`. Corresponding logic: constants.py lines 74–79.
It might be even better to set it dynamically using a function or as a parameter of `TexMobject()`, (see issues like #891). I looked at the source code and this is definitely possible. The options I can think of are
1. Use the current `TEX_USE_CTEX`
2. Add flags `TEX_USE_XELATEX` and `TEMPLATE_TEX_NAME`, and rework `TEX_USE_CTEX`
3. Add parameters for `TexMobject()` like `use_xelatex=False` and `tex_template="tex_template.tex"`
4. Use the flags of 2. as a default, and make it possible to change the default using 3.
Not really sure if this is the right place to raise this issue.
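The flag logic proposed above can be sketched as a small helper (hypothetical names following the proposal; this is not actual manim code):

```python
def resolve_tex_settings(use_ctex: bool):
    """Decouple the TeX template from the rendering tool, per the proposal above.

    Returns (use_xelatex, template_name). With use_ctex=True, both XeLaTeX and
    the CTeX template are selected; otherwise plain LaTeX and the default
    template are used.
    """
    if use_ctex:
        return True, "ctex_template.tex"
    return False, "tex_template.tex"
```

Options 3 and 4 would instead pass these values as `TexMobject()` parameters, with the module-level flags as defaults.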
| null | null | null | {} | [] | [] | [
{
"pro": "ManimCommunity"
},
{
"pro": "manim",
"path": [
"manim/utils/tex_templates.py"
]
}
] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"manim/utils/tex_templates.py"
],
"doc": [],
"test": [],
"config": [],
"asset": [
"ManimCommunity"
]
} | null | |
3b1b | manim | 49582c35919097585699598ad0ca49fe3f2117b5 | https://github.com/3b1b/manim/issues/660 | ColorByCaracter help | I want to color only theta of ```{ e }^{ i\theta }```
I was going through ColorByCaracter in 3_text_like_arrays.py .
But I fail to understand how you people separate the tex formula into arrays. I know about arrays but I can only copy the tex code from [Daum Equation Editor](http://s1.daumcdn.net/editor/fp/service_nc/pencil/Pencil_chromestore.html) and paste it. I don't know how to divide them into arrays.
Please help me.
| null | null | null | {'base_commit': '49582c35919097585699598ad0ca49fe3f2117b5', 'files': [{'path': 'manimlib/mobject/svg/tex_mobject.py', 'Loc': {"('TexMobject', None, 132)": {'mod': []}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"manimlib/mobject/svg/tex_mobject.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
3b1b | manim | 32abbb9371308e8dff7410de387fe78e64b6fe7a | https://github.com/3b1b/manim/issues/700 | OSError: No file matching Suv.svg in image directory | I've tried putting the .SVG image into */media/designs/svg_images. But when I want to quote it in the .py file it still reports errors:
```
Traceback (most recent call last):
  File "/home/jason/Documents/manim/manimlib/extract_scene.py", line 155, in main
    scene = SceneClass(**scene_kwargs)
  File "/home/jason/Documents/manim/manimlib/scene/scene.py", line 53, in __init__
    self.construct()
  File "SVGTEST.py", line 44, in construct
    height=height_size
  File "/home/jason/Documents/manim/manimlib/mobject/svg/svg_mobject.py", line 45, in __init__
    self.ensure_valid_file()
  File "/home/jason/Documents/manim/manimlib/mobject/svg/svg_mobject.py", line 63, in ensure_valid_file
    self.file_name)
OSError: No file matching MYSVG.svg in image directory
```
(Manjaro Linux, Texlive) | null | null | null | {'base_commit': '32abbb9371308e8dff7410de387fe78e64b6fe7a', 'files': [{'path': 'manimlib/mobject/svg/svg_mobject.py', 'Loc': {"('SVGMobject', 'ensure_valid_file', 49)": {'mod': []}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"manimlib/mobject/svg/svg_mobject.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
3b1b | manim | b74e5ca254bccc1575b4c7b7de3c1cb2010aac75 | https://github.com/3b1b/manim/issues/694 | can't graph trigonometric function of secx, cscx, cotx, tanx,... | source code:
```python
class PlotFunctions(GraphScene):
    CONFIG = {
        "x_min": -10,
        "x_max": 10.3,
        "y_min": -1.5,
        "y_max": 1.5,
        "graph_origin": ORIGIN,
        "function_color": RED,
        "axes_color": GREEN,
        "x_labeled_nums": range(-10, 12, 2),
    }

    def construct(self):
        self.setup_axes(animate=True)
        func_graph = self.get_graph(self.func_to_graph, self.function_color)
        func_graph2 = self.get_graph(self.func_to_graph2)
        vert_line = self.get_vertical_line_to_graph(TAU, func_graph, color=YELLOW)
        graph_lab = self.get_graph_label(func_graph, label="\\cos(x)")
        graph_lab2 = self.get_graph_label(func_graph2, label="\\sin(x)", x_val=-10, direction=UP/2)
        two_pi = TexMobject("x = 2 \\pi")
        label_coord = self.input_to_graph_point(TAU, func_graph)
        two_pi.next_to(label_coord, RIGHT + UP)
        self.play(ShowCreation(func_graph), ShowCreation(func_graph2))
        self.play(ShowCreation(vert_line), ShowCreation(graph_lab), ShowCreation(graph_lab2), ShowCreation(two_pi))

    def func_to_graph(self, x):
        # return np.cos(x)
        return np.tan(x)

    def func_to_graph2(self, x):
        return np.sin(x)
```
I replaced "return np.cos(x)" to "return np.tan(x)"...i got this:

and then I replaced "return np.cos(x)" to "return np.sec(x)/cot(x)/csc(x)"...i got this:
AttributeError: module 'numpy' has no attribute 'sec'...
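NumPy indeed provides no `sec`, `csc`, or `cot`; a standard workaround for the error above is to define them via the reciprocal identities (a sketch of the fix; handling the asymptotes when graphing is a separate concern):

```python
import numpy as np

# numpy has no sec/csc/cot, so build them from the reciprocal identities
def sec(x):
    return 1.0 / np.cos(x)

def csc(x):
    return 1.0 / np.sin(x)

def cot(x):
    return 1.0 / np.tan(x)
```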
| null | null | null | {'base_commit': 'b74e5ca254bccc1575b4c7b7de3c1cb2010aac75', 'files': [{'path': 'manimlib/mobject/types/vectorized_mobject.py', 'Loc': {"('VGroup', None, 868)": {'mod': []}}, 'status': 'modified'}, {'Loc': [17], 'path': None}]} | [
{
"Loc": [
17
],
"path": null
}
] | [] | [] | {
"iss_type": "2",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "3\n+\n0",
"info_type": "Code"
} | {
"code": [
null,
"manimlib/mobject/types/vectorized_mobject.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
3b1b | manim | fc153bb49a529e8cbb02dd1514f06387cbf0ee6e | https://github.com/3b1b/manim/issues/1206 | Manim can't find my png file | I'm new to coding and am trying to learn manim, which I'm using on my macbook pro. I'm trying to create a scene where manim draws a png file I saved. I saved the png file as "shirt.png" in my manim folder. I then ran the following code:
```
from manimlib.imports import *
class OutFit(Scene):
    def construct(self):
        shirt = ImageMobject("shirt")
        self.play(Write(shirt))
```
I've looked up several ways of how to get manim to do images and some solutions, but since I'm pretty new at this I don't always understand the answers I've found from other people's issues or if it applies to mine. I keep getting this error response:
```
    raise IOError("File {} not Found".format(file_name))
OSError: File shirt not Found
```
Any help is much appreciated.
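The failure above comes from where manim looks for raster images. A loose, hypothetical sketch of that lookup logic (illustrative only, not manim's actual implementation) shows why a bare name like `"shirt"` fails when the file is not in the searched directory:

```python
import os

def find_image(file_name, image_dir, extensions=(".png", ".jpg", ".jpeg")):
    """Loosely mimic resolving a bare image name: try the name as given,
    then with common extensions, inside image_dir."""
    candidates = [file_name] + [file_name + ext for ext in extensions]
    for name in candidates:
        path = os.path.join(image_dir, name)
        if os.path.isfile(path):
            return path
    raise IOError("File {} not Found".format(file_name))
```

Passing an absolute path to `ImageMobject`, or placing the file in the directory manim actually searches, sidesteps the lookup; note also that `Write` targets vector mobjects, so `FadeIn` is the usual animation for raster images.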
| null | null | null | {'base_commit': 'fc153bb49a529e8cbb02dd1514f06387cbf0ee6e', 'files': [{'path': 'manimlib/animation/fading.py', 'Loc': {"('FadeIn', None, 34)": {'mod': []}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"manimlib/animation/fading.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
3b1b | manim | 64c960041b5b9dcb0aac50019268a3bdf69d9563 | https://github.com/3b1b/manim/issues/608 | What is VMobject exactly? | Can anyone explain what is the purpose of `VMobject` and how it differs from `Mobject`?
I am trying to make some `old_projects` work. For example, I had to change `PMobject` to inherit from `VMobject` instead of `Mobject` in order to fix `NumberLineScene`. I do not know if it is the correct thing to do or how it will affect the other scripts, because I am unable to find the fundamental differences between the two objects. The wiki does not explain a lot, so please share some detailed information.
I dug commit histories and saw
> "Starting to vectorize all things"
kind of commit messages when the `VMobject` class is added to the engine. What does it mean "Vectorize" in this context? | null | null | null | {'base_commit': '64c960041b5b9dcb0aac50019268a3bdf69d9563', 'files': [{'path': 'manimlib/mobject/types/vectorized_mobject.py', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "5",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"manimlib/mobject/types/vectorized_mobject.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
All-Hands-AI | OpenHands | a2779fe2f6c9ab29508676f21242b1c6b88e2f67 | https://github.com/All-Hands-AI/OpenHands/issues/5229 | documentation
enhancement
fix-me | [Documentation]: Micro-agents | **What problem or use case are you trying to solve?**
Currently in the `openhands/agenthub/codeact_agent` directory, we have an implementation of micro agents, but this is not documented.
To do so, we can:
1. read the implementation of codeact agent
2. read an example microagent in `openhands/agenthub/codeact_agent/micro/github.md`
3. add documentation to `openhands/agenthub/codeact_agent/README.md`
| null | null | null | {'base_commit': 'a2779fe2f6c9ab29508676f21242b1c6b88e2f67', 'files': [{'path': 'microagents/README.md', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Doc"
} | {
"code": [],
"doc": [
"microagents/README.md"
],
"test": [],
"config": [],
"asset": []
} | null |
All-Hands-AI | OpenHands | 08a2dfb01af1aec6743f5e4c23507d63980726c0 | https://github.com/All-Hands-AI/OpenHands/issues/635 | bug | Ollama support issue. |
#### Describe the bug
When trying to configure OpenDevin to run with Ollama there are requests that are being sent to the ollama server like this:

The post request should look like this:
`"POST /chat/completions HTTP/1.1"`
#### Setup and configuration
**Current version**:
```bash
commit 5c640c99cafb3c718dad60f377f3a725a8bab1de (HEAD -> local-llm-flag, origin/main, origin/HEAD, main)
```
**My config.toml and environment vars** (be sure to redact API keys):
```toml
WORKSPACE_DIR="./workspace"
LLM_BASE_URL="http://localhost:8000"
LLM_MODEL="ollama/starcoder2:15b"
LLM_EMBEDDING_MODEL="ollama/starcoder2:15b"
```
**My model and agent** (you can see these settings in the UI):
* Model: ollama/starcoder2
* Agent: MonologueAgent
**Commands I ran to install and run OpenDevin**:
```
git clone ...
make build
make start-backend
make start-frontend
```
**Steps to Reproduce**:
1. In `opendevin/llm/llm.py` in `__init__` replace `self.model = model if model else DEFAULT_MODEL_NAME` with `self.model_name = DEFAULT_MODEL_NAME`
2. Run your local model on litellm `litellm --model ollama/starcoder2:15b --port 8000`
3. Run `make build` then `make start-backend` and `make start-frontend`
4. Ask devin to do anything ex 'make a hello world script in python'
5. Observe 404 errors spammed in litellm server log
**Logs, error messages, and screenshots**:
This is a log from the backend server running from `make start-backend` steps 0-99 all look the same.
```
==============
STEP 99
PLAN:
please make a simple flask app that says hello world.
Traceback (most recent call last):
File "/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py", line 1436, in function_with_retries
response = original_function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py", line 386, in _completion
raise e
File "/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py", line 334, in _completion
deployment = self.get_available_deployment(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py", line 2313, in get_available_deployment
raise ValueError(f"No healthy deployment available, passed model={model}")
ValueError: No healthy deployment available, passed model=ollama/starcoder2:15b
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/quimbo/OpenDevin/agenthub/monologue_agent/utils/monologue.py", line 31, in condense
resp = llm.completion(messages=messages)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py", line 328, in completion
raise e
File "/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py", line 325, in completion
response = self.function_with_fallbacks(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py", line 1419, in function_with_fallbacks
raise original_exception
File "/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py", line 1344, in function_with_fallbacks
response = self.function_with_retries(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py", line 1496, in function_with_retries
raise e
File "/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py", line 1462, in function_with_retries
response = original_function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py", line 386, in _completion
raise e
File "/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py", line 334, in _completion
deployment = self.get_available_deployment(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py", line 2313, in get_available_deployment
raise ValueError(f"No healthy deployment available, passed model={model}")
ValueError: No healthy deployment available, passed model=ollama/starcoder2:15b
ERROR:
Error condensing thoughts: No healthy deployment available, passed model=ollama/starcoder2:15b
Traceback (most recent call last):
File "/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py", line 1436, in function_with_retries
response = original_function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py", line 386, in _completion
raise e
File "/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py", line 334, in _completion
deployment = self.get_available_deployment(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py", line 2313, in get_available_deployment
raise ValueError(f"No healthy deployment available, passed model={model}")
ValueError: No healthy deployment available, passed model=ollama/starcoder2:15b
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/quimbo/OpenDevin/agenthub/monologue_agent/utils/monologue.py", line 31, in condense
resp = llm.completion(messages=messages)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py", line 328, in completion
raise e
File "/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py", line 325, in completion
response = self.function_with_fallbacks(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py", line 1419, in function_with_fallbacks
raise original_exception
File "/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py", line 1344, in function_with_fallbacks
response = self.function_with_retries(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py", line 1496, in function_with_retries
raise e
File "/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py", line 1462, in function_with_retries
response = original_function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py", line 386, in _completion
raise e
File "/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py", line 334, in _completion
deployment = self.get_available_deployment(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py", line 2313, in get_available_deployment
raise ValueError(f"No healthy deployment available, passed model={model}")
ValueError: No healthy deployment available, passed model=ollama/starcoder2:15b
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/quimbo/OpenDevin/opendevin/controller/agent_controller.py", line 112, in step
action = self.agent.step(self.state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/quimbo/OpenDevin/agenthub/monologue_agent/agent.py", line 153, in step
self._add_event(prev_action.to_dict())
File "/home/quimbo/OpenDevin/agenthub/monologue_agent/agent.py", line 96, in _add_event
self.monologue.condense(self.llm)
File "/home/quimbo/OpenDevin/agenthub/monologue_agent/utils/monologue.py", line 36, in condense
raise RuntimeError(f"Error condensing thoughts: {e}")
RuntimeError: Error condensing thoughts: No healthy deployment available, passed model=ollama/starcoder2:15b
OBSERVATION:
Error condensing thoughts: No healthy deployment available, passed model=ollama/starcoder2:15b
Exited before finishing
```
#### Additional Context
LiteLLM for local models expects API calls in the following format:

From: `http://localhost:8000/#/`
I know the problem is that whatever manages the API calls is set to call `/api/generate/`, because this is the convention, but a local server does not support that endpoint. I do not know where to look to fix this; any ideas?
The server responds when I test it like this:
```
def query_local_llm(prompt, limit=TOKEN_LIMIT):
    # Replace with your actual server address and port
    url = "http://0.0.0.0:8000/chat/completions"
    payload = {
        "model": "ollama/mistral",
        "messages": [{"content": prompt, "role": "user"}],
        "max_tokens": limit
    }
    response = requests.post(url, json=payload)
```
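The endpoint mismatch described above can be made concrete with a small network-free sketch; the helper names here are invented for illustration, not taken from OpenDevin or LiteLLM:

```python
# OpenAI-compatible servers (like the working /chat/completions test above)
# take a list of chat messages; Ollama's native /api/generate convention
# takes a bare prompt instead.
def build_chat_payload(model, prompt, max_tokens=256):
    return {
        "model": model,
        "messages": [{"content": prompt, "role": "user"}],
        "max_tokens": max_tokens,
    }

def build_generate_payload(model, prompt):
    return {"model": model, "prompt": prompt}

chat = build_chat_payload("ollama/mistral", "hello")
gen = build_generate_payload("ollama/mistral", "hello")
print(sorted(chat.keys()), sorted(gen.keys()))
```

A client wired to one convention will get errors (or "no healthy deployment") from a server speaking the other.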

| null | null | null | {'base_commit': '08a2dfb01af1aec6743f5e4c23507d63980726c0', 'files': [{'path': 'opendevin/llm/LOCAL_LLM_GUIDE.md', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [],
"doc": [
"opendevin/llm/LOCAL_LLM_GUIDE.md"
],
"test": [],
"config": [],
"asset": []
} | null |
scrapy | scrapy | d636e5baa8a077e2869bfe3b76525efec42392ec | https://github.com/scrapy/scrapy/issues/2276 | can LinkExtractor extract scrapy.link with node info | The HTML is shown below. I want to extract the link `/example/category/pg{page}/`, but the `scrapy.link` does not contain the node info (`currentPage` and `totalPage`). How can I extract the link together with the node info?
``` html
<div class="page-box">
<div page-url="/example/category/pg{page}/"
     totalPage="35"
     currentPage="1">
</div>
</div>
```
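A hedged sketch of one way to get both the link template and the sibling attributes: select the node itself instead of going through `LinkExtractor` (which only yields `Link` objects). The stdlib `html.parser` stands in here for Scrapy's selectors, fed a tidied copy of the snippet; in a spider, the equivalent would be along the lines of `response.css('div[page-url]').attrib`. Note that the parser lowercases attribute names.

```python
from html.parser import HTMLParser

class PageBoxParser(HTMLParser):
    """Collect every <div> that carries a page-url attribute."""
    def __init__(self):
        super().__init__()
        self.pages = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "div" and "page-url" in attrs:
            self.pages.append(attrs)

html = '''<div class="page-box">
    <div page-url="/example/category/pg{page}/" totalPage="35" currentPage="1"></div>
</div>'''

parser = PageBoxParser()
parser.feed(html)
info = parser.pages[0]
# attribute names come back lowercased:
print(info["page-url"], info["totalpage"], info["currentpage"])
```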
| null | null | null | {'base_commit': 'd636e5baa8a077e2869bfe3b76525efec42392ec', 'files': [{'path': 'scrapy/http/response/text.py', 'Loc': {"('TextResponse', 'css', 117)": {'mod': []}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"scrapy/http/response/text.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
scrapy | scrapy | 892467cb8a40c54840284a08d0f98ab1b3af7bc4 | https://github.com/scrapy/scrapy/issues/4565 | AttributeError: module 'resource' has no attribute 'getrusage' | version : Scrapy 2.1.0
```
2020-05-11 20:05:28 [scrapy.core.engine] INFO: Spider opened
2020-05-11 20:05:28 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2020-05-11 20:05:28 [dy] INFO: Spider opened: dy
2020-05-11 20:05:28 [scrapy.utils.signal] ERROR: Error caught on signal handler: <bound method MemoryUsage.engine_started of <scrapy.extensions.memusage.MemoryUsage object at 0x0000000004D3A358>>
Traceback (most recent call last):
File "D:\microsoft\python37\lib\site-packages\scrapy\utils\defer.py", line 161, in maybeDeferred_coro
result = f(*args, **kw)
File "D:\microsoft\python37\lib\site-packages\pydispatch\robustapply.py", line 55, in robustApply
return receiver(*arguments, **named)
File "D:\microsoft\python37\lib\site-packages\scrapy\extensions\memusage.py", line 55, in engine_started
self.crawler.stats.set_value('memusage/startup', self.get_virtual_size())
File "D:\microsoft\python37\lib\site-packages\scrapy\extensions\memusage.py", line 48, in get_virtual_size
size = self.resource.getrusage(self.resource.RUSAGE_SELF).ru_maxrss
AttributeError: module 'resource' has no attribute 'getrusage'
```
```
2020-05-11 20:05:43 [scrapy.core.engine] INFO: Closing spider (finished)
2020-05-11 20:05:43 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 6751,
'downloader/request_count': 14,
'downloader/request_method_count/GET': 14,
'downloader/response_bytes': 12380415,
'downloader/response_count': 14,
'downloader/response_status_count/200': 10,
'downloader/response_status_count/302': 4,
'elapsed_time_seconds': 14.631021,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2020, 5, 11, 12, 5, 43, 378200),
'item_scraped_count': 65,
'log_count/DEBUG': 85,
'log_count/ERROR': 1,
'log_count/INFO': 9,
'request_depth_max': 1,
'response_received_count': 10,
'scheduler/dequeued': 6,
'scheduler/dequeued/memory': 6,
'scheduler/enqueued': 6,
'scheduler/enqueued/memory': 6,
'start_time': datetime.datetime(2020, 5, 11, 12, 5, 28, 747179)}
2020-05-11 20:05:43 [scrapy.core.engine] INFO: Spider closed (finished)
2020-05-11 20:05:43 [scrapy.utils.signal] ERROR: Error caught on signal handler: <bound method MemoryUsage.engine_stopped of <scrapy.extensions.memusage.MemoryUsage object at 0x0000000004D3A358>>
Traceback (most recent call last):
File "D:\microsoft\python37\lib\site-packages\scrapy\utils\defer.py", line 161, in maybeDeferred_coro
result = f(*args, **kw)
File "D:\microsoft\python37\lib\site-packages\pydispatch\robustapply.py", line 55, in robustApply
return receiver(*arguments, **named)
File "D:\microsoft\python37\lib\site-packages\scrapy\extensions\memusage.py", line 70, in engine_stopped
for tsk in self.tasks:
AttributeError: 'MemoryUsage' object has no attribute 'tasks'
```
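The crash comes from assuming the Unix-only `resource` module is fully usable on Windows. A hedged sketch of a defensive version of `get_virtual_size` (helper names illustrative, not Scrapy's actual fix):

```python
import sys

# On Windows, `import resource` fails (or a stub without getrusage is
# installed), so memory tracking should be disabled rather than crash.
try:
    import resource
    HAS_GETRUSAGE = hasattr(resource, "getrusage")
except ImportError:
    resource = None
    HAS_GETRUSAGE = False

def get_virtual_size():
    """Return max RSS in bytes where supported, else None."""
    if not HAS_GETRUSAGE:
        return None
    size = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    # ru_maxrss is KiB on Linux, bytes on macOS
    if sys.platform != "darwin":
        size *= 1024
    return size

print(get_virtual_size())
```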
(edited for text formatting) | null | null | null | {'base_commit': '892467cb8a40c54840284a08d0f98ab1b3af7bc4', 'files': [{'path': 'scrapy/commands/settings.py', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"scrapy/commands/settings.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
commaai | openpilot | ce9559cc54433244cb01d4781302eb072a3fd519 | https://github.com/commaai/openpilot/issues/30078 | bug
fingerprint
car
ford | 2023 Ford Maverick Not Recognized | ### Describe the bug
Car Not Recognized
Looks like all the values for firmware are the same as what is already in values.py
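For context, a hedged toy sketch (data and platform names invented, not openpilot's actual `values.py` tables) of why a single unmatched firmware string is enough to leave the fingerprint at `mock`:

```python
# Fingerprinting by firmware succeeds when every observed ECU version is
# listed for exactly one platform; any unlisted version rejects the match.
observed = {
    "eps": b"NZ6C-14D003-AL",
    "abs": b"PZ6C-2D053-ED",   # present on the car...
}
known_fw = {
    "FORD_MAVERICK": {
        "eps": {b"NZ6C-14D003-AL"},
        "abs": {b"NZ6C-2D053-AA"},  # ...but not in this hypothetical table
    },
}

def matches(platform):
    table = known_fw[platform]
    return all(fw in table.get(ecu, set()) for ecu, fw in observed.items())

print(matches("FORD_MAVERICK"))  # False: one version string is missing
```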
### Which car does this affect?
Ford Maverick 2023
### Provide a route where the issue occurs
66833387c2bbbca0|2023-09-27--21-13-05
### openpilot version
master-ci
### Additional info
`{'carParams': {'alternativeExperience': 1,
'autoResumeSng': True,
'carFingerprint': 'mock',
'carFw': [{'address': 2016,
'brand': 'ford',
'bus': 1,
'ecu': 'engine',
'fwVersion': b'PZ6A-14C204-JE\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00',
'logging': False,
'obdMultiplexing': True,
'request': [b'>\x00', b'"\xf1\x88'],
'responseAddress': 2024,
'subAddress': 0},
{'address': 1840,
'brand': 'ford',
'bus': 0,
'ecu': 'eps',
'fwVersion': b'NZ6C-14D003-AL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00',
'logging': False,
'obdMultiplexing': True,
'request': [b'>\x00', b'"\xf1\x88'],
'responseAddress': 1848,
'subAddress': 0},
{'address': 1888,
'brand': 'ford',
'bus': 0,
'ecu': 'abs',
'fwVersion': b'PZ6C-2D053-ED\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00',
'logging': False,
'obdMultiplexing': True,
'request': [b'>\x00', b'"\xf1\x88'],
'responseAddress': 1896,
'subAddress': 0},
{'address': 1798,
'brand': 'ford',
'bus': 0,
'ecu': 'fwdCamera',
'fwVersion': b'NZ6T-14F397-AC\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00',
'logging': False,
'obdMultiplexing': True,
'request': [b'>\x00', b'"\xf1\x88'],
'responseAddress': 1806,
'subAddress': 0},
{'address': 1842,
'brand': 'ford',
'bus': 0,
'ecu': 'shiftByWire',
'fwVersion': b'NZ6P-14G395-AD\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00',
'logging': True,
'obdMultiplexing': True,
'request': [b'>\x00', b'"\xf1\x88'],
'responseAddress': 1850,
'subAddress': 0},
{'address': 1892,
'brand': 'ford',
'bus': 0,
'ecu': 'fwdRadar',
'fwVersion': b'NZ6T-14D049-AA\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00',
'logging': False,
'obdMultiplexing': True,
'request': [b'>\x00', b'"\xf1\x88'],
'responseAddress': 1900,
'subAddress': 0},
{'address': 2016,
'brand': 'mazda',
'bus': 1,
'ecu': 'engine',
'fwVersion': b'PZ6A-14C204-JE\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00',
'logging': False,
'obdMultiplexing': True,
'request': [b'"\xf1\x88'],
'responseAddress': 2024,
'subAddress': 0},
{'address': 1840,
'brand': 'mazda',
'bus': 0,
'ecu': 'eps',
'fwVersion': b'NZ6C-14D003-AL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00',
'logging': True,
'obdMultiplexing': True,
'request': [b'"\xf1\x88'],
'responseAddress': 1848,
'subAddress': 0},
{'address': 1888,
'brand': 'mazda',
'bus': 0,
'ecu': 'abs',
'fwVersion': b'PZ6C-2D053-ED\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00',
'logging': True,
'obdMultiplexing': True,
'request': [b'"\xf1\x88'],
'responseAddress': 1896,
'subAddress': 0},
{'address': 1798,
'brand': 'mazda',
'bus': 0,
'ecu': 'fwdCamera',
'fwVersion': b'NZ6T-14F397-AC\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00',
'logging': True,
'obdMultiplexing': True,
'request': [b'"\xf1\x88'],
'responseAddress': 1806,
'subAddress': 0},
{'address': 1892,
'brand': 'mazda',
'bus': 0,
'ecu': 'fwdRadar',
'fwVersion': b'NZ6T-14D049-AA\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00',
'logging': True,
'obdMultiplexing': True,
'request': [b'"\xf1\x88'],
'responseAddress': 1900,
'subAddress': 0}],
'carName': 'mock',
'carVin': '3FTTW8E31PRA79783',
'centerToFront': 1.350000023841858,
'communityFeatureDEPRECATED': False,
'dashcamOnly': False,
'directAccelControlDEPRECATED': False,
'enableApgsDEPRECATED': False,
'enableBsm': False,
'enableCameraDEPRECATED': False,
'enableDsu': False,
'enableGasInterceptor': False,
'experimentalLongitudinalAvailable': False,
'fingerprintSource': 'can',
'flags': 0,
'fuzzyFingerprint': False,
'hasStockCameraDEPRECATED': False,
'isPandaBlackDEPRECATED': False,
'lateralTuning': {'pid': {'kf': 0.0}},
'longitudinalActuatorDelayLowerBound': 0.15000000596046448,
'longitudinalActuatorDelayUpperBound': 0.15000000596046448,
'longitudinalTuning': {'deadzoneBP': [0.0], 'deadzoneV': [0.0], 'kf': 1.0, 'kiBP': [0.0], 'kiV': [1.0], 'kpBP': [0.0], 'kpV': [1.0]},
'mass': 1836.0,
'maxLateralAccel': 10.0,
'maxSteeringAngleDegDEPRECATED': 0.0,
'minEnableSpeed': -1.0,
'minSpeedCanDEPRECATED': 0.0,
'minSteerSpeed': 0.0,
'networkLocation': 'fwdCamera',
'notCar': False,
'openpilotLongitudinalControl': False,
'pcmCruise': True,
'radarTimeStep': 0.05000000074505806,
'radarUnavailable': False,
'rotationalInertia': 3139.534912109375,
'safetyConfigs': [{'safetyModel': 'noOutput', 'safetyParam': 0, 'safetyParam2DEPRECATED': 0, 'safetyParamDEPRECATED': 0}],
'safetyModelDEPRECATED': 'silent',
'safetyModelPassiveDEPRECATED': 'silent',
'safetyParamDEPRECATED': 0,
'startAccel': 0.0,
'startingAccelRateDEPRECATED': 0.0,
'startingState': False,
'steerActuatorDelay': 0.0,
'steerControlType': 'torque',
'steerLimitAlert': False,
'steerLimitTimer': 1.0,
'steerRateCostDEPRECATED': 0.0,
'steerRatio': 13.0,
'steerRatioRear': 0.0,
'stopAccel': -2.0,
'stoppingControl': True,
'stoppingDecelRate': 0.800000011920929,
'tireStiffnessFactor': 1.0,
'tireStiffnessFront': 201087.203125,
'tireStiffnessRear': 317877.90625,
'transmissionType': 'unknown',
'vEgoStarting': 0.5,
'vEgoStopping': 0.5,
'wheelSpeedFactor': 1.0,
'wheelbase': 2.700000047683716},
'logMonoTime': 971923210573,
'valid': True}
{'carParams': {'alternativeExperience': 1,
'autoResumeSng': True,
'carFingerprint': 'mock',
'carFw': [{'address': 2016,
'brand': 'ford',
'bus': 1,
'ecu': 'engine',
'fwVersion': b'PZ6A-14C204-JE\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00',
'logging': False,
'obdMultiplexing': True,
'request': [b'>\x00', b'"\xf1\x88'],
'responseAddress': 2024,
'subAddress': 0},
{'address': 1840,
'brand': 'ford',
'bus': 0,
'ecu': 'eps',
'fwVersion': b'NZ6C-14D003-AL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00',
'logging': False,
'obdMultiplexing': True,
'request': [b'>\x00', b'"\xf1\x88'],
'responseAddress': 1848,
'subAddress': 0},
{'address': 1888,
'brand': 'ford',
'bus': 0,
'ecu': 'abs',
'fwVersion': b'PZ6C-2D053-ED\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00',
'logging': False,
'obdMultiplexing': True,
'request': [b'>\x00', b'"\xf1\x88'],
'responseAddress': 1896,
'subAddress': 0},
{'address': 1798,
'brand': 'ford',
'bus': 0,
'ecu': 'fwdCamera',
'fwVersion': b'NZ6T-14F397-AC\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00',
'logging': False,
'obdMultiplexing': True,
'request': [b'>\x00', b'"\xf1\x88'],
'responseAddress': 1806,
'subAddress': 0},
{'address': 1842,
'brand': 'ford',
'bus': 0,
'ecu': 'shiftByWire',
'fwVersion': b'NZ6P-14G395-AD\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00',
'logging': True,
'obdMultiplexing': True,
'request': [b'>\x00', b'"\xf1\x88'],
'responseAddress': 1850,
'subAddress': 0},
{'address': 1892,
'brand': 'ford',
'bus': 0,
'ecu': 'fwdRadar',
'fwVersion': b'NZ6T-14D049-AA\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00',
'logging': False,
'obdMultiplexing': True,
'request': [b'>\x00', b'"\xf1\x88'],
'responseAddress': 1900,
'subAddress': 0},
{'address': 2016,
'brand': 'mazda',
'bus': 1,
'ecu': 'engine',
'fwVersion': b'PZ6A-14C204-JE\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00',
'logging': False,
'obdMultiplexing': True,
'request': [b'"\xf1\x88'],
'responseAddress': 2024,
'subAddress': 0},
{'address': 1840,
'brand': 'mazda',
'bus': 0,
'ecu': 'eps',
'fwVersion': b'NZ6C-14D003-AL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00',
'logging': True,
'obdMultiplexing': True,
'request': [b'"\xf1\x88'],
'responseAddress': 1848,
'subAddress': 0},
{'address': 1888,
'brand': 'mazda',
'bus': 0,
'ecu': 'abs',
'fwVersion': b'PZ6C-2D053-ED\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00',
'logging': True,
'obdMultiplexing': True,
'request': [b'"\xf1\x88'],
'responseAddress': 1896,
'subAddress': 0},
{'address': 1798,
'brand': 'mazda',
'bus': 0,
'ecu': 'fwdCamera',
'fwVersion': b'NZ6T-14F397-AC\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00',
'logging': True,
'obdMultiplexing': True,
'request': [b'"\xf1\x88'],
'responseAddress': 1806,
'subAddress': 0},
{'address': 1892,
'brand': 'mazda',
'bus': 0,
'ecu': 'fwdRadar',
'fwVersion': b'NZ6T-14D049-AA\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00',
'logging': True,
'obdMultiplexing': True,
'request': [b'"\xf1\x88'],
'responseAddress': 1900,
'subAddress': 0}],
'carName': 'mock',
'carVin': '3FTTW8E31PRA79783',
'centerToFront': 1.350000023841858,
'communityFeatureDEPRECATED': False,
'dashcamOnly': False,
'directAccelControlDEPRECATED': False,
'enableApgsDEPRECATED': False,
'enableBsm': False,
'enableCameraDEPRECATED': False,
'enableDsu': False,
'enableGasInterceptor': False,
'experimentalLongitudinalAvailable': False,
'fingerprintSource': 'can',
'flags': 0,
'fuzzyFingerprint': False,
'hasStockCameraDEPRECATED': False,
'isPandaBlackDEPRECATED': False,
'lateralTuning': {'pid': {'kf': 0.0}},
'longitudinalActuatorDelayLowerBound': 0.15000000596046448,
'longitudinalActuatorDelayUpperBound': 0.15000000596046448,
'longitudinalTuning': {'deadzoneBP': [0.0], 'deadzoneV': [0.0], 'kf': 1.0, 'kiBP': [0.0], 'kiV': [1.0], 'kpBP': [0.0], 'kpV': [1.0]},
'mass': 1836.0,
'maxLateralAccel': 10.0,
'maxSteeringAngleDegDEPRECATED': 0.0,
'minEnableSpeed': -1.0,
'minSpeedCanDEPRECATED': 0.0,
'minSteerSpeed': 0.0,
'networkLocation': 'fwdCamera',
'notCar': False,
'openpilotLongitudinalControl': False,
'pcmCruise': True,
'radarTimeStep': 0.05000000074505806,
'radarUnavailable': False,
'rotationalInertia': 3139.534912109375,
'safetyConfigs': [{'safetyModel': 'noOutput', 'safetyParam': 0, 'safetyParam2DEPRECATED': 0, 'safetyParamDEPRECATED': 0}],
'safetyModelDEPRECATED': 'silent',
'safetyModelPassiveDEPRECATED': 'silent',
'safetyParamDEPRECATED': 0,
'startAccel': 0.0,
'startingAccelRateDEPRECATED': 0.0,
'startingState': False,
'steerActuatorDelay': 0.0,
'steerControlType': 'torque',
'steerLimitAlert': False,
'steerLimitTimer': 1.0,
'steerRateCostDEPRECATED': 0.0,
'steerRatio': 13.0,
'steerRatioRear': 0.0,
'stopAccel': -2.0,
'stoppingControl': True,
'stoppingDecelRate': 0.800000011920929,
'tireStiffnessFactor': 1.0,
'tireStiffnessFront': 201087.203125,
'tireStiffnessRear': 317877.90625,
'transmissionType': 'unknown',
'vEgoStarting': 0.5,
'vEgoStopping': 0.5,
'wheelSpeedFactor': 1.0,
'wheelbase': 2.700000047683716},
'logMonoTime': 1021914306894,
'valid': True}` | null | null | null | {'base_commit': 'ce9559cc54433244cb01d4781302eb072a3fd519', 'files': []} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "2",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
psf | requests | 27b55a74d7b9bd2f8c60fd0ee342bcbbf40e0a66 | https://github.com/psf/requests/issues/775 | Content marked as consumed in 0.13.6 | Content is immediately marked as consumed in 0.13.6, causing calls to e.g. response.iter_content() to throw an error.
Test code (tested with python 2.6):
```
import requests
r = requests.get('http://docs.python-requests.org/')
if r._content_consumed:
    print 'consumed'
else:
    print 'not consumed'
```
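Against a modern requests, the intended behaviour can be shown without any network by building a `Response` by hand (a hedged sketch of the current API, not the 0.13.x one): with a live raw stream, the body is only marked consumed after `iter_content()` is exhausted.

```python
import io
import requests
from urllib3.response import HTTPResponse

# A BytesIO-backed raw stream stands in for a live HTTP body.
raw = HTTPResponse(body=io.BytesIO(b"hello"), preload_content=False, status=200)
resp = requests.models.Response()
resp.raw = raw
resp.status_code = 200

before = resp._content_consumed          # private, used here as in the report
body = b"".join(resp.iter_content(chunk_size=2))
after = resp._content_consumed
print(before, after, body)
```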
In 0.13.5 this prints:
not consumed
In 0.13.6 this prints:
consumed
| null | null | null | {'base_commit': '27b55a74d7b9bd2f8c60fd0ee342bcbbf40e0a66', 'files': [{'path': 'requests/models.py', 'Loc': {"('Request', '__init__', 47)": {'mod': [62]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"requests/models.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
psf | requests | 2de907ad778de270911acaffe93883f0e2729a4a | https://github.com/psf/requests/issues/4602 | Chunk-encoded request doesn't recognize iter_content generator | Passing a generator created by iter_content() as request data raises "TypeError: sendall() argument 1 must be string or buffer, not generator".
## Expected Result
The POST request successfully delivers the content from the GET request.
## Actual Result
A TypeError is raised:
```
Traceback (most recent call last):
File "..\test.py", line 7, in <module>
PostForward("http://myhost/img/foo.png", "http://myotherhost/convert")
File "..\test.py", line 6, in PostForward
return requests.post(url=dst, data=data, headers={'Content-Length': length})
File "C:\Python27\lib\site-packages\requests\api.py", line 112, in post
return request('post', url, data=data, json=json, **kwargs)
File "C:\Python27\lib\site-packages\requests\api.py", line 58, in request
return session.request(method=method, url=url, **kwargs)
File "C:\Python27\lib\site-packages\requests\sessions.py", line 508, in request
resp = self.send(prep, **send_kwargs)
File "C:\Python27\lib\site-packages\requests\sessions.py", line 618, in send
r = adapter.send(request, **kwargs)
File "C:\Python27\lib\site-packages\requests\adapters.py", line 440, in send
timeout=timeout
File "C:\Python27\lib\site-packages\urllib3\connectionpool.py", line 601, in urlopen
chunked=chunked)
File "C:\Python27\lib\site-packages\urllib3\connectionpool.py", line 357, in _make_request
conn.request(method, url, **httplib_request_kw)
File "C:\Python27\lib\httplib.py", line 1042, in request
self._send_request(method, url, body, headers)
File "C:\Python27\lib\httplib.py", line 1082, in _send_request
self.endheaders(body)
File "C:\Python27\lib\httplib.py", line 1038, in endheaders
self._send_output(message_body)
File "C:\Python27\lib\httplib.py", line 886, in _send_output
self.send(message_body)
File "C:\Python27\lib\httplib.py", line 858, in send
self.sock.sendall(data)
File "C:\Python27\lib\socket.py", line 228, in meth
return getattr(self._sock,name)(*args)
TypeError: sendall() argument 1 must be string or buffer, not generator
```
## Reproduction Steps
```python
import requests
def PostForward(src, dst):
    with requests.get(url=src, stream=True) as srcResponse:
        length = srcResponse.headers['Content-Length']
        data = srcResponse.iter_content(1024)
        return requests.post(url=dst, data=data, headers={'Content-Length': length})
PostForward("http://myhost/img/foo.png", "http://myotherhost/convert")
```
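The traceback arises because httplib tries to `sendall()` the generator directly once the explicit `Content-Length` header suppresses chunked transfer. One known workaround is `requests_toolbelt`'s `StreamingIterator`; below is a stdlib-only sketch of the same idea (class name illustrative), wrapping the iterator in a file-like object with `read()`:

```python
class IteratorToStream(object):
    """Adapt a bytes iterator to a minimal file-like object."""
    def __init__(self, iterator):
        self._iter = iterator
        self._buf = b""

    def read(self, size=-1):
        # Fill the buffer until we can satisfy the request (or run out).
        while size < 0 or len(self._buf) < size:
            try:
                self._buf += next(self._iter)
            except StopIteration:
                break
        if size < 0:
            out, self._buf = self._buf, b""
        else:
            out, self._buf = self._buf[:size], self._buf[size:]
        return out

chunks = iter([b"abc", b"defg", b"h"])
stream = IteratorToStream(chunks)
data = stream.read(5) + stream.read()
print(data)  # b'abcdefgh'
```

Passing such an object as `data=` gives httplib something it knows how to send with a fixed length.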
## System Information
$ python -m requests.help
```
{
"chardet": {
"version": "3.0.4"
},
"cryptography": {
"version": ""
},
"idna": {
"version": "2.6"
},
"implementation": {
"name": "CPython",
"version": "2.7.14"
},
"platform": {
"release": "10",
"system": "Windows"
},
"pyOpenSSL": {
"openssl_version": "",
"version": null
},
"requests": {
"version": "2.18.4"
},
"system_ssl": {
"version": "100020bf"
},
"urllib3": {
"version": "1.22"
},
"using_pyopenssl": false
}
``` | null | null | null | {} | [] | [] | [
{
"pro": "requests"
},
{
"pro": "toolbelt",
"path": [
"requests_toolbelt/streaming_iterator.py"
]
}
] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "2",
"info_type": "Code"
} | {
"code": [
"requests_toolbelt/streaming_iterator.py"
],
"doc": [],
"test": [],
"config": [],
"asset": [
"requests"
]
} | null | |
psf | requests | f17ef753d2c1f4db0d7f5aec51261da1db20d611 | https://github.com/psf/requests/issues/3031 | Needs Info
Question/Not a bug | [WinError 10048] Only one usage of each socket address ... | I notice that despite using requests.Session(), I still seem to be creating new connections/sockets, which eventually exhaust available ports (TIME_WAIT), and I get the following error:
> [WinError 10048] Only one usage of each socket address (protocol/network address/port) is normally permitted',))
```
s = requests.Session()
data = zip(url_routes, cycle(s))
calc_routes = pool.map(processRequest, data)
```
I posted a bit more [here](http://stackoverflow.com/questions/35793908/python-multiprocessing-associate-a-process-with-a-session), however not sure how to address this
| null | null | null | {} | [
{
"Loc": [
8
],
"path": null
}
] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "3",
"info_type": "Code"
} | {
"code": [
null
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
psf | requests | 6f659a41794045292b836859f1281d33eeed8260 | https://github.com/psf/requests/issues/3740 | File download weirdness | I noticed this issue building conda recipes. Conda uses requests to download files from the internet.
The file that is being fetched is: https://dakota.sandia.gov/sites/default/files/distributions/public/dakota-6.5-public.src.tar.gz
(link found here: https://dakota.sandia.gov/download.html)
Downloading with curl -O
filesize: 78MB
md5: 02c46e904d40bba6b308065db34c1ad7
Downloading with urllib2 (from the standard library):
filesize: 78MB
md5: 02c46e904d40bba6b308065db34c1ad7
Downloading with requests-2.12.1 (supplied with conda)
filesize: 248MB
md5: 41e4268140d850756812510512d8eee8
tar -tf doesn't indicate any corruption.
I'm not sure what is different with this particular URL, but the other files I tried with requests worked. I don't know where the extra 170MB is coming from.
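One hedged guess at the mechanism: requests transparently decodes `Content-Encoding: gzip`, so `iter_content()` yields the *decompressed* bytes and the saved file grows by roughly the compression ratio. A small sketch of the size effect, plus an illustrative helper that saves the undecoded wire bytes via `Response.raw`:

```python
import gzip
import shutil

# Transparent decoding inflates what gets written to disk:
payload = b"example " * 10000
wire_bytes = gzip.compress(payload)
print(len(wire_bytes), "bytes on the wire ->", len(payload), "after decoding")

def download_raw(url, fn):
    """Save the bytes exactly as served (keeps a .tar.gz a .tar.gz)."""
    import requests  # deferred; this function is illustrative and not called here
    r = requests.get(url, stream=True)
    with open(fn, 'wb') as f:
        shutil.copyfileobj(r.raw, f)  # r.raw skips transparent decoding
```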
code used to download files:
```python
def download_file(url, fn):
    r = requests.get(url, stream=True)
    with open(fn, 'wb') as f:
        for chunk in r.iter_content(chunk_size=1024):
            if chunk:
                f.write(chunk)

def download_urllib2(url, fn):
    f = urllib2.urlopen(url)
    with open(fn, 'wb') as fh:
        for x in iter(lambda: f.read(1024), b''):
            fh.write(x)
``` | null | null | null | {'base_commit': '6f659a41794045292b836859f1281d33eeed8260', 'files': [{'path': 'docs/user/quickstart.rst', 'Loc': {'(None, None, 166)': {'mod': [166]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Doc"
} | {
"code": [],
"doc": [
"docs/user/quickstart.rst"
],
"test": [],
"config": [],
"asset": []
} | null | |
psf | requests | 62176a1ca7207db37273365b4691ed599203b828 | https://github.com/psf/requests/issues/3849 | Received response with content-encoding: gzip, but failed to decode it | ```python
import requests
requests.get('http://gett.bike/')
```
This code raises the following exception:
```python
ContentDecodingError: ('Received response with content-encoding: gzip, but failed to decode it.',
error('Error -3 while decompressing data: incorrect data check',))
```
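For context, `'Error -3 ... incorrect data check'` means zlib rejected the gzip trailer checksum, which usually points at a server sending a corrupt or truncated stream. A hedged sketch (function name illustrative, not a requests API) of a tolerant manual decode using a raw `zlib` object, which returns whatever decodes instead of raising at the trailer:

```python
import gzip
import zlib

def lenient_gunzip(data):
    # wbits = 16 + MAX_WBITS tells zlib to expect a gzip header
    d = zlib.decompressobj(16 + zlib.MAX_WBITS)
    return d.decompress(data)

blob = gzip.compress(b"hello world")
print(lenient_gunzip(blob))        # b'hello world'
print(lenient_gunzip(blob[:-4]))   # trailer truncated: still decodes
```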
Arch linux x64
requests==2.13.0
python=3.6.0 | null | null | null | {'base_commit': '62176a1ca7207db37273365b4691ed599203b828', 'files': [{'path': 'src/requests/api.py', 'Loc': {"(None, 'request', 14)": {'mod': [24]}}, 'status': 'modified'}, {'Loc': [4], 'path': None}]} | [
{
"Loc": [
4
],
"path": null
}
] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "3",
"info_type": "Code"
} | {
"code": [
null,
"src/requests/api.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
psf | requests | 057722af23edf3f69bf7bdfed7c6c32cbe1ce2e7 | https://github.com/psf/requests/issues/3015 | Ability to set timeout after response | For devs who use this great library, it would be very beneficial to be able to set the timeout AFTER initial connection. There are a few scenarios where this is useful but one of the main patterns/use cases is this:
```
import requests
import socket
# May or may not subclass threading.Thread
class Getter(object):
    def __init__(self):
        self.request = requests.get(url, stream=True)

    def run(self):
        with open(path, 'r+b') as file:
            bytes_consumed = 0
            while True:
                try:
                    chunk = self.request.raw.read(size)
                    if not chunk:
                        break
                    chunk_length = len(chunk)
                    file.write(chunk)
                    bytes_consumed += chunk_length
                except socket.timeout:
                    # handle incomplete download by using range header next time, etc.
                    pass
```
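At the raw-socket layer, the read timeout *can* be changed after the connection exists, which is essentially what the request asks requests to expose. A network-free sketch using a socketpair as a stand-in for a stalled server:

```python
import socket

a, b = socket.socketpair()
a.settimeout(0.05)          # set retroactively, on an already-open connection
try:
    a.recv(1)               # peer sends nothing -> times out instead of blocking
    timed_out = False
except socket.timeout:
    timed_out = True
finally:
    a.close()
    b.close()
print(timed_out)  # True
```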
Handling incomplete downloads due to connection loss is common and especially important when downloading large or many files (or both). As you can see, this can be achieved in a fairly straightforward way. The issue is there is really no good way to write tests for this. Each method would involve OS specific code which would also be a no-go for CI services.
What would be an option is the ability to set the timeout after establishing a connection. This way in a test you could do "r.timeout = (None, 0.00001)" and during reading it would simulate a timeout.
To my knowledge there is no way currently to inject a new Timeout class retroactively. Is this correct?
| null | null | null | {} | [
{
"Loc": [
20
],
"path": null
}
] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "3",
"info_type": "Code"
} | {
"code": [
null
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
psf | requests | 1285f576ae0a848de27af10d917c19b60940d1fa | https://github.com/psf/requests/issues/3774 | bad handshake error with ssl3 | I have an in-house IIS server with ssl3 but an expired certificate, so I used requests without certificate verification, and it was working fine with requests 2.11.1. But after I upgraded requests to 2.12.0, an error occurred.
the code is:
...
requests.get('https://10.192.8.89:8080/yps_report', verify=False)
...
error message:
Traceback (most recent call last):
File "c:\python35\lib\site-packages\requests\packages\urllib3\contrib\pyopenssl.py", line 417, in wrap_socket
cnx.do_handshake()
File "c:\python35\lib\site-packages\OpenSSL\SSL.py", line 1426, in do_handshake
self._raise_ssl_error(self._ssl, result)
File "c:\python35\lib\site-packages\OpenSSL\SSL.py", line 1167, in _raise_ssl_error
raise SysCallError(-1, "Unexpected EOF")
OpenSSL.SSL.SysCallError: (-1, 'Unexpected EOF')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\python35\lib\site-packages\requests\packages\urllib3\connectionpool.py", line 594, in urlopen
chunked=chunked)
File "c:\python35\lib\site-packages\requests\packages\urllib3\connectionpool.py", line 350, in _make_request
self._validate_conn(conn)
File "c:\python35\lib\site-packages\requests\packages\urllib3\connectionpool.py", line 835, in _validate_conn
conn.connect()
File "c:\python35\lib\site-packages\requests\packages\urllib3\connection.py", line 323, in connect
ssl_context=context)
File "c:\python35\lib\site-packages\requests\packages\urllib3\util\ssl_.py", line 324, in ssl_wrap_socket
return context.wrap_socket(sock, server_hostname=server_hostname)
File "c:\python35\lib\site-packages\requests\packages\urllib3\contrib\pyopenssl.py", line 424, in wrap_socket
raise ssl.SSLError('bad handshake: %r' % e)
ssl.SSLError: ("bad handshake: SysCallError(-1, 'Unexpected EOF')",)
...
I tried to downgrade requests to 2.11.1 and the error was gone. I have no idea how to fix this.
from requests.adapters import HTTPAdapter
from requests.packages.urllib3.util.ssl_ import create_urllib3_context
# This is the 2.11 Requests cipher string.
CIPHERS = (
'ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+HIGH:'
'DH+HIGH:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+HIGH:RSA+3DES:!aNULL:'
'!eNULL:!MD5'
)
class DESAdapter(HTTPAdapter):
    def init_poolmanager(self, *args, **kwargs):
        context = create_urllib3_context(ciphers=CIPHERS)
        kwargs['ssl_context'] = context
        # Must call super() on DESAdapter, not HTTPAdapter, otherwise
        # HTTPAdapter's own init_poolmanager is skipped in the MRO.
        return super(DESAdapter, self).init_poolmanager(*args, **kwargs)
s = requests.Session()
s.mount('https://10.192.8.89', DESAdapter()) | null | null | null | {} | [
{
"Loc": [
41
],
"path": null
}
] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "3\nThe user's code from one of the comments below needs to be put here",
"info_type": "Code"
} | {
"code": [
null
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
ansible | ansible | a6d4c3ff7cf43c24be6622102cee834fc5096496 | https://github.com/ansible/ansible/issues/78759 | module
support:core
bug
affects_2.9 | "Invalid data passed to 'loop', it requires a list, got this instead: <built-in method values of dict object at 0x7f63b782bf80>. | ### Summary
When trying to pass a variable reference such as sysctl.values (a dict with a key named values) to loop, I get the above error.
### Issue Type
Bug Report
### Component Name
debug (only used for debugging)
### Ansible Version
```console
$ ansible --version
ansible 2.9.27
config file = /home/rf/.ansible.cfg
configured module search path = ['/home/rf/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.10/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.10.6 (main, Aug 2 2022, 00:00:00) [GCC 11.3.1 20220421 (Red Hat 11.3.1-2)]
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
[I] </m/d/playground>-2-> ansible-config dump --only-changed
ANSIBLE_PIPELINING(/home/rf/.ansible.cfg) = True
ANSIBLE_SSH_ARGS(/home/rf/.ansible.cfg) = -o ControlMaster=auto -o ControlPersist=60s
DEFAULT_FORKS(/home/rf/.ansible.cfg) = 50
DEFAULT_HOST_LIST(/home/rf/.ansible.cfg) = ['/home/rf/hosts']
INVENTORY_CACHE_ENABLED(/home/rf/.ansible.cfg) = True
```
### OS / Environment
Fedora 36
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- name: Test
hosts: localhost
gather_facts: True
tasks:
- debug:
msg: "{{ item }}"
loop: "{{ sysctl2 }}"
- debug:
msg: "{{ item }}"
loop: "{{ sysctl.values }}"
vars:
sysctl:
values:
- { name: "net.ipv4.ip_forward", value: "1" }
sysctl2:
- { name: "net.ipv4.ip_forward", value: "1" }
```
### Expected Results
Output of debug using sysctl.values
### Actual Results
```console
PLAY [Test] ********************************************************************************************************************************************************************************************
TASK [Gathering Facts] *********************************************************************************************************************************************************************************
ok: [localhost]
TASK [debug] *******************************************************************************************************************************************************************************************
ok: [localhost] => (item={'name': 'net.ipv4.ip_forward', 'value': '1'}) => {
"msg": {
"name": "net.ipv4.ip_forward",
"value": "1"
}
}
TASK [debug] *******************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"msg": "Invalid data passed to 'loop', it requires a list, got this instead: <built-in method values of dict object at 0x7f63b782bf80>. Hint: If you passed a list/dict of just one element, try adding wantlist=True to your lookup invocation or use q/query instead of lookup."}
PLAY RECAP *********************************************************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
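For reference, the collision can be reproduced in plain Python (a sketch of my assumption about the cause, independent of Ansible internals): dotted attribute lookup on a dict finds the built-in `values` method before any key, which is why bracket notation such as `sysctl['values']` avoids the error:

```python
sysctl = {"values": [{"name": "net.ipv4.ip_forward", "value": "1"}]}

# Attribute access resolves to the dict's built-in method, not the key:
print(sysctl.values)      # <built-in method values of dict object at 0x...>

# Subscript access resolves to the key, which is the list that loop expects:
print(sysctl["values"])   # [{'name': 'net.ipv4.ip_forward', 'value': '1'}]
```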
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | null | null | null | {} | [
{
"Loc": [
59
],
"path": null
}
] | [] | [] | {
"iss_type": "2",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "3",
"info_type": "Code"
} | {
"code": [
null
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
ansible | ansible | 8af920c8924b2fd9a0e4192c3c7e6085b687bfdc | https://github.com/ansible/ansible/issues/82382 | bug
affects_2.16 | Ansible core 2.16.1 broke AnsibleUnsafeBytes iteration | ### Summary
After upgrading from 2.16.0 to 2.16.1 (Ansible 9.0.1 to 9.1.0), iterating over AnsibleUnsafeBytes no longer produces a list of integers.
### Issue Type
Bug Report
### Component Name
core, unsafe_proxy
### Ansible Version
```console
$ ansible --version
ansible [core 2.16.1]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.12/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.12.0 (main, Nov 29 2023, 03:32:06) [GCC 10.2.1 20210110] (/usr/local/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
/bin/sh: 1: less: not found
```
(sorry, dockerized environment)
### OS / Environment
Debian bullseye / 11 (in python docker image: `python:3.12.0-bullseye`), ansible via pip (`ansible==9.1.0`)
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```py
from ansible.utils.unsafe_proxy import AnsibleUnsafeText
x = AnsibleUnsafeText("asdf")
y = x.encode("utf8")
list(y)
```
### Expected Results
```
[97, 115, 100, 102]
```
This is what happens on 2.16.0.
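For comparison, a plain `bytes` subclass in stock Python keeps the integer-iteration behavior (a minimal sketch; `UnsafeBytes` here is a hypothetical stand-in, not the Ansible class):

```python
class UnsafeBytes(bytes):
    """Hypothetical stand-in for a tainted-bytes wrapper."""
    pass

y = UnsafeBytes("asdf".encode("utf8"))
print(list(y))  # [97, 115, 100, 102] -- iterating bytes yields ints
```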
### Actual Results
```console
[b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00']
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | null | null | null | {'base_commit': '8af920c8924b2fd9a0e4192c3c7e6085b687bfdc', 'files': [{'path': 'Version', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Other"
} | {
"code": [],
"doc": [],
"test": [],
"config": [],
"asset": [
"Version"
]
} | null |
ansible | ansible | bcf9cd1e2a01d8e111a28db157ebc255a5592dca | https://github.com/ansible/ansible/issues/20085 | cloud
affects_2.1
module
docker
bug | docker_container task fail on exit code | Unless I'm missing something, I expect that if I were to do something like the following, the task would fail? But it does not 😟
```yaml
tasks:
  - docker_container:
      name: "exit-test"
      image: "ubuntu:latest"
      command: "bash -c 'exit 123'"
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
docker_container
##### ANSIBLE VERSION
```
2.1.1.0
```
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
```yaml
tasks:
  - docker_container:
      name: "exit-test"
      image: "ubuntu:latest"
      command: "bash -c 'exit 123'"
```
##### EXPECTED RESULTS
Should fail the task
##### ACTUAL RESULTS
Task is ok. | null | null | null | {} | [] | [] | [
{
"org": "ansible",
"pro": "ansible-modules-core",
"path": [
"cloud/docker/docker_container.py"
]
}
] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "2",
"info_type": "Code"
} | {
"code": [
"cloud/docker/docker_container.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
ansible | ansible | d5324c11a0c389d2ede8375e2024cb37b9eb8ce5 | https://github.com/ansible/ansible/issues/19352 | affects_2.0
module
support:core
bug
files | Template update convert \n to actual new line | ##### ISSUE TYPE
Bug Report
##### COMPONENT NAME
template
##### ANSIBLE VERSION
2.0 and higher
CONFIGURATION
```
[ssh_connection]
control_path = %(directory)s/%%C
```
##### OS / ENVIRONMENT
Mac OS X 10.11.6
Centos 6.x, 7.x
SUMMARY
In the input .j2 file, we substitute a variable with an environment variable whose value contains a grok expression with a literal `(?m)\n`. The output generated by the template module in versions 2.0 and later treats the \n as an actual line break, whereas versions up to 1.9.6 retain the literal `(?m)\n` without converting it. We started seeing the line break after upgrading Ansible to 2.x.
Any way we can work around this issue? Thank you for your help.
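For background, plain Python distinguishes the two-character sequence backslash plus n from an actual newline; the symptom looks like one extra unescaping pass somewhere in the 2.x templating chain (my assumption, sketched below without any Ansible code):

```python
literal = r"((?m)\n%{USER:logerror}"       # raw string: backslash + 'n', one line
unescaped = literal.replace("\\n", "\n")   # what an extra unescaping pass would do

print("\n" in literal)    # False -- the regex stays on one line
print("\n" in unescaped)  # True  -- the literal \n became a real line break
```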
##### STEPS TO REPRODUCE
Our execution flow is probably not the nicest - we want to reengineer it soon. Basic steps:
Run a shell script with an ansible-playbook command that passes in an env variable containing the literal `(?m)\n`.
The playbook calls a main yaml file and assigns the shell environment var to an included task yaml file.
The task yaml file invokes the template module.
In the snippet below I stripped out other lines/vars for clarity.
main shell
```
set GROK_PATTERN_GENERAL_ERROR_PG="%{TIMESTAMP_ISO8601} ERROR \[%{USER:handlerName}\] %{USER:className}%{GREEDYDATA:errorline1}((?m)\n%{USER:logerror}%{GREEDYDATA})"
```
```
ansible-playbook -i ../common/host.inventory \
-${VERBOSE} \
t.yml \
${CHECK_ONLY} \
--extra-vars "hosts='${HOST}'
xlogstash_grok_general_error='${GROK_PATTERN_GENERAL_ERROR_PG}'
"
```
t.yml
```
---
- hosts: 127.0.0.1
connection: local
tasks:
- include_vars: ../common/defaults/main.yml
- name: generate logstash kafka logscan filter config file
include: tasks/t.yml
vars:
logstash_grok_general_error: "{{xlogstash_grok_general_error}}"
```
tasks/t.yml
```
---
- name: generate logstash kafka logscan filter config file
template: src=../common/templates/my.conf.j2
dest="./500-filter.conf"
```
my.conf.j2
```
grok {
break_on_match => "true"
match => [
"message", "{{logstash_grok_general_error}}"
]
}
```
Note the `(?m)\n` is still on the same line.
##### EXPECTED RESULTS
```
grok {
break_on_match => "true"
match => [
"message", "%{TIMESTAMP_ISO8601} ERROR \[%{USER:handlerName}\] %{USER:className}%{GREEDYDATA:errorline1}((?m)\n%{USER:logerror}%{GREEDYDATA})"
]
}
```
##### ACTUAL RESULTS
Note that `(?m)\n` now has the `\n` rendered as an actual line break.
```
grok {
break_on_match => "true"
match => [
"message", "%{TIMESTAMP_ISO8601} ERROR \[%{USER:handlerName}\] %{USER:className}%{GREEDYDATA:errorline1}((?m)
%{USER:logerror}%{GREEDYDATA})"
]
}
``` | null | null | null | {'base_commit': 'd5324c11a0c389d2ede8375e2024cb37b9eb8ce5', 'files': [{'path': 'lib/ansible/template/__init__.py', 'Loc': {}}, {'path': 't.yml', 'Loc': [60]}]} | [
{
"path": "t.yml",
"Loc": [
60
]
}
] | [] | [] | {
"iss_type": "2",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "3\n+\n0",
"info_type": "Code"
} | {
"code": [
"lib/ansible/template/__init__.py"
],
"doc": [],
"test": [],
"config": [
"t.yml"
],
"asset": []
} | null |
ansible | ansible | a29fcfa9952ff40e389a5e93c880bc2a23e3f2e7 | https://github.com/ansible/ansible/issues/73922 | python3
module
support:core
bug
affects_2.10 | cron: Remove/delete an environment variable | ### Summary
With `env=yes`, `cron` adds an environment variable (using the `name` and `value` parameters).
I thought that having `env` + `state=absent` would remove said variable, but that's not the case (the cron file is actually removed).
As such, there is no way to remove a variable, and the most obvious attempt to do so produces a surprising result.
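As a stopgap, the line could be stripped by hand; a plain-Python sketch of that idea (the helper name and the file handling are my own, not part of the cron module):

```python
def drop_cron_env(text: str, name: str) -> str:
    """Drop a crontab environment line such as 'VAR=value' (hypothetical helper)."""
    kept = [line for line in text.splitlines()
            if not line.strip().startswith(name + "=")]
    return "\n".join(kept) + "\n"

sample = 'VAR="False"\n* * * * * root /usr/bin/true\n'
print(drop_cron_env(sample, "VAR"))
# * * * * * root /usr/bin/true
```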
### Issue Type
Bug Report
### Component Name
ansible.builtin.cron
### Ansible Version
```console
$ ansible --version
ansible 2.10.5
config file = /home/user/.ansible.cfg
configured module search path = ['/usr/share/ansible']
ansible python module location = /home/user/.local/lib/python3.8/site-packages/ansible
executable location = /home/user/.local/bin/ansible
python version = 3.8.5 (default, Jan 27 2021, 15:41:15) [GCC 9.3.0]
```
### Configuration
```console (paste below)
$ ansible-config dump --only-changed
```
### OS / Environment
Ubuntu 20.04
### Steps to Reproduce
```yaml
cron:
cron_file: foobar
user: root
env: yes
name: "VAR"
value: "False"
state: absent
```
### Expected Results
The "VAR" variable is removed from /etc/cron.d/foobar
### Actual Results
/etc/cron.d/foobar is removed.
There is no way to remove the "VAR" variable. | null | null | null | {'base_commit': 'a29fcfa9952ff40e389a5e93c880bc2a23e3f2e7', 'files': [{'path': 'lib/ansible/modules/cron.py', 'Loc': {'(None, None, None)': {'mod': [15]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "4",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Doc"
} | {
"code": [
"lib/ansible/modules/cron.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
ansible | ansible | 7490044bbe28029afa9e3099d86eae9fda5f88b7 | https://github.com/ansible/ansible/issues/11351 | affects_2.0
affects_2.3
c:executor/playbook_executor
support:core
feature
P3 | enable do/until with async tasks | ##### ISSUE TYPE
Feature Idea
##### COMPONENT NAME
core
##### ANSIBLE VERSION
2.0
##### CONFIGURATION
##### OS / ENVIRONMENT
##### SUMMARY
When a task is marked as async, there is no way to loop until a condition is met.
With poll: 0 and async_status you can poll for the async task to complete, but you cannot repeat the original async task itself until a condition is met.
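The pattern being asked for, re-running an operation until a condition holds, looks like this in plain Python (illustration only; not how the Ansible executor is implemented):

```python
import time

def retry_until(op, cond, retries=5, delay=0.01):
    """Re-run op() until cond(result) is true or retries are exhausted (sketch)."""
    for _ in range(retries):
        result = op()
        if cond(result):
            return result
        time.sleep(delay)
    raise TimeoutError("condition never met")

counter = {"n": 0}
def bump():
    counter["n"] += 1
    return counter["n"]

print(retry_until(bump, lambda r: r >= 3))  # 3
```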
```
cat /tmp/async-test.yml
---
# Run through the test of an async command
- hosts: all
tasks:
- name: "Check an async command"
command: /bin/sleep 3
async: 5
poll: 1
register: command_result
until: command_result.failed
retries: 5
delay: 10
```
```
$ansible-playbook -i localhost, /tmp/async-test.yml
____________
< PLAY [all] >
------------
\ ^__^
\ (oo)\_______
(__)\ )\/\
||----w |
|| ||
_________________
< GATHERING FACTS >
-----------------
\ ^__^
\ (oo)\_______
(__)\ )\/\
||----w |
|| ||
ok: [localhost]
______________________________
< TASK: Check an async command >
------------------------------
\ ^__^
\ (oo)\_______
(__)\ )\/\
||----w |
|| ||
fatal: [localhost] => error while evaluating conditional: command_result.failed: {% if command_result.failed %} True {% else %} False {% endif %}
FATAL: all hosts have already failed -- aborting
____________
< PLAY RECAP >
------------
\ ^__^
\ (oo)\_______
(__)\ )\/\
||----w |
|| ||
to retry, use: --limit @/opt/ashishkh/async-test.retry
localhost : ok=1 changed=0 unreachable=2 failed=0
```
##### STEPS TO REPRODUCE
##### EXPECTED RESULTS
##### ACTUAL RESULTS
| null | null | null | {} | [
{
"path": "/tmp/async-test.yml",
"Loc": [
33
]
}
] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "1",
"info_type": "Config"
} | {
"code": [],
"doc": [],
"test": [],
"config": [
"/tmp/async-test.yml"
],
"asset": []
} | null |
ansible | ansible | 833970483100bfe89123a5718606234115921aec | https://github.com/ansible/ansible/issues/67993 | cloud
aws
openstack
module
support:community
affects_2.5
bug
traceback
system | Stickiness type 'lb_cookie' is not supported for target groups with the TCP protocol(unable to disable stickiness not supported in NLB) | ##### SUMMARY
We are using Ansible 2.5 to deploy AWS resources in our environment. Since March 02, 2019, our deployment has been failing with the error below.
ERROR:
=====
TASK [immutable_server : target group for analytics-tst-plebos loadbalancer] ***
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: InvalidConfigurationRequestException:
An error occurred (InvalidConfigurationRequest) when calling the ModifyTargetGroupAttributes operation:
Stickiness type 'lb_cookie' is not supported for target groups with the TCP protocol
17:21:08 fatal: [localhost]: FAILED! => {"changed": false, "error": {"code": "InvalidConfigurationRequest", "message": "Stickiness type 'lb_cookie'
is not supported for target groups with the TCP protocol", "type": "Sender"}, "msg": "An error occurred (InvalidConfigurationRequest)
when calling the ModifyTargetGroupAttributes operation: Stickiness type 'lb_cookie' is not supported for target groups with the TCP protocol",
"response_metadata": {"http_headers": {"connection": "close", "content-length": "359", "content-type": "text/xml", "date": "Tue, 03 Mar 2020 11:51:08 GMT",
"x-amzn-requestid": "23b0ca87-e0fb-4b84-b93b-ae5b1363df53"}, "http_status_code": 400, "request_id": "23b0ca87-e0fb-4b84-b93b-ae5b1363df53", "retry_attempts": 0}}
##### ISSUE TYPE
- Bug Report - Unable to disable stickiness not supported in NLB
##### COMPONENT NAME
- name: "target group for {{ server_name }} loadbalancer"
elb_target_group:
state: present
name: "{{ server_name }}-elb"
protocol: tcp
port: 80
target_type: instance
deregistration_delay_timeout: 35
modify_targets: False
vpc_id: "{{ vpc_out.vpcs.0.id }}"
health_check_protocol: "{{ load_balancer_ping_protocol | default('http') }}"
health_check_port: "{{ load_balancer_ping_port | default('80') }}"
health_check_path: "{{ load_balancer_ping_path | default('/elb/ping')}}"
health_check_interval: 30
unhealthy_threshold_count: 2
healthy_threshold_count: 2
stickiness_enabled: False
tags: "{{ aws.tags_as_dict }}"
register: target_group_out
##### ANSIBLE VERSION
```paste below
Ansible version = 2.5.0
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
- name: "target group for {{ server_name }} loadbalancer"
elb_target_group:
state: present
name: "{{ server_name }}-elb"
protocol: tcp
port: 80
target_type: instance
deregistration_delay_timeout: 35
modify_targets: False
vpc_id: "{{ vpc_out.vpcs.0.id }}"
health_check_protocol: "{{ load_balancer_ping_protocol | default('http') }}"
health_check_port: "{{ load_balancer_ping_port | default('80') }}"
health_check_path: "{{ load_balancer_ping_path | default('/elb/ping')}}"
health_check_interval: 30
unhealthy_threshold_count: 2
healthy_threshold_count: 2
stickiness_enabled: False
tags: "{{ aws.tags_as_dict }}"
register: target_group_out
```
##### OS / ENVIRONMENT
Ubuntu 18.04 LTS / AWS environment
##### STEPS TO REPRODUCE
Kindly use the below playbook to deploy loadbalancer using Ansible on AWS cloud.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: "target group for {{ server_name }} loadbalancer"
elb_target_group:
state: present
name: "{{ server_name }}-elb"
protocol: tcp
port: 80
target_type: instance
deregistration_delay_timeout: 35
modify_targets: False
vpc_id: "{{ vpc_out.vpcs.0.id }}"
health_check_protocol: "{{ load_balancer_ping_protocol | default('http') }}"
health_check_port: "{{ load_balancer_ping_port | default('80') }}"
health_check_path: "{{ load_balancer_ping_path | default('/elb/ping')}}"
health_check_interval: 30
unhealthy_threshold_count: 2
healthy_threshold_count: 2
stickiness_enabled: False
tags: "{{ aws.tags_as_dict }}"
register: target_group_out
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
An AWS Network loadbalancer will be created.
##### ACTUAL RESULTS
The deployment fails with below error.
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [immutable_server : target group for analytics-tst-plebos loadbalancer] ***
17:21:08 An exception occurred during task execution. To see the full traceback, use -vvv. The error was: InvalidConfigurationRequestException:
An error occurred (InvalidConfigurationRequest) when calling the ModifyTargetGroupAttributes operation:
Stickiness type 'lb_cookie' is not supported for target groups with the TCP protocol
17:21:08 fatal: [localhost]: FAILED! => {"changed": false, "error": {"code": "InvalidConfigurationRequest", "message": "Stickiness type 'lb_cookie'
is not supported for target groups with the TCP protocol", "type": "Sender"}, "msg": "An error occurred (InvalidConfigurationRequest)
when calling the ModifyTargetGroupAttributes operation: Stickiness type 'lb_cookie' is not supported for target groups with the TCP protocol",
"response_metadata": {"http_headers": {"connection": "close", "content-length": "359", "content-type": "text/xml", "date": "Tue, 03 Mar 2020 11:51:08 GMT",
"x-amzn-requestid": "23b0ca87-e0fb-4b84-b93b-ae5b1363df53"}, "http_status_code": 400, "request_id": "23b0ca87-e0fb-4b84-b93b-ae5b1363df53", "retry_attempts": 0}}
```
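One likely shape of a fix inside the module, sketched in plain Python (the attribute key names follow ELBv2 conventions from memory; treat them and the helper as assumptions to verify against the boto3 documentation):

```python
def build_target_group_attributes(protocol, stickiness_enabled):
    """Skip cookie-stickiness attributes for TCP (NLB) target groups (sketch)."""
    attrs = [{"Key": "deregistration_delay.timeout_seconds", "Value": "35"}]
    if protocol.lower() != "tcp":  # lb_cookie stickiness is HTTP/HTTPS-only
        attrs.append({"Key": "stickiness.type", "Value": "lb_cookie"})
        attrs.append({"Key": "stickiness.enabled",
                      "Value": str(stickiness_enabled).lower()})
    return attrs

print(build_target_group_attributes("tcp", False))
# only the deregistration delay attribute; no stickiness keys for TCP
```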
##### References
I can see a similar issue occurred for terraform users as well.
https://github.com/terraform-providers/terraform-provider-aws/issues/10494
| null | null | null | {} | [
{
"Loc": [
20
],
"path": null
}
] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "3",
"info_type": "Code"
} | {
"code": [
null
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
ultralytics | yolov5 | 6f718cee740e7cd423edd1136db78c5be49fa7c0 | https://github.com/ultralytics/yolov5/issues/2467 | question
Stale | Problems with weights | ## ❔Question
Hello, I have just run the train.py script with my data and ran into a problem: you wrote that weights are saved in the runs directory, but in my case I have not found them. Everything is fine with hyp.yaml and opt.yaml, but the "weights" folder is empty.
Do you have any guesses about this issue?
## Additional context
| null | null | null | {'base_commit': '6f718cee740e7cd423edd1136db78c5be49fa7c0', 'files': [{'path': 'train.py', 'Loc': {'(None, None, None)': {'mod': [470, 454]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "2\nweights cannot be found",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"train.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
ultralytics | yolov5 | 06831aa9e905e0fa703958f6b3f3db443cf477f3 | https://github.com/ultralytics/yolov5/issues/9079 | Does adjusting the number of classes of a pretrained model work? | ### Search before asking
- [x] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions. *
### Question
Hi everyone,
I'm a bit confused about how to properly load a pretrained model with an adjusted number of classes for training with a custom dataset.
On the [Load YOLOv5 from PyTorch Hub ⭐](https://github.com/ultralytics/yolov5/issues/36) page you've explained that one can adjust the number of classes in the pretrained model by using the following command. `model = torch.hub.load('ultralytics/yolov5', 'yolov5s', classes=10)`
<img width="999" alt="Bildschirmfoto 2022-08-22 um 08 13 15" src="https://user-images.githubusercontent.com/5917496/185851461-b177aa78-2b56-46a1-9c43-081d2a746938.png">
When I do so, I can see that a model.yaml file is overwritten, but I do not know where this file is stored.
Now, what actually confuses me about the number of classes is that when I try to use this pretrained model for detection without any further training, I see an error that the model was trained with nc=80 and my data is incompatible with nc=13:
`AssertionError: ['yolov5s6.pt'] (80 classes) trained on different --data than what you passed (13 classes). Pass correct combination of --weights and --data that are trained together.`
I know that I cannot expect any proper predictions since the last layers are initialized with random weights, but I was expecting the model to be compatible with the 13-class dataset.
Is this behavior to be expected or am I doing something wrong here?
Do I need to find and use the model.yaml file and is the only thing changed in there 'nc=13'? | null | null | null | {'base_commit': '06831aa9e905e0fa703958f6b3f3db443cf477f3', 'files': [{'path': 'train.py', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"train.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
ultralytics | yolov5 | ee8988b8a2ed07af1b7c8807d39aad35369f0e28 | https://github.com/ultralytics/yolov5/issues/8 | Stale | training actually can not work | After training for several epochs, I found the mAP is still very low. Does the training really work?
```
Epoch gpu_mem GIoU obj cls total targets img_size
14/299 6.4G 0.02273 0.002925 0.0003764 0.02603 11 640: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 6960/6960 [54:20<00:00, 2.13it/s]
Class Images Targets P R mAP@.5 mAP@.5:.95: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████| 6960/6960 [13:37<00:00, 8.51it/s]
all 5.57e+04 1.74e+05 0.000332 0.00039 2.4e-06 8.59e-07
Epoch gpu_mem GIoU obj cls total targets img_size
15/299 6.4G 0.02232 0.002874 0.000371 0.02556 7 640: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 6960/6960 [54:36<00:00, 2.12it/s]
Class Images Targets P R mAP@.5 mAP@.5:.95: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████| 6960/6960 [14:23<00:00, 8.06it/s]
all 5.57e+04 1.74e+05 0.000342 0.000401 2.44e-06 8.66e-07
``` | null | null | null | {'base_commit': 'ee8988b8a2ed07af1b7c8807d39aad35369f0e28', 'files': [{'path': 'models/yolov5s.yaml', 'Loc': {'(None, None, 2)': {'mod': [2]}}, 'status': 'modified'}, {'path': 'README.md', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "",
"info_type": "Code"
} | {
"code": [],
"doc": [
"README.md"
],
"test": [],
"config": [
"models/yolov5s.yaml"
],
"asset": []
} | null |
ultralytics | yolov5 | 901243c7806be07b31073440cf721e73532a0734 | https://github.com/ultralytics/yolov5/issues/894 | question | training stuck when loading dataset | ## ❔Question
I followed the instructions to run coco128:
```
python train.py --img 640 --batch 16 --epochs 5 --data ./data/coco128.yaml --cfg ./models/yolov5s.yaml --weights '',
```
the output is
```
Image sizes 640 train, 640 test
Using 8 dataloader workers
Starting training for 5 epochs...
Epoch gpu_mem GIoU obj cls total targets img_size
0%| | 0/8 [00:00<?, ?it/s
```
then it hangs. I found that it gets stuck while loading the dataset,
in https://github.com/ultralytics/yolov5/blob/master/train.py#L244,
```
for i, (imgs, targets, paths, _) in pbar:
```
It just stops here. Could you help me?
| null | null | null | {'base_commit': '901243c7806be07b31073440cf721e73532a0734', 'files': [{'path': 'train.py', 'Loc': {'(None, None, None)': {'mod': [388]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"train.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
ultralytics | yolov5 | 63060910a68bfde238872d629ab88e2e7bc736e8 | https://github.com/ultralytics/yolov5/issues/3735 | question
Stale | Results interpretation | Hello,
Another question to do with results interpretation. I am not very sure how to interpret the results.txt file that gets generated after training is over. Also, is there any way to extract the number of false positives, true positives, false negatives, as well as to see the total mean average accuracy and loss (like with yolov4)?
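For the counts question, once TP/FP/FN are known, the standard metrics follow directly (generic definitions, not YOLOv5-specific; the numbers below are made up):

```python
tp, fp, fn = 90, 10, 20  # example counts

precision = tp / (tp + fp)                          # 0.9
recall = tp / (tp + fn)                             # ~0.818
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean

print(round(precision, 3), round(recall, 3), round(f1, 3))
```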
Further, after training is done, can the best weights obtained from training be used to test on unseen data (more specifically, multiple images)?
Thanks in advance again! | null | null | null | {'base_commit': '63060910a68bfde238872d629ab88e2e7bc736e8', 'files': [{'path': 'README.md', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "5",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [],
"doc": [
"README.md"
],
"test": [],
"config": [],
"asset": []
} | null |
ultralytics | yolov5 | dc54ed5763720ced4f6784552c47534af5413d45 | https://github.com/ultralytics/yolov5/issues/6062 | question
Stale | How to add some private information into .pt file? | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
yolov5 is a great algorithm, but I'm having some problems. Specifically, I want to add some private information to the .pt file; can this be done?
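Since a .pt checkpoint is, at bottom, a serialized Python dict, extra keys survive a save/load round trip; a sketch with plain pickle standing in for torch.save/torch.load (the key names here are my own invention):

```python
import pickle

ckpt = {"model": "weights-placeholder", "epoch": 42}        # stand-in for a loaded .pt dict
ckpt["private_info"] = {"owner": "me", "note": "internal"}  # hypothetical extra key

blob = pickle.dumps(ckpt)          # torch.save(ckpt, path) in real code
restored = pickle.loads(blob)      # torch.load(path) in real code
print(restored["private_info"]["owner"])  # me
```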
### Additional
_No response_ | null | null | null | {'base_commit': 'dc54ed5763720ced4f6784552c47534af5413d45', 'files': [{'path': 'train.py', 'Loc': {"(None, 'train', 58)": {'mod': [377]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"train.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
ultralytics | yolov5 | 79af1144c270ac7169553d450b9170f9c60f92e4 | https://github.com/ultralytics/yolov5/issues/4517 | question
Stale | what is moasic and what is its default and how to delete it | What is the meaning of mosaic?
Where can I find its default parameter?
How do I stop mosaic, and augmentation in general?
I use only the line below. Does it augment data by default or not? How do I stop augmentation if it is enabled?
```
!python train.py --img 640 --batch 16 --epochs 400 --data /mydrive/data.yaml \
--weights /mydrive/yolov5s.pt --cache --project /mydrive/train/
``` | null | null | null | {'base_commit': '79af1144c270ac7169553d450b9170f9c60f92e4', 'files': [{'path': 'data/hyps/hyp.scratch.yaml', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Config file"
} | {
"code": [],
"doc": [],
"test": [],
"config": [
"data/hyps/hyp.scratch.yaml"
],
"asset": []
} | null |
ultralytics | yolov5 | 0d8a1842373e55f8f639adede0c3d378f1ffbea5 | https://github.com/ultralytics/yolov5/issues/4717 | bug | [onnx export.py error] Unsupported ONNX opset version | `ONNX: starting export with onnx 1.10.1...`
`ONNX: export failure: Unsupported ONNX opset version: 13`
I'm using
yolov5-5.0, pytorch1.7.0+cu101 and python3.7.9.
How can I solve this?
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"export.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
ultralytics | yolov5 | 886f1c03d839575afecb059accf74296fad395b6 | https://github.com/ultralytics/yolov5/issues/2432 | question | Experiments on GhostNet | ## ❔Question
I am just wondering about the performance when using GhostNet in experimental.py. Could you please share this experiment?
## Additional context
| null | null | null | {'base_commit': '886f1c03d839575afecb059accf74296fad395b6', 'files': [{'path': 'Models/yolov5l.yaml', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "配置"
} | {
"code": [],
"doc": [],
"test": [],
"config": [
"Models/yolov5l.yaml"
],
"asset": []
} | null |
ultralytics | yolov5 | 2026d4c5eb4e3e48b5295106db85c844000d95d1 | https://github.com/ultralytics/yolov5/issues/1498 | question
Stale | calculate fps on local system | ## ❔Question
I have been using the code to do detection from webcam. How can I know what is the speed of detection (fps) in my local system?
| null | null | null | {'base_commit': '2026d4c5eb4e3e48b5295106db85c844000d95d1', 'files': [{'path': 'README.md', 'Loc': {'(None, None, 61)': {'mod': [61]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code\nDoc"
} | {
"code": [],
"doc": [
"README.md"
],
"test": [],
"config": [],
"asset": []
} | null |
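For the fps question above, a generic way to measure detection speed is to time the inference call over many frames and skip the first few warm-up calls (model load, CUDA init). Everything here is a placeholder sketch, not yolov5's own timing code, though detect.py also prints a per-image inference time you can invert:

```python
import time

def measure_fps(process_frame, frames, warmup=1):
    """Average frames-per-second over a batch of frames, skipping warm-up calls."""
    for f in frames[:warmup]:        # first call often pays one-time setup cost
        process_frame(f)
    start = time.perf_counter()
    for f in frames[warmup:]:
        process_frame(f)
    elapsed = time.perf_counter() - start
    return (len(frames) - warmup) / elapsed

fake_detect = lambda frame: time.sleep(0.001)  # stand-in for model inference
print(round(measure_fps(fake_detect, list(range(51)))))  # bounded above by 1000 (1 ms per frame)
```

In a real webcam loop, `process_frame` would be the model forward pass plus NMS, and `frames` the captured images.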
ultralytics | yolov5 | 14797370646d25e226f0093a5982d5cd54ba729a | https://github.com/ultralytics/yolov5/issues/2797 | question | large scale dataset use --cache-images flag | ## ❔Question
Hello ~ I have a dataset with about a million images (~450 GB) and I want to use --cache-images to accelerate training (I have 128 GB RAM). Can I split the whole dataset into many sub-datasets and train them one by one (like resume training)?
## Additional context
| null | null | null | {'base_commit': '14797370646d25e226f0093a5982d5cd54ba729a', 'files': [{'path': 'train.py', 'Loc': {'(None, None, None)': {'mod': [466]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"train.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
ultralytics | yolov5 | f5335f22bbd6037124d60edb3c2d1934d7673e23 | https://github.com/ultralytics/yolov5/issues/8907 | question
Stale | I am making UI by QT for Yolov5 training. Where is making the result image (results.png) after training? | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
I am making a UI in Qt for YOLOv5 training. Where is the result image (results.png) generated after training?
I would like to draw the graphs for (train/box_loss), (metrics/precision), and (metrics/recall) each time an epoch of training finishes.
Thank you for your help.
### Additional
_No response_ | null | null | null | {'base_commit': 'f5335f22bbd6037124d60edb3c2d1934d7673e23', 'files': [{'path': 'utils/plots.py', 'Loc': {"(None, 'plot_results', 418)": {'mod': []}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"utils/plots.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
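The record above localizes to `plot_results` in `utils/plots.py`, which renders results.png from the per-epoch results.csv written during training. To update a Qt graph after each epoch, it is enough to re-read that CSV; a standard-library sketch follows (the column names such as `metrics/recall` are assumptions based on recent yolov5 versions, where headers are space-padded):

```python
import csv
import io

# Stand-in for runs/train/exp/results.csv, appended to once per epoch.
RESULTS_CSV = """epoch,train/box_loss,metrics/precision,metrics/recall
0,0.1120,0.401,0.350
1,0.0954,0.523,0.472
"""

def read_metric(csv_text, column):
    # yolov5 pads CSV headers with spaces, so strip them before lookup.
    reader = csv.DictReader(io.StringIO(csv_text))
    reader.fieldnames = [name.strip() for name in reader.fieldnames]
    return [float(row[column]) for row in reader]

print(read_metric(RESULTS_CSV, "metrics/recall"))  # [0.35, 0.472]
```

In the real application you would read the actual file with `open(...)` and redraw the plot widget whenever a new row appears.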
ultralytics | yolov5 | 0ab303b04499b6b912d8212a4fa10fe3fcb78efa | https://github.com/ultralytics/yolov5/issues/8708 | question
Stale | Significance of --half? | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Can you please let me know the significance of --half during training process....
### Additional
_No response_ | null | null | null | {'base_commit': '0ab303b04499b6b912d8212a4fa10fe3fcb78efa', 'files': [{'path': 'val.py', 'Loc': {"(None, 'parse_opt', 330)": {'mod': [351]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "5",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"val.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
ultralytics | yolov5 | b74929c910f9cd99d2ece587e57bce1ae000d3ba | https://github.com/ultralytics/yolov5/issues/4252 | question | Training speed and memory | I noticed your instructions about training,
Run commands below to reproduce results on COCO dataset (dataset auto-downloads on first use). Training times for YOLOv5s/m/l/x are 2/4/6/8 days on a single V100 (multi-GPU times faster). Use the largest --batch-size your GPU allows (batch sizes shown for 16 GB devices).
I want to train from scratch on the coco dataset.(A100 x1).The code was just downloaded.
The following is the situation during my training.The specific parameters can be seen in the screenshot.
python train.py --cfg models/yolov5s.yaml --data data/coco.yaml --device 0 --batch-size 64 -> 16min/epoch
python train.py --cfg models/yolov5s.yaml --data data/coco.yaml --device 0 --batch-size 128 ->16min/epoch
python train.py --cfg models/yolov5s.yaml --data data/coco.yaml --device 0 --batch-size 192 ->20min/epoch
python train.py --cfg models/yolov5s.yaml --data data/coco.yaml --device 0 --batch-size 192 --workers 16->16min/epoch

My question
1. Why did the time required for training not decrease when I increased the batch size?
2. What is the relationship between workers and batch size? I noticed you seem to cap workers at 8 in the code (why 8?).
3. Between epoch 0 and 1, GPU memory usage changes by about x1.5. What may be the reason for this?
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"train.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
ultralytics | yolov5 | 404749a33cc29d119f54b2ce35bf3b33a847a487 | https://github.com/ultralytics/yolov5/issues/2186 | question | Can we return objectness score and class score? | ## ❔Question
I am wondering if it is possible to return confidence scores for objectness and classification separately for each predicted box during inference? I might be conceptually off base here, but I am interested in understanding if the model is unsure if the box itself is correct or if the class it is assigning to the box is correct. My understanding is the `conf` that is returned now is a combo of the two? | null | null | null | {'base_commit': '404749a33cc29d119f54b2ce35bf3b33a847a487', 'files': [{'path': 'detect.py', 'Loc': {"(None, 'detect', 18)": {'mod': [103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113]}}, 'status': 'modified'}, {'path': 'utils/general.py', 'Loc': {"(None, 'non_max_suppression', 340)": {'mod': []}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"utils/general.py",
"detect.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
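For the objectness-vs-class question above, the record points at `non_max_suppression` in `utils/general.py`, where yolov5 multiplies objectness into the per-class scores before thresholding, so the reported `conf` is indeed the product of the two. A minimal sketch of recovering both parts from one raw prediction row; the row layout `[x, y, w, h, obj, cls0, cls1, ...]` is the usual YOLO head output, but treat it as an assumption for your exact version:

```python
def split_confidence(pred_row):
    """Return (obj_conf, cls_conf, cls_id, combined_conf) for one raw prediction row."""
    obj_conf = pred_row[4]
    class_scores = pred_row[5:]
    # Best class is the argmax over the per-class scores.
    cls_id = max(range(len(class_scores)), key=class_scores.__getitem__)
    cls_conf = class_scores[cls_id]
    # NMS thresholds on the product, mirroring x[:, 5:] *= x[:, 4:5] in yolov5.
    return obj_conf, cls_conf, cls_id, obj_conf * cls_conf

row = [0.5, 0.5, 0.2, 0.3, 0.9, 0.1, 0.8, 0.05]  # obj=0.9, best class is index 1 at 0.8
print(split_confidence(row))
```

A low `obj_conf` with a high `cls_conf` means the model doubts the box itself; the reverse means it trusts the box but not the class.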
ultralytics | yolov5 | dabad5793a638cba1e5a2bbb878c9b87fe1a14a0 | https://github.com/ultralytics/yolov5/issues/3942 | enhancement
Stale | For online cutting training and detection can be improve | ## 🚀 Feature
For training on big images, people usually think about tiling the images, but yolov5 can only resize the image to a smaller size. Take the VisDrone dataset: the smallest image is 960*540; resized to fit 640*640 it becomes 640*360, but the targets in the dataset are mostly small objects, and resizing makes them even smaller. Using a bigger resolution, however, exceeds CUDA memory.
So I think online tiling for training and detection would be a good feature for yolov5. Tiling would increase training time, but it would be a great option for people who don't have a powerful GPU, and I think it would also be effective for small-object detection. It's not a new idea in detection, but it would be a useful way for people to build their own detector.
| null | null | null | {'base_commit': 'dabad5793a638cba1e5a2bbb878c9b87fe1a14a0', 'files': [{'path': 'utils/augmentations.py', 'Loc': {"('Albumentations', '__init__', 16)": {'mod': [22]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"utils/augmentations.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
ultralytics | yolov5 | c8c5ef36c9a19c7843993ee8d51aebb685467eca | https://github.com/ultralytics/yolov5/issues/1238 | question | img-weights | ## ❔Question
parser.add_argument('--img-weights', action='store_true', help='use weighted image selection for training')
In order to make --img-weights work, what else do I need to do?
dataset = LoadImagesAndLabels(path, imgsz, batch_size,
                              augment=augment,  # augment images
                              hyp=hyp,  # augmentation hyperparameters
                              rect=rect,  # rectangular training
                              cache_images=cache,
                              single_cls=opt.single_cls,
                              stride=int(stride),
                              pad=pad)
Should I add an extra param image_weights=True?
## Additional context
| null | null | null | {'base_commit': 'c8c5ef36c9a19c7843993ee8d51aebb685467eca', 'files': [{'path': 'train.py', 'Loc': {'(None, None, None)': {'mod': [397]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"train.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
ultralytics | yolov5 | 9cd89b75cca8bb165a3b19c9b8356f7b3bb22b31 | https://github.com/ultralytics/yolov5/issues/7072 | question | why can't I reproduce the mAP provided by README.md(v6.1)? | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
I used the method recommended by README.md(v6.1) to reproduce the mAP, but I failed.
'python train.py --data coco.yaml --cfg yolov5s.yaml --weights ' ' --hyp hyp.scratch-low.yaml --img 640 --batch-size 64 --epochs 300' .
All other values are defaults. The best mAP (yolov5s) I got is 37.057% (the best mAP verified at the end of each epoch, on 5000 images); it still has a gap of about 0.4% from the published 37.4%.
Similarly, reproducing yolov5n I got 27.586% vs the published 28.0%. I never reach the published results.
My GPU is GTX NVIDIA RTX A4000(16116MiB), and I think it may be enough.
Is this a normal error caused by equipment(GPU) differences, or are there other reasons?
### Additional
_No response_ | null | null | null | {'base_commit': '9cd89b75cca8bb165a3b19c9b8356f7b3bb22b31', 'files': [{'path': 'data/scripts/get_coco.sh', 'Loc': {'(None, None, 13)': {'mod': [13]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code\nDoc"
} | {
"code": [],
"doc": [],
"test": [],
"config": [],
"asset": [
"data/scripts/get_coco.sh"
]
} | null |
ultralytics | yolov5 | 079b36d72ba2ef298f7ae4dc283d8c7975eb02f6 | https://github.com/ultralytics/yolov5/issues/6540 | question | Is YOLOv5 able to detect a specific number of classes according to the project's need, like just 2 or 3 classes? | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hi, I'm using YOLOv5 in my project and I have a question. If I use "--classes" it can detect one class, but is there any way to detect more than one type, like 2 or 3 different classes? I've already tried "--classes 0 1" and "--classes [0] [1]" but without success. Thanks for the help!
### Additional
_No response_ | null | null | null | {'base_commit': '079b36d72ba2ef298f7ae4dc283d8c7975eb02f6', 'files': [{'path': 'detect.py', 'Loc': {"(None, 'parse_opt', 216)": {'mod': [231]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"detect.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
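The record above localizes to `parse_opt` in detect.py, where `--classes` is declared with `nargs='+'`, so the space-separated form `--classes 0 1 2` (no brackets) is the one argparse accepts. A self-contained sketch of that parsing, mirroring the flag definition:

```python
import argparse

parser = argparse.ArgumentParser()
# Mirrors detect.py: nargs='+' collects one or more space-separated integers.
parser.add_argument("--classes", nargs="+", type=int,
                    help="filter by class: --classes 0, or --classes 0 2 3")

opt = parser.parse_args(["--classes", "0", "1", "2"])
print(opt.classes)  # [0, 1, 2]
```

The bracketed form `--classes [0] [1]` fails because argparse passes the literal strings `"[0]"` and `"[1]"` to `int()`.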
ultralytics | yolov5 | e96c74b5a1c4a27934c5d8ad52cde778af248ed8 | https://github.com/ultralytics/yolov5/issues/4357 | question
Stale | Average Precision for each class | ## Is there any way to see the average precision for each class?
I have run my model for 1,000 epochs and I have a bunch of metrics (which are AMAZING by the way, thanks for making it so easy to see them!) and I have mAP, but I was wondering if there was a way to see the AP for each class? Like a table or something.
In addition, is it possible to see the precision-recall graphs for each class? I can see something in the images tab on wandb, but as I have 80 classes, it looks very messy. | null | null | null | {'base_commit': 'e96c74b5a1c4a27934c5d8ad52cde778af248ed8', 'files': [{'path': 'val.py', 'Loc': {"(None, 'parse_opt', 293)": {'mod': [305]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"val.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
ultralytics | yolov5 | 96e36a7c913e2433446ff410a4cf60041010a524 | https://github.com/ultralytics/yolov5/issues/4152 | question | Format of data for testing trained model | In what format do I need to feed the validation dataset to the val.py file? Should images and markup be in the same folder or in different ones? In what format should the coordinates of the bounding boxes be in - yolo or something else?
| null | null | null | {'base_commit': '96e36a7c913e2433446ff410a4cf60041010a524', 'files': [{'path': 'README.md', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "5",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [],
"doc": [
"README.md"
],
"test": [],
"config": [],
"asset": []
} | null |
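For the validation-format question above: val.py expects YOLO-format labels, one `.txt` per image with one `class cx cy w h` line per box, all coordinates normalized to [0, 1], with images and labels in parallel `images/` and `labels/` folders referenced by the data YAML. A sketch converting a pixel-space corner box to that line format (the helper name is my own):

```python
def to_yolo_line(cls_id, x1, y1, x2, y2, img_w, img_h):
    """Convert a pixel-space (x1, y1, x2, y2) box to a normalized YOLO label line."""
    cx = (x1 + x2) / 2 / img_w   # box center x, as a fraction of image width
    cy = (y1 + y2) / 2 / img_h   # box center y, as a fraction of image height
    w = (x2 - x1) / img_w        # box width fraction
    h = (y2 - y1) / img_h        # box height fraction
    return f"{cls_id} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"

# A 160x160 box with its top-left corner at (160, 160) in a 640x640 image.
print(to_yolo_line(0, 160, 160, 320, 320, 640, 640))
# 0 0.375000 0.375000 0.250000 0.250000
```

Each label file shares its basename with the corresponding image (img001.jpg -> img001.txt).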
CorentinJ | Real-Time-Voice-Cloning | eaf5ec4467795344e7d9601515b017fd8c46e44b | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/439 | decoding error in preprocessing synthesizer | I get the following error while running `synthesizer_preprocess_audio.py`.
```
Arguments:
datasets_root: /home/amin/voice_cloning/libri_100
out_dir: /home/amin/voice_cloning/libri_100/SV2TTS/synthesizer
n_processes: None
skip_existing: True
hparams:
Using data from:
/home/amin/voice_cloning/libri_100/LibriSpeech/train-clean-100
LibriSpeech: 0%| | 0/502 [00:00<?, ?speakers/s]
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "/usr/lib/python3.6/multiprocessing/pool.py", line 119, in worker
result = (True, func(*args, **kwds))
File "/home/amin/voice_cloning/Real-Time-Voice-Cloning-master/synthesizer/preprocess.py", line 62, in preprocess_speaker
alignments = [line.rstrip().split(" ") for line in alignments_file]
File "/home/amin/voice_cloning/Real-Time-Voice-Cloning-master/synthesizer/preprocess.py", line 62, in <listcomp>
alignments = [line.rstrip().split(" ") for line in alignments_file]
File "/usr/lib/python3.6/codecs.py", line 321, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa2 in position 37: invalid start byte
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "synthesizer_preprocess_audio.py", line 52, in <module>
preprocess_librispeech(**vars(args))
File "/home/amin/voice_cloning/Real-Time-Voice-Cloning-master/synthesizer/preprocess.py", line 36, in preprocess_librispeech
for speaker_metadata in tqdm(job, "LibriSpeech", len(speaker_dirs), unit="speakers"):
File "/home/amin/.local/lib/python3.6/site-packages/tqdm/std.py", line 1130, in __iter__
for obj in iterable:
File "/usr/lib/python3.6/multiprocessing/pool.py", line 735, in next
raise value
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa2 in position 37: invalid start byte
```
Can anyone help? It can save a lot of time for me.
Thanks. | null | null | null | {'base_commit': 'eaf5ec4467795344e7d9601515b017fd8c46e44b', 'files': [{'path': 'synthesizer/preprocess.py', 'Loc': {"(None, 'preprocess_speaker', 54)": {'mod': [60]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"synthesizer/preprocess.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
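The traceback in the record above comes from reading an alignment file that is not valid UTF-8 (byte 0xa2 is a continuation byte, invalid as a character start). A common workaround is to decode with an explicit fallback so one bad byte does not kill preprocessing; this is a generic sketch, not the project's actual fix:

```python
def read_lines_tolerant(raw: bytes):
    """Decode as UTF-8, falling back to latin-1 (which never fails) on bad bytes."""
    try:
        text = raw.decode("utf-8")
    except UnicodeDecodeError:
        text = raw.decode("latin-1")  # maps every byte 1:1, so nothing is dropped
    return text.splitlines()

# b"\xa2" is the exact byte the traceback complains about.
lines = read_lines_tolerant(b'84-121123-0000 "ALSO,A\xa2"\n84-121123-0001 "GO"')
print(len(lines))  # 2
```

An alternative is `open(path, encoding="utf-8", errors="replace")`, which substitutes U+FFFD for undecodable bytes instead of switching encodings.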
CorentinJ | Real-Time-Voice-Cloning | 5425557efe30863267f805851f918124191e0be0 | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/629 | Error in macOS when trying to launch the toolbox | Traceback (most recent call last):
File "/Users/luke/Documents/Real-Time-Voice-Cloning-master/demo_toolbox.py", line 2, in <module>
from toolbox import Toolbox
File "/Users/luke/Documents/Real-Time-Voice-Cloning-master/toolbox/__init__.py", line 1, in <module>
from toolbox.ui import UI
File "/Users/luke/Documents/Real-Time-Voice-Cloning-master/toolbox/ui.py", line 6, in <module>
from encoder.inference import plot_embedding_as_heatmap
ModuleNotFoundError: No module named 'encoder.inference' | null | null | null | {'base_commit': '5425557efe30863267f805851f918124191e0be0', 'files': [{'path': 'encoder/inference.py', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"encoder/inference.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
CorentinJ | Real-Time-Voice-Cloning | 98d0ca4d4d140a4bb6bc7d54c84b1915a79041d5 | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1156 | missing SV2TTS/ | Hey, I'm trying to finetune the pretrained model but it looks like I am missing the SV2TTS/ directory which contains train.txt, etc.
I have a saved_models/ directory which has three *.pt files for the three components of this TTS architecture. | null | null | null | {'base_commit': '98d0ca4d4d140a4bb6bc7d54c84b1915a79041d5', 'files': [{'path': 'synthesizer_preprocess_audio.py', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "5",
"iss_reason": "4",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"synthesizer_preprocess_audio.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
CorentinJ | Real-Time-Voice-Cloning | e32cf8f4ddb63d9a7603eeb31f1855b54926aee6 | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/549 | Import Error | Hey, i am trying to run this code and everytime i run demo_toolbox.py there comes an error "failed to load qt binding" i tried reinstalling matplotlib and also tried installing PYQt5 .
Need Help !!!
| null | null | null | {'base_commit': 'e32cf8f4ddb63d9a7603eeb31f1855b54926aee6', 'files': [{'path': 'toolbox/ui.py', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"toolbox/ui.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
CorentinJ | Real-Time-Voice-Cloning | 8e6499b10d5a074bdfe8ee6db8eec60e1060ccc1 | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/117 | ModuleNotFoundError: No module named 'tensorflow.contrib.seq2seq' | When running demo_cli.py
Python = 3.7.4
TensorFlow = 2.0 RC
CUDA = 10.1
cuDNN = Installed for right CUDA version
Windows = 10 | null | null | null | {'base_commit': '8e6499b10d5a074bdfe8ee6db8eec60e1060ccc1', 'files': [{'path': 'requirements.txt', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [],
"doc": [],
"test": [],
"config": [
"requirements.txt"
],
"asset": []
} | null | |
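For the tensorflow.contrib record above: `tensorflow.contrib` was removed in TensorFlow 2.x, so this commit of the repo needs a 1.x install. A hedged requirements.txt fragment follows; the exact version bounds are an assumption (the project later dropped TensorFlow entirely), so check the repo's pinned requirements for your commit:

```text
# TF 1.x only: tensorflow.contrib does not exist in TF 2.x
tensorflow-gpu>=1.10.0,<2.0
```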
CorentinJ | Real-Time-Voice-Cloning | c5c2261c97afe6ec5db1ef389eba1257f6cce8a2 | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/275 | Speaker verification implementation | I need just the speaker verification part which is the implementation of [GENERALIZED END-TO-END LOSS FOR SPEAKER VERIFICATION](https://arxiv.org/pdf/1710.10467.pdf) paper, how I can proceed to get it please? | null | null | null | {'base_commit': 'c5c2261c97afe6ec5db1ef389eba1257f6cce8a2', 'files': [{'path': 'encoder/', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "5\n询问功能实现所在地",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [],
"doc": [],
"test": [],
"config": [],
"asset": [
"encoder/"
]
} | null | |
CorentinJ | Real-Time-Voice-Cloning | 7432046efc23cabf176f9fdc8d2fd67020059478 | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/855 | Output audio spectrum - low frequencies | Hi, I am trying to train a new model in Polish, but after 476k steps the output sound is very "robotic". I was trying to find out why this happens and noticed (based on my output and @blue-fish's samples: https://blue-fish.github.io/experiments/RTVC-FT-1.html) that the spectrum of this model doesn't include high frequencies compared to Google's. Both are in logarithmic scale.
Our output:
<img width="610" alt="Zrzut ekranu 2021-10-2 o 20 29 59" src="https://user-images.githubusercontent.com/6368894/135728051-397ec675-d2ac-4e5a-af89-a8e0fcef8ff7.png">
Google: (take a note its logarithmic scale)
<img width="610" alt="Zrzut ekranu 2021-10-2 o 20 30 30" src="https://user-images.githubusercontent.com/6368894/135728056-5a7b83dd-f228-4a4f-9dae-44ce86d1e2b1.png">
Do you have any idea how to improve this?
| null | null | null | {'base_commit': '7432046efc23cabf176f9fdc8d2fd67020059478', 'files': [{'path': 'synthesizer/hparams.py', 'Loc': {'(None, None, None)': {'mod': [77]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"synthesizer/hparams.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
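The record above localizes to `synthesizer/hparams.py`, which fits the missing-highs symptom: the highest frequency a mel spectrogram can represent is bounded by the Nyquist limit (sample_rate / 2) and by the `fmax` used when building the mel filterbank, so if `fmax` sits well below Nyquist the synthesizer never learns those bands. A quick check; the 16 kHz / fmax=7600 values are my reading of this repo's hparams, so verify them in your checkout:

```python
def spectrum_ceiling_hz(sample_rate, mel_fmax=None):
    """Highest frequency actually represented: min(Nyquist limit, mel filterbank fmax)."""
    nyquist = sample_rate / 2
    return nyquist if mel_fmax is None else min(nyquist, mel_fmax)

print(spectrum_ceiling_hz(16000))        # 8000.0
print(spectrum_ceiling_hz(16000, 7600))  # 7600
```

By contrast, a 22.05 kHz or 24 kHz pipeline can keep content up to ~11-12 kHz, which is part of why Google's samples look fuller in the high end.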
CorentinJ | Real-Time-Voice-Cloning | 98d0ca4d4d140a4bb6bc7d54c84b1915a79041d5 | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1122 | Requirements.txt failed to install with obscure issue with installing audioread | I ran into a few issues along the way that I was able to solve, namely errors like this:
WARNING: Failed to write executable - trying to use .deleteme logic
ERROR: Could not install packages due to an OSError: [WinError 2] The system cannot find the file specified:
'C:\\Python310\\Scripts\\f2py.exe' -> 'C:\\Python310\\Scripts\\f2py.exe.deleteme'
I fixed these by adding `--user` to the pip command.
I also had to change requirements.txt to a newer version of numpy (1.22.1) to prevent it from failing to install due to older versions not being compatible with the version of Python I already have installed (3.10.6)
But now I'm stuck on this one:
Requirement already satisfied: jsonpointer>=1.9 in c:\users\michael\appdata\roaming\python\python310\site-packages (from jsonpatch->visdom==0.1.8.9->-r R:\requirements.txt (line 15)) (2.3)
Using legacy 'setup.py install' for umap-learn, since package 'wheel' is not installed.
Using legacy 'setup.py install' for visdom, since package 'wheel' is not installed.
Using legacy 'setup.py install' for audioread, since package 'wheel' is not installed.
Using legacy 'setup.py install' for pynndescent, since package 'wheel' is not installed.
Installing collected packages: audioread, visdom, SoundFile, sounddevice, scikit-learn, resampy, pooch, matplotlib, pynndescent, librosa, umap-learn
Running setup.py install for audioread ... error
error: subprocess-exited-with-error
× Running setup.py install for audioread did not run successfully.
│ exit code: 1
╰─> [40 lines of output]
C:\Users\michael\AppData\Local\Temp\pip-install-nat_itg2\audioread_fa5fbfcd88364fc89c7b2a9e454b5549\setup.py:17: DeprecationWarning: the imp module is deprecated in favour of importlib and slated for removal in Python 3.12; see the module's documentation for alternative uses
import imp
running install
C:\Users\michael\AppData\Roaming\Python\Python310\site-packages\setuptools\command\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
Traceback (most recent call last):
File "C:\Users\michael\AppData\Roaming\Python\Python310\site-packages\setuptools\_distutils\util.py", line 258, in subst_vars
return _subst_compat(s).format_map(lookup)
KeyError: 'py_version_nodot_plat'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "C:\Users\michael\AppData\Local\Temp\pip-install-nat_itg2\audioread_fa5fbfcd88364fc89c7b2a9e454b5549\setup.py", line 27, in <module>
setup(name='audioread',
File "C:\Users\michael\AppData\Roaming\Python\Python310\site-packages\setuptools\__init__.py", line 153, in setup
return distutils.core.setup(**attrs)
File "C:\Users\michael\AppData\Roaming\Python\Python310\site-packages\setuptools\_distutils\core.py", line 148, in setup
return run_commands(dist)
File "C:\Users\michael\AppData\Roaming\Python\Python310\site-packages\setuptools\_distutils\core.py", line 163, in run_commands
dist.run_commands()
File "C:\Users\michael\AppData\Roaming\Python\Python310\site-packages\setuptools\_distutils\dist.py", line 967, in run_commands
self.run_command(cmd)
File "C:\Users\michael\AppData\Roaming\Python\Python310\site-packages\setuptools\_distutils\dist.py", line 985, in run_command
cmd_obj.ensure_finalized()
File "C:\Users\michael\AppData\Roaming\Python\Python310\site-packages\setuptools\_distutils\cmd.py", line 107, in ensure_finalized
self.finalize_options()
File "C:\Users\michael\AppData\Roaming\Python\Python310\site-packages\setuptools\command\install.py", line 45, in finalize_options
orig.install.finalize_options(self)
File "C:\Users\michael\AppData\Roaming\Python\Python310\site-packages\setuptools\_distutils\command\install.py", line 381, in finalize_options
self.expand_dirs()
File "C:\Users\michael\AppData\Roaming\Python\Python310\site-packages\setuptools\_distutils\command\install.py", line 563, in expand_dirs
self._expand_attrs(['install_purelib', 'install_platlib',
File "C:\Users\michael\AppData\Roaming\Python\Python310\site-packages\setuptools\_distutils\command\install.py", line 553, in _expand_attrs
val = subst_vars(val, self.config_vars)
File "C:\Users\michael\AppData\Roaming\Python\Python310\site-packages\setuptools\_distutils\util.py", line 260, in subst_vars
raise ValueError(f"invalid variable {var}")
ValueError: invalid variable 'py_version_nodot_plat'
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure
× Encountered error while trying to install package.
╰─> audioread
note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.
I'm not sure if the issue is due to "setup.py install" being deprecated; if that's the case I have no idea what the fix is because I think this is being required somewhere else - maybe another package needs a newer version? But I have no idea which one.
I also thought maybe it could be that wheel wasn't installed, `since package 'wheel' is not installed.` but when I try to install it, it says it's already installed:
C:\> pip install wheel --user
Requirement already satisfied: wheel in c:\python310\lib\site-packages (0.37.1)
There's also the invalid variable error, but I have no idea what this is talking about. | null | null | null | {'base_commit': '98d0ca4d4d140a4bb6bc7d54c84b1915a79041d5', 'files': [{'path': 'requirements.txt', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Doc\n依赖声明"
} | {
"code": [],
"doc": [],
"test": [],
"config": [
"requirements.txt"
],
"asset": []
} | null | |
CorentinJ | Real-Time-Voice-Cloning | 95adc699c1deb637f485e85a5107d40da0ad94fc | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/717 | I can't use Dataset/Speaker/Utterance | I can't use the upper section in the software. when loading it shows:
Warning: you did not pass a root directory for datasets as argument.
How can I fix this?
Thank you
| null | null | null | {'base_commit': '95adc699c1deb637f485e85a5107d40da0ad94fc', 'files': [{'path': 'demo_toolbox.py', 'Loc': {'(None, None, None)': {'mod': [15]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "2\nwarning",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"demo_toolbox.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
CorentinJ | Real-Time-Voice-Cloning | 039f7e5402e6d9da7fad5022dae038cdfb507b39 | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/13 | problem with utils.argutils in python 3.6 | Hi under win 10 64 bits trying using python 3.6 it failed to import the print_args wiht the fact that he can't find the argutils.
think i have a relative import error but can't solve it
btw nice job on what i heard on the youtube demo
if i mnaully try to import the utils from the root dir seems he load another utils files
| null | null | null | {'base_commit': '039f7e5402e6d9da7fad5022dae038cdfb507b39', 'files': [{'path': 'synthesizer/__init__.py', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"synthesizer/__init__.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
CorentinJ | Real-Time-Voice-Cloning | 7432046efc23cabf176f9fdc8d2fd67020059478 | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/884 | Using a different speaker encoder | Hello, I really appreciate the work on display here. I was just wondering if I could use a different speaker encoder. If someone used a different encoder, could you explain the difficulties of replacing the encoder and how the results were different from the speaker encoder already in use? | null | null | null | {'base_commit': '7432046efc23cabf176f9fdc8d2fd67020059478', 'files': [{'path': 'toolbox/__init__.py', 'Loc': {"('Toolbox', 'add_real_utterance', 182)": {'mod': [191]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "5",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"toolbox/__init__.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
CorentinJ | Real-Time-Voice-Cloning | a32962bb7b4827660646ac6dabf62309aea08a91 | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/488 | preprocessing VoxCele2 is not working | While running encoder_preprocess on voxceleb2 dataset, I'm getting the following warning and nothing else happens. Not sure why?
```
raw: Preprocessing data for 5994 speakers.
raw: 0%| | 0/5994 [00:00<?, ?speakers/s]
/home/amin/.local/lib/python3.6/site-packages/librosa/core/audio.py:161: UserWarning: PySoundFile failed. Trying audioread instead.
warnings.warn('PySoundFile failed. Trying audioread instead.')
/home/amin/.local/lib/python3.6/site-packages/librosa/core/audio.py:161: UserWarning: PySoundFile failed. Trying audioread instead.
warnings.warn('PySoundFile failed. Trying audioread instead.')
``` | null | null | null | {'base_commit': 'a32962bb7b4827660646ac6dabf62309aea08a91', 'files': [{'path': 'encoder/preprocess.py', 'Loc': {"(None, 'preprocess_voxceleb2', 164)": {'mod': []}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"encoder/preprocess.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
CorentinJ | Real-Time-Voice-Cloning | 0713f860a3dd41afb56e83cff84dbdf589d5e11a | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1065 | vocoder_dataset.py ValueError | I am trying to use the Librispeech dataset to train the vocoder.
And I got a ValueError while training.
```numpy.random._bounded_integers._rand_int32 ValueError: low >= high```
It occurs in line 61 of vocoder_dataset.py,
```mel_offsets = [np.random.randint(0, offset) for offset in max_offsets]```
So I assume there is something wrong with the value of offset, e.g. offset=0, so np.random.randint could not generate a number in [0, 0)?
Did anyone encounter this problem too? | null | null | null | {'base_commit': '0713f860a3dd41afb56e83cff84dbdf589d5e11a', 'files': [{'path': 'synthesizer/hparams.py', 'Loc': {'(None, None, None)': {'mod': [88]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"synthesizer/hparams.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
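The `ValueError` in the record above comes from `np.random.randint(0, offset)` when `offset == 0`. A minimal sketch of one possible guard for that empty range; the `max_offsets` values here are invented for illustration:

```python
import numpy as np

# Invented illustrative values: an offset of 0 occurs when a mel spectrogram
# is shorter than the sampled window, which is the failure mode in the issue.
max_offsets = [0, 5, 12]

# np.random.randint(low, high) requires low < high, so offset == 0 raises
# "ValueError: low >= high". Clamping the upper bound to at least 1 keeps the
# range non-empty; a zero offset then always yields 0.
mel_offsets = [np.random.randint(0, max(1, offset)) for offset in max_offsets]
```

Note the record's file list points at `synthesizer/hparams.py`, so this clamp is only one way to avoid the empty range, not necessarily the repository's actual change.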
CorentinJ | Real-Time-Voice-Cloning | 5425557efe30863267f805851f918124191e0be0 | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/651 | Resource exhausted: OOM when allocating tensor with shape[36,512,1,702] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc | Hello.
Please help me, I do not know how to solve my problem.
I ran these and they completed without errors:
`python synthesizer_preprocess_audio.py <datasets_root>`
`python synthesizer_preprocess_embeds.py <datasets_root>/SV2TTS/synthesizer`
but after typing `python synthesizer_train.py my_run <datasets_root>/SV2TTS/synthesizer`
shows me a long error
```
Arguments:
name: my_run
synthesizer_root: C:\Users\matve\Documents\Tacotron\datasets\SV2TTS\synthesizer
models_dir: synthesizer/saved_models/
mode: synthesis
GTA: True
restore: True
summary_interval: 2500
embedding_interval: 10000
checkpoint_interval: 2000
eval_interval: 100000
tacotron_train_steps: 2000000
tf_log_level: 1
slack_url: None
hparams:
Checkpoint path: synthesizer/saved_models/logs-my_run\taco_pretrained\tacotron_model.ckpt
Loading training data from: C:\Users\matve\Documents\Tacotron\datasets\SV2TTS\synthesizer\train.txt
Using model: Tacotron
Hyperparameters:
allow_clipping_in_normalization: True
attention_dim: 128
attention_filters: 32
attention_kernel: (31,)
cbhg_conv_channels: 128
cbhg_highway_units: 128
cbhg_highwaynet_layers: 4
cbhg_kernels: 8
cbhg_pool_size: 2
cbhg_projection: 256
cbhg_projection_kernel_size: 3
cbhg_rnn_units: 128
cleaners: english_cleaners
clip_for_wavenet: True
clip_mels_length: True
cross_entropy_pos_weight: 20
cumulative_weights: True
decoder_layers: 2
decoder_lstm_units: 1024
embedding_dim: 512
enc_conv_channels: 512
enc_conv_kernel_size: (5,)
enc_conv_num_layers: 3
encoder_lstm_units: 256
fmax: 7600
fmin: 55
frame_shift_ms: None
griffin_lim_iters: 60
hop_size: 200
mask_decoder: False
mask_encoder: True
max_abs_value: 4.0
max_iters: 2000
max_mel_frames: 900
min_level_db: -100
n_fft: 800
natural_eval: False
normalize_for_wavenet: True
num_mels: 80
outputs_per_step: 2
postnet_channels: 512
postnet_kernel_size: (5,)
postnet_num_layers: 5
power: 1.5
predict_linear: False
preemphasis: 0.97
preemphasize: True
prenet_layers: [256, 256]
ref_level_db: 20
rescale: True
rescaling_max: 0.9
sample_rate: 16000
signal_normalization: True
silence_min_duration_split: 0.4
silence_threshold: 2
smoothing: False
speaker_embedding_size: 256
split_on_cpu: True
stop_at_any: True
symmetric_mels: True
tacotron_adam_beta1: 0.9
tacotron_adam_beta2: 0.999
tacotron_adam_epsilon: 1e-06
tacotron_batch_size: 36
tacotron_clip_gradients: True
tacotron_data_random_state: 1234
tacotron_decay_learning_rate: True
tacotron_decay_rate: 0.5
tacotron_decay_steps: 50000
tacotron_dropout_rate: 0.5
tacotron_final_learning_rate: 1e-05
tacotron_gpu_start_idx: 0
tacotron_initial_learning_rate: 0.001
tacotron_num_gpus: 1
tacotron_random_seed: 5339
tacotron_reg_weight: 1e-07
tacotron_scale_regularization: False
tacotron_start_decay: 50000
tacotron_swap_with_cpu: False
tacotron_synthesis_batch_size: 128
tacotron_teacher_forcing_decay_alpha: 0.0
tacotron_teacher_forcing_decay_steps: 280000
tacotron_teacher_forcing_final_ratio: 0.0
tacotron_teacher_forcing_init_ratio: 1.0
tacotron_teacher_forcing_mode: constant
tacotron_teacher_forcing_ratio: 1.0
tacotron_teacher_forcing_start_decay: 10000
tacotron_test_batches: None
tacotron_test_size: 0.05
tacotron_zoneout_rate: 0.1
train_with_GTA: False
trim_fft_size: 512
trim_hop_size: 128
trim_top_db: 23
use_lws: False
utterance_min_duration: 1.6
win_size: 800
Loaded metadata for 290550 examples (366.70 hours)
initialisation done /gpu:0
Initialized Tacotron model. Dimensions (? = dynamic shape):
Train mode: True
Eval mode: False
GTA mode: False
Synthesis mode: False
Input: (?, ?)
device: 0
embedding: (?, ?, 512)
enc conv out: (?, ?, 512)
encoder out (cond): (?, ?, 768)
decoder out: (?, ?, 80)
residual out: (?, ?, 512)
projected residual out: (?, ?, 80)
mel out: (?, ?, 80)
<stop_token> out: (?, ?)
Tacotron Parameters 28.439 Million.
initialisation done /gpu:0
Initialized Tacotron model. Dimensions (? = dynamic shape):
Train mode: False
Eval mode: True
GTA mode: False
Synthesis mode: False
Input: (?, ?)
device: 0
embedding: (?, ?, 512)
enc conv out: (?, ?, 512)
encoder out (cond): (?, ?, 768)
decoder out: (?, ?, 80)
residual out: (?, ?, 512)
projected residual out: (?, ?, 80)
mel out: (?, ?, 80)
<stop_token> out: (?, ?)
Tacotron Parameters 28.439 Million.
Tacotron training set to a maximum of 2000000 steps
Loading checkpoint synthesizer/saved_models/logs-my_run\taco_pretrained\tacotron_model.ckpt-0
Generated 64 train batches of size 36 in 3.626 sec
Step 1 [5.798 sec/step, loss=14.85899, avg_loss=14.85899]
Step 1 [5.798 sec/step, loss=14.85899, avg_loss=14.85899]
Saving Model Character Embeddings visualization..
Tacotron Character embeddings have been updated on tensorboard!
Step 2 [3.362 sec/step, loss=11.10468, avg_loss=12.98183]
Step 2 [3.362 sec/step, loss=11.10468, avg_loss=12.98183]
Generated 403 test batches of size 36 in 15.574 sec
Exiting due to exception: 2 root error(s) found.
(0) Resource exhausted: OOM when allocating tensor with shape[36,512,1,702] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node Tacotron_model/inference/postnet_convolutions/conv_layer_1_postnet_convolutions/conv1d/conv1d (defined at e:\ProgramData\Miniconda3\lib\site-packages\tensorflow_core\python\framework\ops.py:1748) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
[[Tacotron_model/clip_by_global_norm/mul_30/_479]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
(1) Resource exhausted: OOM when allocating tensor with shape[36,512,1,702] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node Tacotron_model/inference/postnet_convolutions/conv_layer_1_postnet_convolutions/conv1d/conv1d (defined at e:\ProgramData\Miniconda3\lib\site-packages\tensorflow_core\python\framework\ops.py:1748) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
0 successful operations.
0 derived errors ignored.
Original stack trace for 'Tacotron_model/inference/postnet_convolutions/conv_layer_1_postnet_convolutions/conv1d/conv1d':
File "synthesizer_train.py", line 55, in <module>
tacotron_train(args, log_dir, hparams)
File "C:\Users\matve\Documents\Tacotron\Real-Time-Voice-Cloning\synthesizer\train.py", line 392, in tacotron_train
return train(log_dir, args, hparams)
File "C:\Users\matve\Documents\Tacotron\Real-Time-Voice-Cloning\synthesizer\train.py", line 148, in train
model, stats = model_train_mode(args, feeder, hparams, global_step)
File "C:\Users\matve\Documents\Tacotron\Real-Time-Voice-Cloning\synthesizer\train.py", line 91, in model_train_mode
is_training=True, split_infos=feeder.split_infos)
File "C:\Users\matve\Documents\Tacotron\Real-Time-Voice-Cloning\synthesizer\models\tacotron.py", line 230, in initialize
residual = postnet(decoder_output)
File "C:\Users\matve\Documents\Tacotron\Real-Time-Voice-Cloning\synthesizer\models\modules.py", line 406, in __call__
"conv_layer_{}_".format(i + 1) + self.scope)
File "C:\Users\matve\Documents\Tacotron\Real-Time-Voice-Cloning\synthesizer\models\modules.py", line 420, in conv1d
padding="same")
File "e:\ProgramData\Miniconda3\lib\site-packages\tensorflow_core\python\util\deprecation.py", line 324, in new_func
return func(*args, **kwargs)
File "e:\ProgramData\Miniconda3\lib\site-packages\tensorflow_core\python\layers\convolutional.py", line 218, in conv1d
return layer.apply(inputs)
File "e:\ProgramData\Miniconda3\lib\site-packages\tensorflow_core\python\util\deprecation.py", line 324, in new_func
return func(*args, **kwargs)
File "e:\ProgramData\Miniconda3\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py", line 1700, in apply
return self.__call__(inputs, *args, **kwargs)
File "e:\ProgramData\Miniconda3\lib\site-packages\tensorflow_core\python\layers\base.py", line 548, in __call__
outputs = super(Layer, self).__call__(inputs, *args, **kwargs)
File "e:\ProgramData\Miniconda3\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py", line 854, in __call__
outputs = call_fn(cast_inputs, *args, **kwargs)
File "e:\ProgramData\Miniconda3\lib\site-packages\tensorflow_core\python\autograph\impl\api.py", line 234, in wrapper
return converted_call(f, options, args, kwargs)
File "e:\ProgramData\Miniconda3\lib\site-packages\tensorflow_core\python\autograph\impl\api.py", line 439, in converted_call
return _call_unconverted(f, args, kwargs, options)
File "e:\ProgramData\Miniconda3\lib\site-packages\tensorflow_core\python\autograph\impl\api.py", line 330, in _call_unconverted
return f(*args, **kwargs)
File "e:\ProgramData\Miniconda3\lib\site-packages\tensorflow_core\python\keras\layers\convolutional.py", line 387, in call
return super(Conv1D, self).call(inputs)
File "e:\ProgramData\Miniconda3\lib\site-packages\tensorflow_core\python\keras\layers\convolutional.py", line 197, in call
outputs = self._convolution_op(inputs, self.kernel)
File "e:\ProgramData\Miniconda3\lib\site-packages\tensorflow_core\python\ops\nn_ops.py", line 1134, in __call__
return self.conv_op(inp, filter)
File "e:\ProgramData\Miniconda3\lib\site-packages\tensorflow_core\python\ops\nn_ops.py", line 639, in __call__
return self.call(inp, filter)
File "e:\ProgramData\Miniconda3\lib\site-packages\tensorflow_core\python\ops\nn_ops.py", line 238, in __call__
name=self.name)
File "e:\ProgramData\Miniconda3\lib\site-packages\tensorflow_core\python\ops\nn_ops.py", line 227, in _conv1d
name=name)
File "e:\ProgramData\Miniconda3\lib\site-packages\tensorflow_core\python\util\deprecation.py", line 574, in new_func
return func(*args, **kwargs)
File "e:\ProgramData\Miniconda3\lib\site-packages\tensorflow_core\python\util\deprecation.py", line 574, in new_func
return func(*args, **kwargs)
File "e:\ProgramData\Miniconda3\lib\site-packages\tensorflow_core\python\ops\nn_ops.py", line 1681, in conv1d
name=name)
File "e:\ProgramData\Miniconda3\lib\site-packages\tensorflow_core\python\ops\gen_nn_ops.py", line 1071, in conv2d
data_format=data_format, dilations=dilations, name=name)
File "e:\ProgramData\Miniconda3\lib\site-packages\tensorflow_core\python\framework\op_def_library.py", line 794, in _apply_op_helper
op_def=op_def)
File "e:\ProgramData\Miniconda3\lib\site-packages\tensorflow_core\python\util\deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "e:\ProgramData\Miniconda3\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3357, in create_op
attrs, op_def, compute_device)
File "e:\ProgramData\Miniconda3\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3426, in _create_op_internal
op_def=op_def)
File "e:\ProgramData\Miniconda3\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1748, in __init__
self._traceback = tf_stack.extract_stack()
2021-02-05 20:02:33.232435: W tensorflow/core/kernels/queue_base.cc:277] _1_datafeeder/eval_queue: Skipping cancelled enqueue attempt with queue not closed
2021-02-05 20:02:33.232577: W tensorflow/core/kernels/queue_base.cc:277] _0_datafeeder/input_queue: Skipping cancelled enqueue attempt with queue not closed
```
I think it can't use the memory of my GTX 1660 Super. Tell the noob what to do.
| null | null | null | {'base_commit': '5425557efe30863267f805851f918124191e0be0', 'files': [{'path': 'synthesizer/hparams.py', 'Loc': {'(None, None, None)': {'mod': [243]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"synthesizer/hparams.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
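The usual mitigation for the OOM in the record above is lowering the synthesizer batch size; the hparams dump in the log shows `tacotron_batch_size: 36`. A hedged sketch using a stand-in container (the real project keeps these values in `synthesizer/hparams.py`, and 12 is just an example value, not the repository's chosen default):

```python
# Stand-in for the project's hyperparameter object; only the one field that
# matters for this OOM is modeled here.
class HParams:
    def __init__(self):
        self.tacotron_batch_size = 36  # default shown in the crash log

hparams = HParams()

# Reducing the batch size shrinks activations such as the [36, 512, 1, 702]
# tensor the allocator failed on, at the cost of slower training on a
# 6 GB card like the GTX 1660 Super.
hparams.tacotron_batch_size = 12
```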
CorentinJ | Real-Time-Voice-Cloning | 77c0bd169d8158ed1cdb180cda73c24d3cacd778 | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1263 | Python 3.10.12 is not supported | When I ran python3.10 -m pip install numpy==1.20.3 on linux mint, I got an error while I was trying to install it. But it was totally fine when I used python3.8

| null | null | null | {'base_commit': '77c0bd169d8158ed1cdb180cda73c24d3cacd778', 'files': [{'path': 'requirements.txt', 'Loc': {'(None, None, None)': {'mod': [4]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
    "info_type": "Doc\nDependency declaration"
} | {
"code": [],
"doc": [],
"test": [],
"config": [
"requirements.txt"
],
"asset": []
} | null | |
CorentinJ | Real-Time-Voice-Cloning | c5c2261c97afe6ec5db1ef389eba1257f6cce8a2 | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/250 | [Errno 2] No such file or directory: 'encoder/_sources.txt' | I have this problem, but I can't understand what does this file contain? There is not _sources.txt in this repo | null | null | null | {'base_commit': 'c5c2261c97afe6ec5db1ef389eba1257f6cce8a2', 'files': [{'path': 'encoder_preprocess.py', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"encoder_preprocess.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
CorentinJ | Real-Time-Voice-Cloning | 5e400d474043044ba0e3e907a74b4baccb16ee7c | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/425 | Tensorflow.contrib file missing what to do | null | null | null | {'base_commit': '5e400d474043044ba0e3e907a74b4baccb16ee7c', 'files': [{'path': 'README.md', 'Loc': {'(None, None, 35)': {'mod': [35]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
    "loc_scope": "0\nand\n2\nThe guidance here is the doc\nThe root cause is the version of the dependency library",
"info_type": "Doc"
} | {
"code": [],
"doc": [
"README.md"
],
"test": [],
"config": [],
"asset": []
} | null | ||
CorentinJ | Real-Time-Voice-Cloning | 9553eaa1748cf94814be322ec7b096d2d6bc7f28 | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/419 | Getting an exception when browsing for files | For some reason, importing mp3 files is not working. Anyone got an idea on why this might be the case.? | null | null | null | {'base_commit': '9553eaa1748cf94814be322ec7b096d2d6bc7f28', 'files': [{'path': 'README.md', 'Loc': {'(None, None, 40)': {'mod': [40]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Doc"
} | {
"code": [],
"doc": [
"README.md"
],
"test": [],
"config": [],
"asset": []
} | null | |
CorentinJ | Real-Time-Voice-Cloning | c5c2261c97afe6ec5db1ef389eba1257f6cce8a2 | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/221 | A couple inquiries about the colab version | So I have a setup using a copy of the colaboratory version, but I want to be able to generate a few sentences at a time without having to generate per sentence.
I understand that commas and periods don't work, but in the demonstration video it was mentioned that line breaks are a way to get around this for now... however that's done in the toolbox application. How would it be done in code?
I've tried \n, but I assume that's only for print-related arguments... I'm fairly new to Python, so excuse my ignorance.
On top of this, how could I improve the voice in colab? In regards to training, it's mentioned that a decent session requires around 500gb or more... since I don't exactly have that in colab, is there another way to go about doing this?
I've tried the code with the input being longer than 10 seconds, but apparently if the input is more than 10 seconds or so the voice seems more jittery than it would be if it were capped at 10 seconds. I absolutely applaud this repo but I just really need to understand it a bit better... Thanks in advance. | null | null | null | {'base_commit': 'c5c2261c97afe6ec5db1ef389eba1257f6cce8a2', 'files': [{'path': 'toolbox/__init__.py', 'Loc': {"('Toolbox', 'synthesize', 158)": {'mod': [170, 171, 172, 173, 174, 175]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"toolbox/__init__.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
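On the line-break question in the record above: in code, the equivalent is splitting the input on `\n` and synthesizing each chunk separately. A sketch under the assumption that `synthesize_spectrograms` accepts a list of texts with one embedding per text, as the project's `Synthesizer` API does; the synthesizer call is left commented since it needs a loaded model:

```python
# The toolbox treats each line of the input box as a separate utterance; in a
# script, the same effect is a plain split on newlines.
text = "This is the first chunk\nThis is the second chunk"
texts = text.split("\n")

# With a loaded model and a computed speaker embedding `embed`, each chunk
# would get its own copy of the embedding (call shape assumed, not verified):
# specs = synthesizer.synthesize_spectrograms(texts, [embed] * len(texts))

print(texts)
```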
CorentinJ | Real-Time-Voice-Cloning | c5c2261c97afe6ec5db1ef389eba1257f6cce8a2 | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/225 | Not code-savvy but want to experiment with code | I have Python Spyder downloaded, but I do not know much about coding, or how to get to the stage where I can add audio and synthesize it. What would you recommend? | null | null | null | {'base_commit': 'c5c2261c97afe6ec5db1ef389eba1257f6cce8a2', 'files': [{'path': 'requirements.txt', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [],
"doc": [],
"test": [],
"config": [
"requirements.txt"
],
"asset": []
} | null | |
CorentinJ | Real-Time-Voice-Cloning | 070a3c187f87136ebe92aa72766f8343772d414e | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/378 | I can't install NVIDIA CUDA | I can't install NVIDIA CUDA even though I followed everything that [this guide](https://poorlydocumented.com/2019/11/installing-corentinjs-real-time-voice-cloning-project-on-windows-10-from-scratch/l) told me to do. I have also tried searching for this problem on the internet, but none of the results solve my problem. I have also provided an image of the error [here](https://imgur.com/a/fYkiBYQ).
| null | null | null | {'base_commit': '070a3c187f87136ebe92aa72766f8343772d414e', 'files': [{'path': 'demo_cli.py', 'Loc': {'(None, None, None)': {'mod': [34]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"demo_cli.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
CorentinJ | Real-Time-Voice-Cloning | 9553eaa1748cf94814be322ec7b096d2d6bc7f28 | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/420 | New Audio Issue: Assertion Failed | This was working fine yesterday, and no big changes were made.
However, starting up the demo toolbox today produced:
Assertion failed!
Program: C:\Users\paul1\AppData\Local\Programs\Python\Python37\python.exe
File: src/hostapi/wdmks/pa_win_wdmks.c, Line 1061
Expression: FALSE
I have tried reinstalling Visual Studio as well, but to no avail. Any thoughts on this would be deeply appreciated.
| null | null | null | {} | [] | [] | [
{
"pro": "sounddevice"
}
] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "2",
"info_type": "库"
} | {
"code": [],
"doc": [],
"test": [],
"config": [],
"asset": [
"sounddevice"
]
} | null | |
AUTOMATIC1111 | stable-diffusion-webui | 39827a3998afa3ea612e7cc9a475085765d4d509 | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/5134 | asking-for-help-with-local-system-issues | [Bug]: No checkpoints found. Can't run without a checkpoint. | ### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What happened?
During the installation (windows), an error occurs :
```
venv "G:\Dev\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.8 (tags/v3.10.8:aaaf517, Oct 11 2022, 16:50:30) [MSC v.1933 64 bit (AMD64)]
Commit hash: 9e78d2c419732711e984c4478af15ece121d64fd
Installing requirements for Web UI
Launching Web UI with arguments:
No module 'xformers'. Proceeding without it.
No checkpoints found. When searching for checkpoints, looked at:
- file G:\Dev\stable-diffusion-webui\model.ckpt
- directory G:\Dev\stable-diffusion-webui\models\Stable-diffusion
Can't run without a checkpoint. Find and place a .ckpt file into any of those locations. The program will exit.
```
### Steps to reproduce the problem
Launch webui-user.bat
### What should have happened?
Installation complete
### Commit where the problem happens
9e78d2c419732711e984c4478af15ece121d64fd
### What platforms do you use to access UI ?
Windows
### What browsers do you use to access the UI ?
Google Chrome
### Command Line Arguments
_No response_
### Additional information, context and logs
_No response_ | null | null | null | {'base_commit': '39827a3998afa3ea612e7cc9a475085765d4d509', 'files': [{'path': 'modules/sd_models.py', 'Loc': {"(None, 'load_model', 230)": {'mod': []}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Config"
} | {
"code": [
"modules/sd_models.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
AUTOMATIC1111 | stable-diffusion-webui | fab73f2e7d388ca99cdb3d5de7f36c0b9a1a3b1c | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/11458 | bug-report | [Bug]: ModuleNotFoundError: No module named 'pytorch_lightning.utilities.distributed' | ### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What happened?
Launching Web UI with arguments: --share --disable-safe-unpickle --no-half-vae --xformers --enable-insecure-extension --gradio-queue
2023-06-27 13:53:22.297173: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-06-27 13:53:23.287285: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
╭───────────────────── Traceback (most recent call last) ──────────────────────╮
│ /content/microsoftexcel/launch.py:38 in <module> │
│ │
│ 35 │
│ 36 │
│ 37 if __name__ == "__main__": │
│ ❱ 38 │ main() │
│ 39 │
│ │
│ /content/microsoftexcel/launch.py:34 in main │
│ │
│ 31 │ if args.test_server: │
│ 32 │ │ configure_for_tests() │
│ 33 │ │
│ ❱ 34 │ start() │
│ 35 │
│ 36 │
│ 37 if __name__ == "__main__": │
│ │
│ /content/microsoftexcel/modules/launch_utils.py:340 in start │
│ │
│ 337 │
│ 338 def start(): │
│ 339 │ print(f"Launching {'API server' if '--nowebui' in sys.argv else 'W │
│ ❱ 340 │ import webui │
│ 341 │ if '--nowebui' in sys.argv: │
│ 342 │ │ webui.api_only() │
│ 343 │ else: │
│ │
│ /content/microsoftexcel/webui.py:42 in <module> │
│ │
│ 39 startup_timer.record("import ldm") │
│ 40 │
│ 41 from modules import extra_networks │
│ ❱ 42 from modules.call_queue import wrap_gradio_gpu_call, wrap_queued_call, │
│ 43 │
│ 44 # Truncate version number of nightly/local build of PyTorch to not cau │
│ 45 if ".dev" in torch.__version__ or "+git" in torch.__version__: │
│ │
│ /content/microsoftexcel/modules/call_queue.py:5 in <module> │
│ │
│ 2 import threading │
│ 3 import time │
│ 4 │
│ ❱ 5 from modules import shared, progress, errors │
│ 6 │
│ 7 queue_lock = threading.Lock() │
│ 8 │
│ │
│ /content/microsoftexcel/modules/shared.py:18 in <module> │
│ │
│ 15 import modules.devices as devices │
│ 16 from modules import localization, script_loading, errors, ui_component │
│ 17 from modules.paths_internal import models_path, script_path, data_path │
│ ❱ 18 from ldm.models.diffusion.ddpm import LatentDiffusion │
│ 19 from typing import Optional │
│ 20 │
│ 21 demo = None │
│ │
│ /content/microsoftexcel/repositories/stable-diffusion-stability-ai/ldm/model │
│ s/diffusion/ddpm.py:20 in <module> │
│ │
│ 17 import itertools │
│ 18 from tqdm import tqdm │
│ 19 from torchvision.utils import make_grid │
│ ❱ 20 from pytorch_lightning.utilities.distributed import rank_zero_only │
│ 21 from omegaconf import ListConfig │
│ 22 │
│ 23 from ldm.util import log_txt_as_img, exists, default, ismap, isimage, │
╰──────────────────────────────────────────────────────────────────────────────╯
ModuleNotFoundError: No module named 'pytorch_lightning.utilities.distributed'
### Steps to reproduce the problem
1. on colab
2. try to use the new 1.4.0 release
3. error
### What should have happened?
no error
### Version or Commit where the problem happens
1.4.0
### What Python version are you running on ?
None
### What platforms do you use to access the UI ?
Other/Cloud
### What device are you running WebUI on?
_No response_
### Cross attention optimization
Automatic
### What browsers do you use to access the UI ?
Google Chrome
### Command Line Arguments
```Shell
!COMMANDLINE_ARGS="--share --disable-safe-unpickle --no-half-vae --xformers --enable-insecure-extension --gradio-queue" REQS_FILE="requirements.txt" python launch.py
```
### List of extensions
sd-webui-tunnels
controlnet
openpose-editor
posex
a1111-sd-webui-tagcomplete
supermerger
ultimate-upscale-for-automatic1111
a111 locon extension
images browser
### Console logs
```Shell
**truncated on colab**
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 1217 100 1217 0 0 3699 0 --:--:-- --:--:-- --:--:-- 3699
100 1722k 100 1722k 0 0 670k 0 0:00:02 0:00:02 --:--:-- 1355k
Archive: /content/microsoftexcel.zip
creating: microsoftexcel/
inflating: microsoftexcel/.eslintignore
inflating: microsoftexcel/.eslintrc.js
inflating: microsoftexcel/.git-blame-ignore-revs
creating: microsoftexcel/.github/
creating: microsoftexcel/.github/ISSUE_TEMPLATE/
inflating: microsoftexcel/.github/ISSUE_TEMPLATE/bug_report.yml
inflating: microsoftexcel/.github/ISSUE_TEMPLATE/config.yml
inflating: microsoftexcel/.github/ISSUE_TEMPLATE/feature_request.yml
inflating: microsoftexcel/.github/pull_request_template.md
creating: microsoftexcel/.github/workflows/
inflating: microsoftexcel/.github/workflows/on_pull_request.yaml
inflating: microsoftexcel/.github/workflows/run_tests.yaml
inflating: microsoftexcel/.gitignore
inflating: microsoftexcel/.pylintrc
inflating: microsoftexcel/CHANGELOG.md
inflating: microsoftexcel/CODEOWNERS
creating: microsoftexcel/configs/
inflating: microsoftexcel/configs/alt-diffusion-inference.yaml
inflating: microsoftexcel/configs/instruct-pix2pix.yaml
inflating: microsoftexcel/configs/v1-inference.yaml
inflating: microsoftexcel/configs/v1-inpainting-inference.yaml
creating: microsoftexcel/embeddings/
extracting: microsoftexcel/embeddings/Place Textual Inversion embeddings here.txt
inflating: microsoftexcel/environment-wsl2.yaml
creating: microsoftexcel/extensions/
extracting: microsoftexcel/extensions/put extensions here.txt
creating: microsoftexcel/extensions-builtin/
creating: microsoftexcel/extensions-builtin/canvas-zoom-and-pan/
creating: microsoftexcel/extensions-builtin/canvas-zoom-and-pan/javascript/
inflating: microsoftexcel/extensions-builtin/canvas-zoom-and-pan/javascript/zoom.js
creating: microsoftexcel/extensions-builtin/canvas-zoom-and-pan/scripts/
inflating: microsoftexcel/extensions-builtin/canvas-zoom-and-pan/scripts/hotkey_config.py
inflating: microsoftexcel/extensions-builtin/canvas-zoom-and-pan/style.css
creating: microsoftexcel/extensions-builtin/extra-options-section/
creating: microsoftexcel/extensions-builtin/extra-options-section/scripts/
inflating: microsoftexcel/extensions-builtin/extra-options-section/scripts/extra_options_section.py
creating: microsoftexcel/extensions-builtin/LDSR/
inflating: microsoftexcel/extensions-builtin/LDSR/ldsr_model_arch.py
inflating: microsoftexcel/extensions-builtin/LDSR/preload.py
creating: microsoftexcel/extensions-builtin/LDSR/scripts/
inflating: microsoftexcel/extensions-builtin/LDSR/scripts/ldsr_model.py
inflating: microsoftexcel/extensions-builtin/LDSR/sd_hijack_autoencoder.py
inflating: microsoftexcel/extensions-builtin/LDSR/sd_hijack_ddpm_v1.py
inflating: microsoftexcel/extensions-builtin/LDSR/vqvae_quantize.py
creating: microsoftexcel/extensions-builtin/Lora/
inflating: microsoftexcel/extensions-builtin/Lora/extra_networks_lora.py
inflating: microsoftexcel/extensions-builtin/Lora/lora.py
inflating: microsoftexcel/extensions-builtin/Lora/preload.py
creating: microsoftexcel/extensions-builtin/Lora/scripts/
inflating: microsoftexcel/extensions-builtin/Lora/scripts/lora_script.py
inflating: microsoftexcel/extensions-builtin/Lora/ui_extra_networks_lora.py
creating: microsoftexcel/extensions-builtin/prompt-bracket-checker/
creating: microsoftexcel/extensions-builtin/prompt-bracket-checker/javascript/
inflating: microsoftexcel/extensions-builtin/prompt-bracket-checker/javascript/prompt-bracket-checker.js
creating: microsoftexcel/extensions-builtin/ScuNET/
inflating: microsoftexcel/extensions-builtin/ScuNET/preload.py
creating: microsoftexcel/extensions-builtin/ScuNET/scripts/
inflating: microsoftexcel/extensions-builtin/ScuNET/scripts/scunet_model.py
inflating: microsoftexcel/extensions-builtin/ScuNET/scunet_model_arch.py
creating: microsoftexcel/extensions-builtin/SwinIR/
inflating: microsoftexcel/extensions-builtin/SwinIR/preload.py
creating: microsoftexcel/extensions-builtin/SwinIR/scripts/
inflating: microsoftexcel/extensions-builtin/SwinIR/scripts/swinir_model.py
inflating: microsoftexcel/extensions-builtin/SwinIR/swinir_model_arch.py
inflating: microsoftexcel/extensions-builtin/SwinIR/swinir_model_arch_v2.py
creating: microsoftexcel/html/
inflating: microsoftexcel/html/card-no-preview.png
inflating: microsoftexcel/html/extra-networks-card.html
inflating: microsoftexcel/html/extra-networks-no-cards.html
inflating: microsoftexcel/html/footer.html
inflating: microsoftexcel/html/image-update.svg
inflating: microsoftexcel/html/licenses.html
creating: microsoftexcel/javascript/
inflating: microsoftexcel/javascript/aspectRatioOverlay.js
inflating: microsoftexcel/javascript/contextMenus.js
inflating: microsoftexcel/javascript/dragdrop.js
inflating: microsoftexcel/javascript/edit-attention.js
inflating: microsoftexcel/javascript/extensions.js
inflating: microsoftexcel/javascript/extraNetworks.js
inflating: microsoftexcel/javascript/generationParams.js
inflating: microsoftexcel/javascript/hints.js
inflating: microsoftexcel/javascript/hires_fix.js
inflating: microsoftexcel/javascript/imageMaskFix.js
inflating: microsoftexcel/javascript/imageviewer.js
inflating: microsoftexcel/javascript/imageviewerGamepad.js
inflating: microsoftexcel/javascript/localization.js
inflating: microsoftexcel/javascript/notification.js
inflating: microsoftexcel/javascript/profilerVisualization.js
inflating: microsoftexcel/javascript/progressbar.js
inflating: microsoftexcel/javascript/textualInversion.js
inflating: microsoftexcel/javascript/token-counters.js
inflating: microsoftexcel/javascript/ui.js
inflating: microsoftexcel/javascript/ui_settings_hints.js
inflating: microsoftexcel/launch.py
inflating: microsoftexcel/LICENSE.txt
creating: microsoftexcel/localizations/
extracting: microsoftexcel/localizations/Put localization files here.txt
creating: microsoftexcel/models/
creating: microsoftexcel/models/deepbooru/
extracting: microsoftexcel/models/deepbooru/Put your deepbooru release project folder here.txt
creating: microsoftexcel/models/karlo/
inflating: microsoftexcel/models/karlo/ViT-L-14_stats.th
creating: microsoftexcel/models/Stable-diffusion/
extracting: microsoftexcel/models/Stable-diffusion/Put Stable Diffusion checkpoints here.txt
creating: microsoftexcel/models/VAE/
extracting: microsoftexcel/models/VAE/Put VAE here.txt
creating: microsoftexcel/models/VAE-approx/
inflating: microsoftexcel/models/VAE-approx/model.pt
creating: microsoftexcel/modules/
creating: microsoftexcel/modules/api/
inflating: microsoftexcel/modules/api/api.py
inflating: microsoftexcel/modules/api/models.py
inflating: microsoftexcel/modules/call_queue.py
inflating: microsoftexcel/modules/cmd_args.py
creating: microsoftexcel/modules/codeformer/
inflating: microsoftexcel/modules/codeformer/codeformer_arch.py
inflating: microsoftexcel/modules/codeformer/vqgan_arch.py
inflating: microsoftexcel/modules/codeformer_model.py
inflating: microsoftexcel/modules/config_states.py
inflating: microsoftexcel/modules/deepbooru.py
inflating: microsoftexcel/modules/deepbooru_model.py
inflating: microsoftexcel/modules/devices.py
inflating: microsoftexcel/modules/errors.py
inflating: microsoftexcel/modules/esrgan_model.py
inflating: microsoftexcel/modules/esrgan_model_arch.py
inflating: microsoftexcel/modules/extensions.py
inflating: microsoftexcel/modules/extras.py
inflating: microsoftexcel/modules/extra_networks.py
inflating: microsoftexcel/modules/extra_networks_hypernet.py
inflating: microsoftexcel/modules/face_restoration.py
inflating: microsoftexcel/modules/generation_parameters_copypaste.py
inflating: microsoftexcel/modules/gfpgan_model.py
inflating: microsoftexcel/modules/gitpython_hack.py
inflating: microsoftexcel/modules/hashes.py
creating: microsoftexcel/modules/hypernetworks/
inflating: microsoftexcel/modules/hypernetworks/hypernetwork.py
inflating: microsoftexcel/modules/hypernetworks/ui.py
inflating: microsoftexcel/modules/images.py
inflating: microsoftexcel/modules/img2img.py
inflating: microsoftexcel/modules/import_hook.py
inflating: microsoftexcel/modules/interrogate.py
inflating: microsoftexcel/modules/launch_utils.py
inflating: microsoftexcel/modules/localization.py
inflating: microsoftexcel/modules/lowvram.py
inflating: microsoftexcel/modules/mac_specific.py
inflating: microsoftexcel/modules/masking.py
inflating: microsoftexcel/modules/memmon.py
inflating: microsoftexcel/modules/modelloader.py
creating: microsoftexcel/modules/models/
creating: microsoftexcel/modules/models/diffusion/
inflating: microsoftexcel/modules/models/diffusion/ddpm_edit.py
creating: microsoftexcel/modules/models/diffusion/uni_pc/
inflating: microsoftexcel/modules/models/diffusion/uni_pc/sampler.py
inflating: microsoftexcel/modules/models/diffusion/uni_pc/uni_pc.py
inflating: microsoftexcel/modules/models/diffusion/uni_pc/__init__.py
inflating: microsoftexcel/modules/ngrok.py
inflating: microsoftexcel/modules/paths.py
inflating: microsoftexcel/modules/paths_internal.py
inflating: microsoftexcel/modules/postprocessing.py
inflating: microsoftexcel/modules/processing.py
inflating: microsoftexcel/modules/progress.py
inflating: microsoftexcel/modules/prompt_parser.py
inflating: microsoftexcel/modules/realesrgan_model.py
inflating: microsoftexcel/modules/restart.py
inflating: microsoftexcel/modules/Roboto-Regular.ttf
inflating: microsoftexcel/modules/safe.py
inflating: microsoftexcel/modules/scripts.py
inflating: microsoftexcel/modules/scripts_auto_postprocessing.py
inflating: microsoftexcel/modules/scripts_postprocessing.py
inflating: microsoftexcel/modules/script_callbacks.py
inflating: microsoftexcel/modules/script_loading.py
inflating: microsoftexcel/modules/sd_disable_initialization.py
inflating: microsoftexcel/modules/sd_hijack.py
inflating: microsoftexcel/modules/sd_hijack_checkpoint.py
inflating: microsoftexcel/modules/sd_hijack_clip.py
inflating: microsoftexcel/modules/sd_hijack_clip_old.py
inflating: microsoftexcel/modules/sd_hijack_inpainting.py
inflating: microsoftexcel/modules/sd_hijack_ip2p.py
inflating: microsoftexcel/modules/sd_hijack_open_clip.py
inflating: microsoftexcel/modules/sd_hijack_optimizations.py
inflating: microsoftexcel/modules/sd_hijack_unet.py
inflating: microsoftexcel/modules/sd_hijack_utils.py
inflating: microsoftexcel/modules/sd_hijack_xlmr.py
inflating: microsoftexcel/modules/sd_models.py
inflating: microsoftexcel/modules/sd_models_config.py
inflating: microsoftexcel/modules/sd_samplers.py
inflating: microsoftexcel/modules/sd_samplers_common.py
inflating: microsoftexcel/modules/sd_samplers_compvis.py
inflating: microsoftexcel/modules/sd_samplers_kdiffusion.py
inflating: microsoftexcel/modules/sd_unet.py
inflating: microsoftexcel/modules/sd_vae.py
inflating: microsoftexcel/modules/sd_vae_approx.py
inflating: microsoftexcel/modules/sd_vae_taesd.py
inflating: microsoftexcel/modules/shared.py
inflating: microsoftexcel/modules/shared_items.py
inflating: microsoftexcel/modules/styles.py
inflating: microsoftexcel/modules/sub_quadratic_attention.py
inflating: microsoftexcel/modules/sysinfo.py
creating: microsoftexcel/modules/textual_inversion/
inflating: microsoftexcel/modules/textual_inversion/autocrop.py
inflating: microsoftexcel/modules/textual_inversion/dataset.py
inflating: microsoftexcel/modules/textual_inversion/image_embedding.py
inflating: microsoftexcel/modules/textual_inversion/learn_schedule.py
inflating: microsoftexcel/modules/textual_inversion/logging.py
inflating: microsoftexcel/modules/textual_inversion/preprocess.py
inflating: microsoftexcel/modules/textual_inversion/test_embedding.png
inflating: microsoftexcel/modules/textual_inversion/textual_inversion.py
inflating: microsoftexcel/modules/textual_inversion/ui.py
inflating: microsoftexcel/modules/timer.py
inflating: microsoftexcel/modules/txt2img.py
inflating: microsoftexcel/modules/ui.py
inflating: microsoftexcel/modules/ui_common.py
inflating: microsoftexcel/modules/ui_components.py
inflating: microsoftexcel/modules/ui_extensions.py
inflating: microsoftexcel/modules/ui_extra_networks.py
inflating: microsoftexcel/modules/ui_extra_networks_checkpoints.py
inflating: microsoftexcel/modules/ui_extra_networks_hypernets.py
inflating: microsoftexcel/modules/ui_extra_networks_textual_inversion.py
inflating: microsoftexcel/modules/ui_gradio_extensions.py
inflating: microsoftexcel/modules/ui_loadsave.py
inflating: microsoftexcel/modules/ui_postprocessing.py
inflating: microsoftexcel/modules/ui_settings.py
inflating: microsoftexcel/modules/ui_tempdir.py
inflating: microsoftexcel/modules/upscaler.py
inflating: microsoftexcel/modules/xlmr.py
inflating: microsoftexcel/package.json
inflating: microsoftexcel/pyproject.toml
inflating: microsoftexcel/README.md
inflating: microsoftexcel/requirements-test.txt
inflating: microsoftexcel/requirements.txt
inflating: microsoftexcel/requirements_versions.txt
inflating: microsoftexcel/screenshot.png
inflating: microsoftexcel/script.js
creating: microsoftexcel/scripts/
inflating: microsoftexcel/scripts/custom_code.py
inflating: microsoftexcel/scripts/img2imgalt.py
inflating: microsoftexcel/scripts/loopback.py
inflating: microsoftexcel/scripts/outpainting_mk_2.py
inflating: microsoftexcel/scripts/poor_mans_outpainting.py
inflating: microsoftexcel/scripts/postprocessing_codeformer.py
inflating: microsoftexcel/scripts/postprocessing_gfpgan.py
inflating: microsoftexcel/scripts/postprocessing_upscale.py
inflating: microsoftexcel/scripts/prompts_from_file.py
inflating: microsoftexcel/scripts/prompt_matrix.py
inflating: microsoftexcel/scripts/sd_upscale.py
inflating: microsoftexcel/scripts/xyz_grid.py
inflating: microsoftexcel/style.css
creating: microsoftexcel/test/
inflating: microsoftexcel/test/conftest.py
inflating: microsoftexcel/test/test_extras.py
creating: microsoftexcel/test/test_files/
inflating: microsoftexcel/test/test_files/empty.pt
inflating: microsoftexcel/test/test_files/img2img_basic.png
inflating: microsoftexcel/test/test_files/mask_basic.png
inflating: microsoftexcel/test/test_img2img.py
inflating: microsoftexcel/test/test_txt2img.py
inflating: microsoftexcel/test/test_utils.py
extracting: microsoftexcel/test/__init__.py
creating: microsoftexcel/textual_inversion_templates/
inflating: microsoftexcel/textual_inversion_templates/hypernetwork.txt
inflating: microsoftexcel/textual_inversion_templates/none.txt
inflating: microsoftexcel/textual_inversion_templates/style.txt
inflating: microsoftexcel/textual_inversion_templates/style_filewords.txt
inflating: microsoftexcel/textual_inversion_templates/subject.txt
inflating: microsoftexcel/textual_inversion_templates/subject_filewords.txt
inflating: microsoftexcel/webui-macos-env.sh
inflating: microsoftexcel/webui-user.bat
inflating: microsoftexcel/webui-user.sh
inflating: microsoftexcel/webui.bat
inflating: microsoftexcel/webui.py
inflating: microsoftexcel/webui.sh
Cloning into '/content/microsoftexcel/extensions/microsoftexcel-tunnels'...
remote: Enumerating objects: 143, done.
remote: Counting objects: 100% (38/38), done.
remote: Compressing objects: 100% (14/14), done.
remote: Total 143 (delta 35), reused 24 (delta 24), pack-reused 105
Receiving objects: 100% (143/143), 26.38 KiB | 13.19 MiB/s, done.
Resolving deltas: 100% (62/62), done.
Cloning into '/content/microsoftexcel/extensions/microsoftexcel-controlnet'...
remote: Enumerating objects: 7327, done.
remote: Counting objects: 100% (2275/2275), done.
remote: Compressing objects: 100% (128/128), done.
remote: Total 7327 (delta 2172), reused 2178 (delta 2147), pack-reused 5052
Receiving objects: 100% (7327/7327), 15.36 MiB | 9.38 MiB/s, done.
Resolving deltas: 100% (4220/4220), done.
Cloning into '/content/microsoftexcel/extensions/openpose-editor'...
remote: Enumerating objects: 403, done.
remote: Counting objects: 100% (123/123), done.
remote: Compressing objects: 100% (56/56), done.
remote: Total 403 (delta 88), reused 80 (delta 67), pack-reused 280
Receiving objects: 100% (403/403), 1.15 MiB | 14.54 MiB/s, done.
Resolving deltas: 100% (170/170), done.
Cloning into '/content/microsoftexcel/extensions/posex'...
remote: Enumerating objects: 407, done.
remote: Counting objects: 100% (43/43), done.
remote: Compressing objects: 100% (19/19), done.
remote: Total 407 (delta 21), reused 35 (delta 19), pack-reused 364
Receiving objects: 100% (407/407), 11.39 MiB | 8.04 MiB/s, done.
Resolving deltas: 100% (196/196), done.
Cloning into '/content/microsoftexcel/extensions/a1111-microsoftexcel-tagcomplete'...
remote: Enumerating objects: 1341, done.
remote: Counting objects: 100% (1341/1341), done.
remote: Compressing objects: 100% (505/505), done.
remote: Total 1341 (delta 783), reused 1251 (delta 775), pack-reused 0
Receiving objects: 100% (1341/1341), 3.85 MiB | 4.02 MiB/s, done.
Resolving deltas: 100% (783/783), done.
Cloning into '/content/microsoftexcel/extensions/microsoftexcel-supermerger'...
remote: Enumerating objects: 720, done.
remote: Counting objects: 100% (237/237), done.
remote: Compressing objects: 100% (94/94), done.
remote: Total 720 (delta 180), reused 183 (delta 143), pack-reused 483
Receiving objects: 100% (720/720), 307.44 KiB | 13.37 MiB/s, done.
Resolving deltas: 100% (374/374), done.
Cloning into '/content/microsoftexcel/extensions/ultimate-upscale-for-automatic1111'...
remote: Enumerating objects: 309, done.
remote: Counting objects: 100% (84/84), done.
remote: Compressing objects: 100% (46/46), done.
remote: Total 309 (delta 34), reused 64 (delta 23), pack-reused 225
Receiving objects: 100% (309/309), 32.23 MiB | 11.17 MiB/s, done.
Resolving deltas: 100% (109/109), done.
Cloning into '/content/microsoftexcel/extensions/a1111-microsoftexcel-locon'...
remote: Enumerating objects: 188, done.
remote: Counting objects: 100% (43/43), done.
remote: Compressing objects: 100% (20/20), done.
remote: Total 188 (delta 18), reused 40 (delta 17), pack-reused 145
Receiving objects: 100% (188/188), 47.64 KiB | 15.88 MiB/s, done.
Resolving deltas: 100% (93/93), done.
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 1229 100 1229 0 0 4708 0 --:--:-- --:--:-- --:--:-- 4708
100 68776 100 68776 0 0 239k 0 --:--:-- --:--:-- --:--:-- 239k
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 1195 100 1195 0 0 5063 0 --:--:-- --:--:-- --:--:-- 5063
100 1509k 100 1509k 0 0 5428k 0 --:--:-- --:--:-- --:--:-- 5428k
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 1191 100 1191 0 0 4983 0 --:--:-- --:--:-- --:--:-- 4962
100 118M 100 118M 0 0 212M 0 --:--:-- --:--:-- --:--:-- 212M
/content/microsoftexcel/extensions
Archive: /content/microsoftexcel/extensions/microsoftexcel-images-browser.zip
creating: sd-webui-images-browser/
inflating: sd-webui-images-browser/.DS_Store
creating: sd-webui-images-browser/.git/
creating: sd-webui-images-browser/.git/branches/
inflating: sd-webui-images-browser/.git/config
inflating: sd-webui-images-browser/.git/description
inflating: sd-webui-images-browser/.git/HEAD
creating: sd-webui-images-browser/.git/hooks/
inflating: sd-webui-images-browser/.git/hooks/applypatch-msg.sample
inflating: sd-webui-images-browser/.git/hooks/commit-msg.sample
inflating: sd-webui-images-browser/.git/hooks/fsmonitor-watchman.sample
inflating: sd-webui-images-browser/.git/hooks/post-update.sample
inflating: sd-webui-images-browser/.git/hooks/pre-applypatch.sample
inflating: sd-webui-images-browser/.git/hooks/pre-commit.sample
inflating: sd-webui-images-browser/.git/hooks/pre-merge-commit.sample
inflating: sd-webui-images-browser/.git/hooks/pre-push.sample
inflating: sd-webui-images-browser/.git/hooks/pre-rebase.sample
inflating: sd-webui-images-browser/.git/hooks/pre-receive.sample
inflating: sd-webui-images-browser/.git/hooks/prepare-commit-msg.sample
inflating: sd-webui-images-browser/.git/hooks/update.sample
inflating: sd-webui-images-browser/.git/index
creating: sd-webui-images-browser/.git/info/
inflating: sd-webui-images-browser/.git/info/exclude
creating: sd-webui-images-browser/.git/logs/
inflating: sd-webui-images-browser/.git/logs/HEAD
creating: sd-webui-images-browser/.git/logs/refs/
creating: sd-webui-images-browser/.git/logs/refs/heads/
inflating: sd-webui-images-browser/.git/logs/refs/heads/main
creating: sd-webui-images-browser/.git/logs/refs/remotes/
creating: sd-webui-images-browser/.git/logs/refs/remotes/origin/
inflating: sd-webui-images-browser/.git/logs/refs/remotes/origin/HEAD
creating: sd-webui-images-browser/.git/objects/
creating: sd-webui-images-browser/.git/objects/info/
creating: sd-webui-images-browser/.git/objects/pack/
inflating: sd-webui-images-browser/.git/objects/pack/pack-8c09dc0723b064b3aad4351dc4af51e311b0601c.idx
inflating: sd-webui-images-browser/.git/objects/pack/pack-8c09dc0723b064b3aad4351dc4af51e311b0601c.pack
inflating: sd-webui-images-browser/.git/packed-refs
creating: sd-webui-images-browser/.git/refs/
creating: sd-webui-images-browser/.git/refs/heads/
inflating: sd-webui-images-browser/.git/refs/heads/main
creating: sd-webui-images-browser/.git/refs/remotes/
creating: sd-webui-images-browser/.git/refs/remotes/origin/
inflating: sd-webui-images-browser/.git/refs/remotes/origin/HEAD
creating: sd-webui-images-browser/.git/refs/tags/
inflating: sd-webui-images-browser/.gitignore
creating: sd-webui-images-browser/javascript/
inflating: sd-webui-images-browser/javascript/images_history.js
inflating: sd-webui-images-browser/README.md
creating: sd-webui-images-browser/scripts/
inflating: sd-webui-images-browser/scripts/images_history.py
/content/microsoftexcel/embeddings
Archive: /content/microsoftexcel/embeddings/embeddings.zip
creating: embeddings/
inflating: embeddings/21charturnerv2.pt
inflating: embeddings/Asian-Less-Neg.pt
inflating: embeddings/bad-artist-anime.pt
inflating: embeddings/bad-artist.pt
inflating: embeddings/bad-hands-5.pt
inflating: embeddings/bad-image-v2-39000.pt
inflating: embeddings/bad-picture-chill-75v.pt
inflating: embeddings/BadDream.pt
inflating: embeddings/badhandv4.pt
inflating: embeddings/bad_pictures.pt
inflating: embeddings/bad_prompt.pt
inflating: embeddings/bad_prompt_version2.pt
inflating: embeddings/charturnerv2.pt
inflating: embeddings/CyberRealistic_Negative-neg.pt
inflating: embeddings/easynegative.safetensors
inflating: embeddings/EasyNegativeV2.safetensors
inflating: embeddings/epiCNegative.pt
inflating: embeddings/epiCRealism.pt
inflating: embeddings/FastNegativeEmbedding.pt
inflating: embeddings/HyperStylizeV6.pt
inflating: embeddings/nartfixer.pt
inflating: embeddings/negative_hand-neg.pt
inflating: embeddings/nfixer.pt
inflating: embeddings/ng_deepnegative_v1_75t.pt
inflating: embeddings/nrealfixer.pt
inflating: embeddings/pureerosface_v1.pt
inflating: embeddings/rmadanegative402_sd15-neg.pt
inflating: embeddings/ulzzang-6500-v1.1.bin
inflating: embeddings/ulzzang-6500.pt
inflating: embeddings/UnrealisticDream.pt
inflating: embeddings/verybadimagenegative_v1.3.pt
/content/microsoftexcel/models/ESRGAN
Archive: /content/microsoftexcel/models/ESRGAN/upscalers.zip
inflating: 4x-UltraSharp.pth
inflating: 4x_foolhardy_Remacri.pth
/content
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 1133 100 1133 0 0 4800 0 --:--:-- --:--:-- --:--:-- 4800
100 4067M 100 4067M 0 0 221M 0 0:00:18 0:00:18 --:--:-- 242M
/content/microsoftexcel
fatal: not a git repository (or any of the parent directories): .git
fatal: not a git repository (or any of the parent directories): .git
Python 3.10.12 (main, Jun 7 2023, 12:45:35) [GCC 9.4.0]
Version: ## 1.4.0
Commit hash: <none>
Installing gfpgan
Installing clip
Installing open_clip
Installing xformers
Cloning Stable Diffusion into /content/microsoftexcel/repositories/stable-diffusion-stability-ai...
Cloning K-diffusion into /content/microsoftexcel/repositories/k-diffusion...
Cloning CodeFormer into /content/microsoftexcel/repositories/CodeFormer...
Cloning BLIP into /content/microsoftexcel/repositories/BLIP...
Installing requirements for CodeFormer
Installing requirements
Installing sd-webui-controlnet requirement: mediapipe
Installing sd-webui-controlnet requirement: svglib
Installing sd-webui-controlnet requirement: fvcore
Installing pycloudflared
Installing diffusers
Launching Web UI with arguments: --share --disable-safe-unpickle --no-half-vae --xformers --enable-insecure-extension --gradio-queue
2023-06-27 13:53:22.297173: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-06-27 13:53:23.287285: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
╭───────────────────── Traceback (most recent call last) ──────────────────────╮
│ /content/microsoftexcel/launch.py:38 in <module> │
│ │
│ 35 │
│ 36 │
│ 37 if __name__ == "__main__": │
│ ❱ 38 │ main() │
│ 39 │
│ │
│ /content/microsoftexcel/launch.py:34 in main │
│ │
│ 31 │ if args.test_server: │
│ 32 │ │ configure_for_tests() │
│ 33 │ │
│ ❱ 34 │ start() │
│ 35 │
│ 36 │
│ 37 if __name__ == "__main__": │
│ │
│ /content/microsoftexcel/modules/launch_utils.py:340 in start │
│ │
│ 337 │
│ 338 def start(): │
│ 339 │ print(f"Launching {'API server' if '--nowebui' in sys.argv else 'W │
│ ❱ 340 │ import webui │
│ 341 │ if '--nowebui' in sys.argv: │
│ 342 │ │ webui.api_only() │
│ 343 │ else: │
│ │
│ /content/microsoftexcel/webui.py:42 in <module> │
│ │
│ 39 startup_timer.record("import ldm") │
│ 40 │
│ 41 from modules import extra_networks │
│ ❱ 42 from modules.call_queue import wrap_gradio_gpu_call, wrap_queued_call, │
│ 43 │
│ 44 # Truncate version number of nightly/local build of PyTorch to not cau │
│ 45 if ".dev" in torch.__version__ or "+git" in torch.__version__: │
│ │
│ /content/microsoftexcel/modules/call_queue.py:5 in <module> │
│ │
│ 2 import threading │
│ 3 import time │
│ 4 │
│ ❱ 5 from modules import shared, progress, errors │
│ 6 │
│ 7 queue_lock = threading.Lock() │
│ 8 │
│ │
│ /content/microsoftexcel/modules/shared.py:18 in <module> │
│ │
│ 15 import modules.devices as devices │
│ 16 from modules import localization, script_loading, errors, ui_component │
│ 17 from modules.paths_internal import models_path, script_path, data_path │
│ ❱ 18 from ldm.models.diffusion.ddpm import LatentDiffusion │
│ 19 from typing import Optional │
│ 20 │
│ 21 demo = None │
│ │
│ /content/microsoftexcel/repositories/stable-diffusion-stability-ai/ldm/model │
│ s/diffusion/ddpm.py:20 in <module> │
│ │
│ 17 import itertools │
│ 18 from tqdm import tqdm │
│ 19 from torchvision.utils import make_grid │
│ ❱ 20 from pytorch_lightning.utilities.distributed import rank_zero_only │
│ 21 from omegaconf import ListConfig │
│ 22 │
│ 23 from ldm.util import log_txt_as_img, exists, default, ismap, isimage, │
╰──────────────────────────────────────────────────────────────────────────────╯
ModuleNotFoundError: No module named 'pytorch_lightning.utilities.distributed'
```
### Additional information
_No response_ | null | null | null | {'base_commit': 'fab73f2e7d388ca99cdb3d5de7f36c0b9a1a3b1c', 'files': [{'path': 'extensions-builtin/LDSR/sd_hijack_ddpm_v1.py', 'Loc': {'(None, None, None)': {'mod': [17]}}, 'status': 'modified'}, {'path': 'modules/models/diffusion/ddpm_edit.py', 'Loc': {'(None, None, None)': {'mod': [22]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"modules/models/diffusion/ddpm_edit.py",
"extensions-builtin/LDSR/sd_hijack_ddpm_v1.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
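The row above ends with `ModuleNotFoundError: No module named 'pytorch_lightning.utilities.distributed'`, and the recorded fix touched the import lines of `modules/models/diffusion/ddpm_edit.py` and `extensions-builtin/LDSR/sd_hijack_ddpm_v1.py`. Newer pytorch_lightning releases moved `rank_zero_only` out of `utilities.distributed`, so patches of this kind typically try the old dotted path and fall back to the new one. A minimal, hedged sketch of that pattern (the helper name and the commented lightning paths are illustrative, not the actual patch):

```python
import importlib

def import_first_available(candidates):
    """Return the first attribute importable from a list of dotted paths."""
    last_exc = None
    for dotted in candidates:
        module_path, _, attr = dotted.rpartition(".")
        try:
            return getattr(importlib.import_module(module_path), attr)
        except (ImportError, AttributeError) as exc:
            last_exc = exc
    raise ImportError(f"none of {candidates} could be imported") from last_exc

# Applied to the traceback above, a compatibility import would look like:
# rank_zero_only = import_first_available([
#     "pytorch_lightning.utilities.distributed.rank_zero_only",  # old location
#     "pytorch_lightning.utilities.rank_zero.rank_zero_only",    # newer releases
# ])
```

The same helper works for any library that relocates a symbol between releases, which is why this shape of fix shows up across both files listed in the row.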
AUTOMATIC1111 | stable-diffusion-webui | ef4c94e1cfe66299227aa95a28c2380d21cb1600 | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/3902 | [Feature Request]: | Finer control of CFG Scale? now it goes by 0.5 steps. I'm trying to replicate work i did on other app which have CFG scale control by 0.1. i cannot get the same result, of course.
| null | null | null | {} | [] | [
"ui-config.json"
] | [] | {
"iss_type": "4",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "1",
"info_type": "Config"
} | {
"code": [
"ui-config.json"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
AUTOMATIC1111 | stable-diffusion-webui | bf30673f5132c8f28357b31224c54331e788d3e7 | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/3301 | bug-report | Expected all tensors to be on the same device | RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper__index_select)
how to pick the CUDA:0 ? | null | null | null | {'base_commit': 'bf30673f5132c8f28357b31224c54331e788d3e7', 'files': [{'path': 'requirements.txt', 'Loc': {'(None, None, 17)': {'mod': [17]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Doc\n依赖声明"
} | {
"code": [],
"doc": [],
"test": [],
"config": [
"requirements.txt"
],
"asset": []
} | null |
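The fix location recorded for the CFG-scale request above is `ui-config.json`, the file where stable-diffusion-webui persists per-widget slider settings using a `tab/label/property` key pattern. A finer step would be a one-value tweak along these lines (the exact key names below are an assumption — check the keys the UI actually generated in your own `ui-config.json`):

```json
{
    "txt2img/CFG Scale/minimum": 1.0,
    "txt2img/CFG Scale/maximum": 30.0,
    "txt2img/CFG Scale/step": 0.1
}
```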
AUTOMATIC1111 | stable-diffusion-webui | 39919c40dd18f5a14ae21403efea1b0f819756c7 | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/2190 | bug-report | How to use .ckpt model on repo | Hello everyone!
I was able to train a custom model using Dreambooth and I now have a custom ckpt trained on myself. Where do I put this file to be able to use it in this repo?
I dropped in into models but not sure what to do next?
Appreciate any help | null | null | null | {'base_commit': '39919c40dd18f5a14ae21403efea1b0f819756c7', 'files': [{'path': 'models/Stable-diffusion', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [],
"doc": [],
"test": [],
"config": [],
"asset": [
"models/Stable-diffusion"
]
} | null |
AUTOMATIC1111 | stable-diffusion-webui | 556c36b9607e3f4eacdddc85f8e7a78b29476ea7 | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/1614 | enhancement | Feature request: GPU temperature control | **Is your feature request related to a problem? Please describe.**
I don't like 85 degrees (Celsius) on my GPU, especially if it lasts more than 30 minutes or even 1 hour
**Describe the solution you'd like**
If temp on a GPU is more than {maxTemp} and it lasts {accumulateTempTime} it will pause processing for {cooldownTime} or until it cools to {minTemp}, so my GPU won't end up with exploding
**Describe alternatives you've considered**
Not pausing, but lowering the activity to a few tens of seconds per step.
**Additional context**
Not lowering it in hard core, but smartly lowering activity (using sth similar to PID), so the temp will stay at {desiredTemp}
| null | null | null | {} | [] | [] | [
{
"org": "w-e-w",
"pro": "stable-diffusion-webui-GPU-temperature-protection"
}
] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "comment",
"loc_scope": "2",
"info_type": "Code"
} | {
"code": [],
"doc": [],
"test": [],
"config": [],
"asset": [
"stable-diffusion-webui-GPU-temperature-protection"
]
} | null |
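The feature request above sketches pause/resume thresholds ({maxTemp}, {minTemp}, {cooldownTime}), and the linked community extension implements that idea. A minimal, hardware-independent sketch of the control loop follows — the temperature reader is injected as a callable, since a real implementation would shell out to a vendor tool such as `nvidia-smi` (how you read the temperature is an assumption here, not part of the request):

```python
import time

def wait_until_cool(read_temp, max_temp=85, resume_temp=65,
                    poll_seconds=5.0, sleep=time.sleep):
    """Pause while the GPU is above max_temp.

    Blocks (by sleeping between polls) until read_temp() drops to
    resume_temp, mirroring the hysteresis described in the request.
    Returns the number of polls spent waiting; 0 means no throttling
    was needed.  read_temp is any callable returning degrees Celsius.
    """
    if read_temp() < max_temp:
        return 0
    polls = 0
    while read_temp() > resume_temp:
        sleep(poll_seconds)
        polls += 1
    return polls
```

Calling this between generation steps gives the "pause processing until it cools" behavior; replacing the fixed `poll_seconds` with a value derived from the temperature error would approximate the PID-style smoothing the request mentions.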
python | cpython | c40b7afee28fb928fdc3f07a9a7e9d4ef17347ba | https://github.com/python/cpython/issues/39472 | docs | Wrong reference for specific minidom methods | BPO | [832251](https://bugs.python.org/issue832251)
--- | :---
Nosy | @freddrake
<sup>*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup>
<details><summary>Show more details</summary><p>
GitHub fields:
```python
assignee = 'https://github.com/freddrake'
closed_at = <Date 2004-04-01.04:19:08.000>
created_at = <Date 2003-10-29.09:39:39.000>
labels = ['docs']
title = 'Wrong reference for specific minidom methods'
updated_at = <Date 2004-04-01.04:19:08.000>
user = 'https://bugs.python.org/nerby'
```
bugs.python.org fields:
```python
activity = <Date 2004-04-01.04:19:08.000>
actor = 'fdrake'
assignee = 'fdrake'
closed = True
closed_date = None
closer = None
components = ['Documentation']
creation = <Date 2003-10-29.09:39:39.000>
creator = 'nerby'
dependencies = []
files = []
hgrepos = []
issue_num = 832251
keywords = []
message_count = 3.0
messages = ['18799', '18800', '18801']
nosy_count = 2.0
nosy_names = ['fdrake', 'nerby']
pr_nums = []
priority = 'high'
resolution = 'fixed'
stage = None
status = 'closed'
superseder = None
type = None
url = 'https://bugs.python.org/issue832251'
versions = ['Python 2.3']
```
</p></details>
| null | null | null | {'base_commit': 'c40b7afee28fb928fdc3f07a9a7e9d4ef17347ba', 'files': [{'path': 'Doc/lib/xmldomminidom.tex', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "2\ndoc问题",
"iss_reason": "2\ndoc错误,不是bug",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Doc"
} | {
"code": [],
"doc": [],
"test": [],
"config": [],
"asset": [
"Doc/lib/xmldomminidom.tex"
]
} | null |
python | cpython | 5a65c2d43607a5033d7171445848cde21f07d81d | https://github.com/python/cpython/issues/32681 | interpreter-core | .pyc writing/reading race condition (PR#189) | BPO | [210610](https://bugs.python.org/issue210610)
--- | :---
Nosy | @gvanrossum
<sup>*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup>
<details><summary>Show more details</summary><p>
GitHub fields:
```python
assignee = 'https://github.com/gvanrossum'
closed_at = <Date 2000-09-20.20:33:21.000>
created_at = <Date 2000-07-31.21:05:42.000>
labels = ['interpreter-core']
title = '.pyc writing/reading race condition (PR#189)'
updated_at = <Date 2000-09-20.20:33:21.000>
user = 'https://bugs.python.org/anonymous'
```
bugs.python.org fields:
```python
activity = <Date 2000-09-20.20:33:21.000>
actor = 'gvanrossum'
assignee = 'gvanrossum'
closed = True
closed_date = None
closer = None
components = ['Interpreter Core']
creation = <Date 2000-07-31.21:05:42.000>
creator = 'anonymous'
dependencies = []
files = []
hgrepos = []
issue_num = 210610
keywords = []
message_count = 4.0
messages = ['66', '67', '68', '69']
nosy_count = 2.0
nosy_names = ['gvanrossum', 'jhylton']
pr_nums = []
priority = 'low'
resolution = 'fixed'
stage = None
status = 'closed'
superseder = None
type = None
url = 'https://bugs.python.org/issue210610'
versions = []
```
</p></details>
| null | null | null | {'base_commit': '5a65c2d43607a5033d7171445848cde21f07d81d', 'files': [{'path': 'Doc/library/os.rst', 'Loc': {}}]} | [] | [
"fcntl.h"
] | [] | {
"iss_type": "2",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "2",
"info_type": "Code"
} | {
"code": [
"fcntl.h"
],
"doc": [
"Doc/library/os.rst"
],
"test": [],
"config": [],
"asset": []
} | null |
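The row above describes a race between one process writing a `.pyc` and another importing it mid-write (the associated location, `fcntl.h`, points at file-locking machinery). One standard, portable way to close such a window — a sketch of the general technique, not the actual historical patch — is to write the whole file to a temporary name in the same directory and then rename it into place, since the rename is atomic on POSIX filesystems and readers never observe a half-written file:

```python
import os
import tempfile

def atomic_write_bytes(path, data):
    """Write data to path so concurrent readers never see a partial file."""
    directory = os.path.dirname(os.path.abspath(path))
    # The temp file must live on the same filesystem for the rename to be atomic.
    fd, tmp_path = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "wb") as tmp:
            tmp.write(data)
        os.replace(tmp_path, path)  # atomic on POSIX; also overwrites on Windows
    except BaseException:
        os.unlink(tmp_path)
        raise
```

Advisory locks (via `fcntl`) solve the same problem cooperatively, but only if every reader also takes the lock; the rename approach needs no cooperation from readers, which is why it is the more common fix for compiled-file caches.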
python | cpython | adf03c3544084359d89e7a0bc2a5aa0561f1a0f2 | https://github.com/python/cpython/issues/68620 | stdlib
release-blocker | Upgrade windows builds to use OpenSSL 1.0.2c | BPO | [24432](https://bugs.python.org/issue24432)
--- | :---
Nosy | @pfmoore, @pitrou, @larryhastings, @giampaolo, @tiran, @tjguk, @benjaminp, @ned-deily, @alex, @bitdancer, @zware, @zooba, @dstufft
<sup>*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup>
<details><summary>Show more details</summary><p>
GitHub fields:
```python
assignee = 'https://github.com/zooba'
closed_at = <Date 2015-07-03.22:28:01.834>
created_at = <Date 2015-06-11.15:05:25.361>
labels = ['library', 'release-blocker']
title = 'Upgrade windows builds to use OpenSSL 1.0.2c'
updated_at = <Date 2015-07-04.06:47:41.096>
user = 'https://github.com/alex'
```
bugs.python.org fields:
```python
activity = <Date 2015-07-04.06:47:41.096>
actor = 'python-dev'
assignee = 'steve.dower'
closed = True
closed_date = <Date 2015-07-03.22:28:01.834>
closer = 'steve.dower'
components = ['Library (Lib)']
creation = <Date 2015-06-11.15:05:25.361>
creator = 'alex'
dependencies = []
files = []
hgrepos = []
issue_num = 24432
keywords = ['security_issue']
message_count = 29.0
messages = ['245173', '245178', '245283', '246116', '246133', '246136', '246143', '246172', '246182', '246185', '246189', '246190', '246195', '246205', '246209', '246210', '246211', '246212', '246213', '246214', '246215', '246216', '246221', '246222', '246224', '246225', '246227', '246228', '246240']
nosy_count = 15.0
nosy_names = ['paul.moore', 'janssen', 'pitrou', 'larry', 'giampaolo.rodola', 'christian.heimes', 'tim.golden', 'benjamin.peterson', 'ned.deily', 'alex', 'r.david.murray', 'python-dev', 'zach.ware', 'steve.dower', 'dstufft']
pr_nums = []
priority = 'release blocker'
resolution = 'fixed'
stage = 'resolved'
status = 'closed'
superseder = None
type = None
url = 'https://bugs.python.org/issue24432'
versions = ['Python 2.7', 'Python 3.4', 'Python 3.5', 'Python 3.6']
```
</p></details>
| null | null | null | {'base_commit': 'adf03c3544084359d89e7a0bc2a5aa0561f1a0f2', 'files': [{'path': 'PCbuild/get_externals.bat', 'Loc': {'(None, None, 57)': {'mod': [57]}}, 'status': 'modified'}, {'path': 'PCbuild/python.props', 'Loc': {'(None, None, 37)': {'mod': [37]}}, 'status': 'modified'}, {'path': 'PCbuild/readme.txt', 'Loc': {'(None, None, 200)': {'mod': [200]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [],
"doc": [
"PCbuild/readme.txt"
],
"test": [],
"config": [
"PCbuild/get_externals.bat",
"PCbuild/python.props"
],
"asset": []
} | null |
python | cpython | 5198a5c7aa77367765ae03542b561845094ca30d | https://github.com/python/cpython/issues/48435 | type-bug
stdlib
topic-regex | re module treats raw strings as normal strings | BPO | [4185](https://bugs.python.org/issue4185)
--- | :---
Nosy | @gvanrossum, @loewis, @akuchling, @birkenfeld, @ezio-melotti
Files | <li>[raw-strings-with-re.txt](https://bugs.python.org/file11868/raw-strings-with-re.txt "Uploaded as text/plain at 2008-10-23.03:55:27 by @ezio-melotti"): Interactive Python session with more examples</li>
<sup>*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup>
<details><summary>Show more details</summary><p>
GitHub fields:
```python
assignee = 'https://github.com/akuchling'
closed_at = <Date 2009-01-01.12:00:35.699>
created_at = <Date 2008-10-23.03:55:28.615>
labels = ['expert-regex', 'type-bug', 'library']
title = 're module treats raw strings as normal strings'
updated_at = <Date 2009-01-01.12:00:35.697>
user = 'https://github.com/ezio-melotti'
```
bugs.python.org fields:
```python
activity = <Date 2009-01-01.12:00:35.697>
actor = 'georg.brandl'
assignee = 'akuchling'
closed = True
closed_date = <Date 2009-01-01.12:00:35.699>
closer = 'georg.brandl'
components = ['Library (Lib)', 'Regular Expressions']
creation = <Date 2008-10-23.03:55:28.615>
creator = 'ezio.melotti'
dependencies = []
files = ['11868']
hgrepos = []
issue_num = 4185
keywords = []
message_count = 8.0
messages = ['75133', '75134', '75135', '75760', '77502', '77562', '77575', '78699']
nosy_count = 5.0
nosy_names = ['gvanrossum', 'loewis', 'akuchling', 'georg.brandl', 'ezio.melotti']
pr_nums = []
priority = 'normal'
resolution = 'fixed'
stage = None
status = 'closed'
superseder = None
type = 'behavior'
url = 'https://bugs.python.org/issue4185'
versions = ['Python 2.6', 'Python 2.5', 'Python 2.4']
```
</p></details>
| null | null | null | {'base_commit': '5198a5c7aa77367765ae03542b561845094ca30d', 'files': [{'path': 'Doc/library/re.rst', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "2\nor\n3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Doc"
} | {
"code": [],
"doc": [
"Doc/library/re.rst"
],
"test": [],
"config": [],
"asset": []
} | null |
THUDM | ChatGLM-6B | ab6bcb4968bef335175c0b01972657961b2b1250 | https://github.com/THUDM/ChatGLM-6B/issues/565 | [BUG/Help] <title>Error when fine-tuning with ptuning: RuntimeError: Internal: src/sentencepiece_processor.cc(1101) [model_proto->ParseFromArray(serialized.data(), serialized.size())] | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
Traceback (most recent call last):
File "main.py", line 429, in <module>
main()
File "main.py", line 112, in main
tokenizer = AutoTokenizer.from_pretrained(model_args.model_name_or_path, trust_remote_code=True)
File "/root/miniconda3/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 679, in from_pretrained
return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
File "/root/miniconda3/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1804, in from_pretrained
return cls._from_pretrained(
File "/root/miniconda3/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1958, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "/root/.cache/huggingface/modules/transformers_modules/chatglm-6b/tokenization_chatglm.py", line 205, in __init__
self.sp_tokenizer = SPTokenizer(vocab_file, num_image_tokens=num_image_tokens)
File "/root/.cache/huggingface/modules/transformers_modules/chatglm-6b/tokenization_chatglm.py", line 61, in __init__
self.text_tokenizer = TextTokenizer(vocab_file)
File "/root/.cache/huggingface/modules/transformers_modules/chatglm-6b/tokenization_chatglm.py", line 22, in __init__
self.sp.Load(model_path)
File "/root/miniconda3/lib/python3.8/site-packages/sentencepiece/__init__.py", line 905, in Load
return self.LoadFromFile(model_file)
File "/root/miniconda3/lib/python3.8/site-packages/sentencepiece/__init__.py", line 310, in LoadFromFile
return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
RuntimeError: Internal: src/sentencepiece_processor.cc(1101) [model_proto->ParseFromArray(serialized.data(), serialized.size())]
### Expected Behavior
_No response_
### Steps To Reproduce
Error occurs when fine-tuning with ptuning; the model files are already the latest version.
### Environment
```markdown
PyTorch 1.11.0
Python 3.8(ubuntu20.04)
Cuda 11.3
```
### Anything else?
_No response_ | null | null | null | {} | [] | [
"ice_text.model"
] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "2",
"info_type": "Code"
} | {
"code": [],
"doc": [],
"test": [],
"config": [],
"asset": [
"ice_text.model"
]
} | null | |
THUDM | ChatGLM-6B | 801b1bb57690f0a99943f0a80c839b9ee120f3a7 | https://github.com/THUDM/ChatGLM-6B/issues/388 | Why can't shared GPU memory be used? [Feature] <title> | ### Is your feature request related to a problem? Please describe.
Why can't shared GPU memory be used?
The dedicated 6GB is completely full, but the shared GPU memory is not used at all.
### Solutions
emm
### Additional context
_No response_ | null | null | null | {} | [] | [] | [
{
"org": "Jittor",
"pro": "JittorLLMs"
}
] | {
"iss_type": "5",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "2",
"info_type": "Code"
} | {
"code": [],
"doc": [],
"test": [],
"config": [],
"asset": [
"JittorLLMs"
]
} | null | |
THUDM | ChatGLM-6B | afe08a19ccadc8b238c218b245bb4c1c62598588 | https://github.com/THUDM/ChatGLM-6B/issues/770 | RuntimeError: Internal: src/sentencepiece_processor.cc(1101) [model_proto->ParseFromArray(serialized.data(), serialized.size())] | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
Running python cli_demo.py reports an error
root@4uot40mdrplpv-0:/yx/ChatGLM-6B# python mycli_demo.py
Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Traceback (most recent call last):
File "/yx/ChatGLM-6B/mycli_demo.py", line 6, in <module>
tokenizer = AutoTokenizer.from_pretrained("/yx/ChatGLM-6B/THUDM/chatglm-6b", trust_remote_code=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/transformers/models/auto/tokenization_auto.py", line 679, in from_pretrained
return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 1804, in from_pretrained
return cls._from_pretrained(
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 1958, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/.cache/huggingface/modules/transformers_modules/chatglm-6b/tokenization_chatglm.py", line 205, in __init__
self.sp_tokenizer = SPTokenizer(vocab_file, num_image_tokens=num_image_tokens)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/.cache/huggingface/modules/transformers_modules/chatglm-6b/tokenization_chatglm.py", line 61, in __init__
self.text_tokenizer = TextTokenizer(vocab_file)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/.cache/huggingface/modules/transformers_modules/chatglm-6b/tokenization_chatglm.py", line 22, in __init__
self.sp.Load(model_path)
File "/usr/local/lib/python3.11/site-packages/sentencepiece/__init__.py", line 905, in Load
return self.LoadFromFile(model_file)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sentencepiece/__init__.py", line 310, in LoadFromFile
return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Internal: src/sentencepiece_processor.cc(1101) [model_proto->ParseFromArray(serialized.data(), serialized.size())]
I am running this inside Docker; please take a look at what is going on, thanks
### Expected Behavior
_No response_
### Steps To Reproduce
help
### Environment
```markdown
- OS:Red Hat 4.8.5-44
- Python:3.11
- Transformers:4.27.1
- PyTorch:2.0
- CUDA Support (`python -c "import torch; print(torch.cuda.is_available())"`) :False
```
### Anything else?
_No response_ | null | null | null | {} | [] | [
"ice_text.model"
] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "2",
"info_type": "Code"
} | {
"code": [],
"doc": [],
"test": [],
"config": [],
"asset": [
"ice_text.model"
]
} | null | |
THUDM | ChatGLM-6B | d11eb5213e3c17225b47bb806a120dd45a80b126 | https://github.com/THUDM/ChatGLM-6B/issues/63 | How to fix error like this: torch.cuda.OutOfMemoryError: CUDA out of memory ? | OS: ubuntu 20.04
The error message said we need to change value of max_split_size_mb, but I search source code and cannot find any file contains max_split_size_mb, would you please provide some guidance about how to fix?
```
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 8/8 [00:16<00:00, 2.09s/it]
Traceback (most recent call last):
File "/home/zhangclb/sandbox/ai_llm/ChatGLM-6B/cli_demo.py", line 6, in <module>
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda()
File "/home/zhangclb/.local/env39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 749, in cuda
return self._apply(lambda t: t.cuda(device))
File "/home/zhangclb/.local/env39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 641, in _apply
module._apply(fn)
File "/home/zhangclb/.local/env39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 641, in _apply
module._apply(fn)
File "/home/zhangclb/.local/env39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 641, in _apply
module._apply(fn)
[Previous line repeated 2 more times]
File "/home/zhangclb/.local/env39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 664, in _apply
param_applied = fn(param)
File "/home/zhangclb/.local/env39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 749, in <lambda>
return self._apply(lambda t: t.cuda(device))
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 128.00 MiB (GPU 0; 1.83 GiB total capacity; 1.27 GiB already allocated; 57.19 MiB free; 1.28 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
``` | null | null | null | {'base_commit': 'd11eb5213e3c17225b47bb806a120dd45a80b126', 'files': [{'path': 'cli_demo.py', 'Loc': {'(None, None, None)': {'mod': [6]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"cli_demo.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
THUDM | ChatGLM-6B | a9fc0184446fba7f4f27addf519fea0b371df83a | https://github.com/THUDM/ChatGLM-6B/issues/417 | [Help] <title> Oracle Linux 7.9: error running the int4 model, AttributeError: 'NoneType' object has no attribute 'int4WeightExtractionFloat' | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
/x/home/chatglm_env/lib/python3.7/site-packages/requests/__init__.py:104: RequestsDependencyWarning: urllib3 (1.26.14) or chardet (5.1.0)/charset_normalizer (2.0.12) doesn't match a supported version!
RequestsDependencyWarning)
Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
/x/home/chatglm_env/lib/python3.7/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: libc10_cuda.so: cannot open shared object file: No such file or directory
warn(f"Failed to load image Python extension: {e}")
Explicitly passing a `revision` is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
No compiled kernel found.
Compiling kernels : /x/home/.cache/huggingface/modules/transformers_modules/local/quantization_kernels_parallel.c
Compiling gcc -O3 -fPIC -pthread -fopenmp -std=c99 /x/home/.cache/huggingface/modules/transformers_modules/local/quantization_kernels_parallel.c -shared -o /x/home/.cache/huggingface/modules/transformers_modules/local/quantization_kernels_parallel.so
sh: gcc: command not found
Compile failed, using default cpu kernel code.
Compiling gcc -O3 -fPIC -std=c99 /x/home/.cache/huggingface/modules/transformers_modules/local/quantization_kernels.c -shared -o /x/home/.cache/huggingface/modules/transformers_modules/local/quantization_kernels.so
Kernels compiled : /x/home/.cache/huggingface/modules/transformers_modules/local/quantization_kernels.so
Cannot load cpu kernel, don't use quantized model on cpu.
Using quantization cache
Applying quantization to glm layers
Traceback (most recent call last):
File "chatglm-int4-demo.py", line 8, in <module>
response, history = model.chat(tokenizer, '你好', history=[])
File "/x/home/chatglm_env/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/x/home/.cache/huggingface/modules/transformers_modules/local/modeling_chatglm.py", line 1137, in chat
outputs = self.generate(**input_ids, **gen_kwargs)
File "/x/home/chatglm_env/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/x/home/chatglm_env/lib/python3.7/site-packages/transformers/generation/utils.py", line 1447, in generate
**model_kwargs,
File "/x/home/chatglm_env/lib/python3.7/site-packages/transformers/generation/utils.py", line 2447, in sample
output_hidden_states=output_hidden_states,
File "/x/home/chatglm_env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/x/home/.cache/huggingface/modules/transformers_modules/local/modeling_chatglm.py", line 1051, in forward
return_dict=return_dict,
File "/x/home/chatglm_env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/x/home/.cache/huggingface/modules/transformers_modules/local/modeling_chatglm.py", line 887, in forward
output_attentions=output_attentions
File "/x/home/chatglm_env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/x/home/.cache/huggingface/modules/transformers_modules/local/modeling_chatglm.py", line 588, in forward
output_attentions=output_attentions
File "/x/home/chatglm_env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/x/home/.cache/huggingface/modules/transformers_modules/local/modeling_chatglm.py", line 406, in forward
mixed_raw_layer = self.query_key_value(hidden_states)
File "/x/home/chatglm_env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/x/home/.cache/huggingface/modules/transformers_modules/local/quantization.py", line 334, in forward
output = W8A16LinearCPU.apply(input, self.weight, self.weight_scale, self.weight_bit_width, self.quantization_cache)
File "/x/home/.cache/huggingface/modules/transformers_modules/local/quantization.py", line 74, in forward
weight = extract_weight_to_float(quant_w, scale_w, weight_bit_width, quantization_cache=quantization_cache)
File "/x/home/.cache/huggingface/modules/transformers_modules/local/quantization.py", line 256, in extract_weight_to_float
func = cpu_kernels.int4WeightExtractionFloat
AttributeError: 'NoneType' object has no attribute 'int4WeightExtractionFloat'
### Expected Behavior
_No response_
### Steps To Reproduce
from transformers import AutoTokenizer, AutoModel
model_path = '/x/home/chatglm-6b-int4'
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModel.from_pretrained(model_path, trust_remote_code=True).float()
response, history = model.chat(tokenizer, '你好', history=[])
### Environment
```markdown
- OS: Oracle 7.9
- Python: 3.7.13
- Transformers: 2.6.1
- PyTorch: 1.13.1
- CUDA Support (`python -c "import torch; print(torch.cuda.is_available())"`) : no cuda, use cpu
```
### Anything else?
_No response_ | null | null | null | {} | [] | [] | [
{
"pro": "gcc"
}
] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "2",
    "info_type": "Library"
} | {
"code": [],
"doc": [],
"test": [],
"config": [],
"asset": [
"gcc"
]
} | null | |
THUDM | ChatGLM-6B | 0c6d1750ef6042338534c3c97002175fa1ae0499 | https://github.com/THUDM/ChatGLM-6B/issues/10 | question | Can fine-tuning be done with my own data? | null | null | null | null | {'base_commit': '0c6d1750ef6042338534c3c97002175fa1ae0499', 'files': [{'path': 'ptuning/', 'Loc': {}}, {'path': 'ptuning/', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "5",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [],
"doc": [],
"test": [],
"config": [],
"asset": [
"ptuning/"
]
} | null |
THUDM | ChatGLM-6B | c55ecd89a079b86620cc722f2e21a14e3718d0f3 | https://github.com/THUDM/ChatGLM-6B/issues/39 | 6GB GPU reports insufficient VRAM | GPU: 3060 Laptop, 6GB
Error: RuntimeError: CUDA out of memory. Tried to allocate 96.00 MiB (GPU 0; 6.00 GiB total capacity; 5.27 GiB already allocated; 0 bytes free; 5.28 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"web_demo.py",
"cli_demo.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
THUDM | ChatGLM-6B | 1d87dac585c8fafd708db16860b628928ec5a821 | https://github.com/THUDM/ChatGLM-6B/issues/532 | [BUG/Help] After updating in the last couple of days, chat fine-tuning no longer seems to work | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
A few days ago, chat fine-tuning still worked; back then the output files were a complete package rather than an incremental fine-tuning package.
After updating in the last couple of days, I am still using the project's own train_chat.sh, and the model is the int4 one.
The output files are indeed smaller now, but they no longer run. Specifically, running the following code
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("/content/ChatGLM-6B/ptuning/output/chattm/checkpoint-50", trust_remote_code=True)
model = AutoModel.from_pretrained("/content/ChatGLM-6B/ptuning/output/chattm/checkpoint-50", trust_remote_code=True).half().cuda()
model = model.eval()
response, history = model.chat(tokenizer, "你好", history=[])
print(response)
```
prints the following and then hangs for at least 5 minutes, with VRAM usage rising the whole time
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
The dtype of attention mask (torch.int64) is not bool
and finally reports the error
2023-04-11 13:51:41.577016: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
### Expected Behavior
_No response_
### Steps To Reproduce
-
### Environment
```markdown
colab pro 默认环境
```
### Anything else?
_No response_ | null | null | null | {'base_commit': '1d87dac585c8fafd708db16860b628928ec5a821', 'files': [{'path': 'ptuning/main.py', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"ptuning/main.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
THUDM | ChatGLM-6B | edb127326a2d5afd855484f12a38e0119151f826 | https://github.com/THUDM/ChatGLM-6B/issues/723 | On CentOS, how to configure two 12GB GPUs so both are used | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
On CentOS with two 12GB GPUs, both training and the web demo always use only GPU 0; how can they be configured to use both at the same time?
### Expected Behavior
_No response_
### Steps To Reproduce
Centos7
12G nvida *2
### Environment
```markdown
- OS:Centos7
- Python:3.8
- Transformers:4.26.1
- PyTorch: 1.12
- CUDA Support (`python -c "import torch; print(torch.cuda.is_available())"`) :True
```
### Anything else?
_No response_ | null | null | null | {'base_commit': 'edb127326a2d5afd855484f12a38e0119151f826', 'files': [{'path': 'ptuning/train.sh', 'Loc': {'(None, None, 4)': {'mod': [4]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
    "info_type": "Config\nOther script"
} | {
"code": [],
"doc": [],
"test": [],
"config": [],
"asset": [
"ptuning/train.sh"
]
} | null | |
THUDM | ChatGLM-6B | 801b1bb57690f0a99943f0a80c839b9ee120f3a7 | https://github.com/THUDM/ChatGLM-6B/issues/394 | [BUG/Help] ValueError: 150000 is not in list | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
0%| | 19/30000 [31:30<828:54:23, 99.53s/it]
0%| | 20/30000 [33:09<828:37:17, 99.50s/it]
0%| | 21/30000 [34:48<828:09:42, 99.45s/it]Traceback (most recent call last):
File "/root/projects/ChatGLM-6B/ptuning/main.py", line 393, in <module>
main()
File "/root/projects/ChatGLM-6B/ptuning/main.py", line 332, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/root/anaconda3/envs/torch10/lib/python3.9/site-packages/transformers/trainer.py", line 1633, in train
return inner_training_loop(
File "/root/anaconda3/envs/torch10/lib/python3.9/site-packages/transformers/trainer.py", line 1902, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/root/anaconda3/envs/torch10/lib/python3.9/site-packages/transformers/trainer.py", line 2645, in training_step
loss = self.compute_loss(model, inputs)
File "/root/anaconda3/envs/torch10/lib/python3.9/site-packages/transformers/trainer.py", line 2677, in compute_loss
outputs = model(**inputs)
File "/root/anaconda3/envs/torch10/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/root/.cache/huggingface/modules/transformers_modules/modeling_chatglm.py", line 1160, in forward
transformer_outputs = self.transformer(
File "/root/anaconda3/envs/torch10/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/root/.cache/huggingface/modules/transformers_modules/modeling_chatglm.py", line 928, in forward
mask_positions = [seq.tolist().index(mask_token) for seq in input_ids]
File "/root/.cache/huggingface/modules/transformers_modules/modeling_chatglm.py", line 928, in <listcomp>
mask_positions = [seq.tolist().index(mask_token) for seq in input_ids]
ValueError: 150000 is not in list
### Expected Behavior
_No response_
### Steps To Reproduce
PRE_SEQ_LEN=8
LR=1e-2
CUDA_VISIBLE_DEVICES=0 python3 main.py \
--do_train \
--train_file ../data/train.json \
--validation_file ../data/dev.json \
--prompt_column instruction \
--response_column output \
--overwrite_cache \
--model_name_or_path ~/projects/zero_nlp/simple_thu_chatglm6b/thuglm/ \
--output_dir output/adgen-chatglm-6b-pt-$PRE_SEQ_LEN-$LR \
--overwrite_output_dir \
--max_source_length 64 \
--max_target_length 64 \
--per_device_train_batch_size 100 \
--per_device_eval_batch_size 100 \
--gradient_accumulation_steps 16 \
--predict_with_generate \
--max_steps 30000 \
--logging_steps 100 \
--save_steps 100 \
--learning_rate $LR \
--pre_seq_len $PRE_SEQ_LEN \
# --quantization_bit 4
### Environment
```markdown
- OS: centos8
- Python: 3.9
- Transformers: 4.27.1
- PyTorch:2.0.0
- CUDA Support (`python -c "import torch; print(torch.cuda.is_available())"`) : True
```
### Anything else?
_No response_ | null | null | null | {} | [] | [
"ice_text.model",
"modeling_chatglm.py"
] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0\n2",
"info_type": "Code"
} | {
"code": [
"modeling_chatglm.py"
],
"doc": [],
"test": [],
"config": [],
"asset": [
"ice_text.model"
]
} | null | |
THUDM | ChatGLM-6B | 1047e446e5387aa06c856c95800f67beab8b80d4 | https://github.com/THUDM/ChatGLM-6B/issues/224 | [BUG/Help] ImportError: cannot import name 'GENERATION_CONFIG_NAME' from 'transformers.utils' | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
>>> model = AutoModel.from_pretrained("THUDM/chatglm-6b-int4",trust_remote_code=True).float()
Explicitly passing a `revision` is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\mina_\Anaconda3\envs\ChatGLM-6B\lib\site-packages\transformers\models\auto\auto_factory.py", line 456, in from_pretrained
logger.warning(
File "C:\Users\mina_\Anaconda3\envs\ChatGLM-6B\lib\site-packages\transformers\dynamic_module_utils.py", line 374, in get_class_from_dynamic_module
File "C:\Users\mina_\Anaconda3\envs\ChatGLM-6B\lib\site-packages\transformers\dynamic_module_utils.py", line 147, in get_class_in_module
def get_class_in_module(class_name, module_path):
File "C:\Users\mina_\Anaconda3\envs\ChatGLM-6B\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "C:\Users\mina_/.cache\huggingface\modules\transformers_modules\THUDM\chatglm-6b-int4\dac03c3ac833dab2845a569a9b7f6ac4e8c5dc9b\modeling_chatglm.py", line 30, in <module>
from transformers.generation.utils import LogitsProcessorList, StoppingCriteriaList, GenerationConfig
File "C:\Users\mina_\Anaconda3\envs\ChatGLM-6B\lib\site-packages\transformers\generation\utils.py", line 39, in <module>
from .configuration_utils import GenerationConfig
File "C:\Users\mina_\Anaconda3\envs\ChatGLM-6B\lib\site-packages\transformers\generation\configuration_utils.py", line 24, in <module>
from ..utils import (
ImportError: cannot import name 'GENERATION_CONFIG_NAME' from 'transformers.utils' (C:\Users\mina_\Anaconda3\envs\ChatGLM-6B\lib\site-packages\transformers\utils\__init__.py)
### Expected Behavior
_No response_
### Steps To Reproduce
1. `conda activate chatglm-6b`
2. `from transformers import AutoTokenizer, AutoModel`
3. `tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)`
4. `model = AutoModel.from_pretrained("THUDM/chatglm-6b-int4",trust_remote_code=True).float()`
5. See this issue.
### Environment
```markdown
- OS: Windows 10
- Python: 3.7.5
- Transformers:
- PyTorch:
- CUDA Support (`python -c "import torch; print(torch.cuda.is_available())"`) : False
```
### Anything else?
_No response_ | null | null | null | {'base_commit': '1047e446e5387aa06c856c95800f67beab8b80d4', 'files': [{'path': 'requirements.txt', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [],
"doc": [],
"test": [],
"config": [
"requirements.txt"
],
"asset": []
} | null | |
THUDM | ChatGLM-6B | b65142b5e54e52b27c1c1269e1b4abd83efcce45 | https://github.com/THUDM/ChatGLM-6B/issues/422 | [BUG/Help] <title>KeyError: 'lm_head.weight' | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
Error: KeyError: 'lm_head.weight'
### Expected Behavior
Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a `revision` is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Loading checkpoint shards: 0%| | 0/8 [00:00<?, ?it/s]
Loading checkpoint shards: 0%| | 0/8 [00:00<?, ?it/s]
Traceback (most recent call last):
File "C:\Users\Administrator\Downloads\ChatGLM-6B-main\cli_demo.py", line 7, in <module>
model = AutoModel.from_pretrained(r"C:\Users\Administrator\Downloads\ChatGLM-6B-main\model",trust_remote_code=True,ignore_mismatched_sizes=True).half().quantize(4).cuda()
File "C:\Program Files\Python310\lib\site-packages\transformers\models\auto\auto_factory.py", line 466, in from_pretrained
return model_class.from_pretrained(
File "C:\Program Files\Python310\lib\site-packages\transformers\modeling_utils.py", line 2646, in from_pretrained
) = cls._load_pretrained_model(
File "C:\Program Files\Python310\lib\site-packages\transformers\modeling_utils.py", line 2959, in _load_pretrained_model
mismatched_keys += _find_mismatched_keys(
File "C:\Program Files\Python310\lib\site-packages\transformers\modeling_utils.py", line 2882, in _find_mismatched_keys
and state_dict[checkpoint_key].shape != model_state_dict[model_key].shape
KeyError: 'lm_head.weight'
### Steps To Reproduce
Error: KeyError: 'lm_head.weight'
### Environment
```markdown
- OS:windows 10
- Python:3.10
- Transformers:4.27.1
- PyTorch:cu118
- CUDA Support True
```
### Anything else?
_No response_ | null | null | null | {} | [] | [
"pytorch_model-00001-of-00008.bin",
"pytorch_model-00008-of-00008.bin"
] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "2",
    "info_type": "Models/data"
} | {
"code": [],
"doc": [],
"test": [],
"config": [],
"asset": [
"pytorch_model-00001-of-00008.bin",
"pytorch_model-00008-of-00008.bin"
]
} | null |