organization
string
repo_name
string
base_commit
string
iss_html_url
string
iss_label
string
title
string
body
string
code
null
pr_html_url
string
commit_html_url
string
file_loc
string
own_code_loc
list
ass_file_loc
list
other_rep_loc
list
analysis
dict
loctype
dict
iss_has_pr
int64
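The column list above can be modeled in Python, for example as a dataclass. This is a sketch, not part of the dataset: the field names come from the header above, and the types follow the dtype hints (`string`, `list`, `dict`, `int64`); columns whose dtype shows as `null` or that are null on some rows are treated as `Optional`.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class IssueRecord:
    """One row of the table: a GitHub issue plus its fix localization."""
    organization: str
    repo_name: str
    base_commit: str
    iss_html_url: str
    iss_label: str
    title: str
    body: str
    code: Optional[str]             # shown as null on every row above
    pr_html_url: Optional[str]      # null when the fix landed as a direct commit
    commit_html_url: Optional[str]  # set only when no PR resolved the issue
    file_loc: str                   # stringified dict of per-file edit locations
    own_code_loc: list = field(default_factory=list)
    ass_file_loc: list = field(default_factory=list)
    other_rep_loc: list = field(default_factory=list)
    analysis: dict = field(default_factory=dict)
    loctype: dict = field(default_factory=dict)
    iss_has_pr: Optional[int] = None  # 1 when a PR resolved the issue, else null

# Example built from the first record below (body/file_loc elided):
rec = IssueRecord(
    organization="scrapy", repo_name="scrapy",
    base_commit="fd55f62207bbbb18d7758c8e2ef46fe9115eb2c5",
    iss_html_url="https://github.com/scrapy/scrapy/issues/5400",
    iss_label="bug CI",
    title="Tests broken with Twisted 22.1.0",
    body="...", code=None,
    pr_html_url="https://github.com/scrapy/scrapy/pull/5405",
    commit_html_url=None, file_loc="{...}",
)
```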
scrapy
scrapy
fd55f62207bbbb18d7758c8e2ef46fe9115eb2c5
https://github.com/scrapy/scrapy/issues/5400
bug CI
Tests broken with Twisted 22.1.0
`ImportError: cannot import name 'PayloadResource' from 'twisted.web.test.test_webclient'` `ImportError: cannot import name 'ForeverTakingResource' from 'twisted.web.test.test_webclient'`
null
https://github.com/scrapy/scrapy/pull/5405
null
{'base_commit': 'fd55f62207bbbb18d7758c8e2ef46fe9115eb2c5', 'files': [{'path': 'pytest.ini', 'status': 'modified', 'Loc': {'(None, None, 24)': {'mod': [24, 25]}}}, {'path': 'tests/mockserver.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [17, 20]}, "('LeafResource', None, 38)": {'mod': [38]}, "('Root'...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": "" }
{ "code": [ "tests/mockserver.py" ], "doc": [], "test": [ "tests/test_webclient.py", "tests/test_downloader_handlers.py" ], "config": [ "pytest.ini", "tox.ini" ], "asset": [] }
null
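The `file_loc` values in these records look like stringified Python dicts whose `Loc` maps are keyed by `"(class, function, lineno)"` tuples rendered as strings. Assuming the values are well-formed literals, they can be recovered with `ast.literal_eval` — a sketch using the `requests/sessions.py` record from this dump:

```python
import ast

# Raw file_loc value copied from the psf/requests record below.
raw = ("{'base_commit': 'be62645dd56580dd7576032b348cf79d880851d8', "
       "'files': [{'path': 'requests/sessions.py', 'status': 'modified', "
       "'Loc': {\"('Session', None, 166)\": {'add': [178]}}}]}")

loc = ast.literal_eval(raw)
for f in loc["files"]:
    # Each Loc key is itself a tuple literal: (class, function, lineno).
    scopes = {ast.literal_eval(k): v for k, v in f["Loc"].items()}
    print(f["path"], f["status"], scopes)
```

Truncated `file_loc` strings (the rows ending in `...`) will not parse until the full value is restored.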
ultralytics
yolov5
04081f810270712ba3a69577c47e5dcfa850fa90
https://github.com/ultralytics/yolov5/issues/1355
bug
The exported label txt seems have problem
Hi, @glenn-jocher i manage to use `python detect.py --save-txt` to semi-auto label images, but when i set `Open Dir` and `Change Save Dir` in [labelImg](https://github.com/tzutalin/labelImg/releases/tag/v1.8.1),the labelImg can not display the exported bbox, and its command line window appears error: ``` Traceback (...
null
https://github.com/ultralytics/yolov5/pull/1377
null
{'base_commit': '04081f810270712ba3a69577c47e5dcfa850fa90', 'files': [{'path': '.github/workflows/ci-testing.yml', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [69, 72]}}}, {'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [99]}}}, {'path': 'detect.py', 'status': 'modified...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": "" }
{ "code": [ "tutorial.ipynb", "utils/general.py", "detect.py", "train.py" ], "doc": [ "README.md" ], "test": [ "test.py" ], "config": [ ".github/workflows/ci-testing.yml" ], "asset": [] }
1
psf
requests
be62645dd56580dd7576032b348cf79d880851d8
https://github.com/psf/requests/issues/1088
Feature Request
Session pickling support is broken and tests for it are removed
The commit 42b029552190f6639642d0f62d27abcd1ceed51e removes the `__attrs__` attribute of the `Session` class, which is used in the pickle protocol's `__getstate__` method. The tests that are testing this functionality (functions `test_session_pickling` and `test_unpickled_session_requests` in the once present `tests/t...
null
https://github.com/psf/requests/pull/1223
null
{'base_commit': 'be62645dd56580dd7576032b348cf79d880851d8', 'files': [{'path': 'requests/sessions.py', 'status': 'modified', 'Loc': {"('Session', None, 166)": {'add': [178]}}}]}
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": "" }
{ "code": [ "requests/sessions.py" ], "doc": [], "test": [], "config": [], "asset": [] }
1
Significant-Gravitas
AutoGPT
34261a15835390c5c464cef88c4a42b52a88b739
https://github.com/Significant-Gravitas/AutoGPT/issues/987
Massage about Pinecone initializing
### Duplicates - [X] I have searched the existing issues ### Summary 💡 Add a message like: "Connecting Pinecone. This may take some time..." ### Examples 🌈 _No response_ ### Motivation 🔦 At this point, if the Pinecone index setup takes a noticeable amount of time, the console just stops. It is necessary to no...
null
https://github.com/Significant-Gravitas/AutoGPT/pull/1194
null
{'base_commit': '34261a15835390c5c464cef88c4a42b52a88b739', 'files': [{'path': 'autogpt/memory/pinecone.py', 'status': 'modified', 'Loc': {"('PineconeMemory', '__init__', 10)": {'add': [40]}}}]}
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": "" }
{ "code": [ "autogpt/memory/pinecone.py" ], "doc": [], "test": [], "config": [], "asset": [] }
1
scikit-learn
scikit-learn
6f7ae911f18fda59669309582706f1aa1f36374d
https://github.com/scikit-learn/scikit-learn/issues/19489
Bug Regression module:feature_extraction
'feature_name' referenced before assignment
<!-- Before submitting a bug, please make sure the issue hasn't been already addressed by searching through the past issues. --> #### Describe the bug When I run some preprocessing on my data the line triggering the error is: ``` C:\local_tools\Anaconda3\envs\mother_env\lib\site-packages\sklearn\feature_ex...
null
https://github.com/scikit-learn/scikit-learn/pull/19520
null
{'base_commit': '6f7ae911f18fda59669309582706f1aa1f36374d', 'files': [{'path': 'doc/whats_new/v1.0.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [346], 'mod': [343]}}}, {'path': 'sklearn/feature_extraction/_dict_vectorizer.py', 'status': 'modified', 'Loc': {"('DictVectorizer', '_transform', 190)": {...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": "" }
{ "code": [ "sklearn/feature_extraction/_dict_vectorizer.py" ], "doc": [ "doc/whats_new/v1.0.rst" ], "test": [ "sklearn/feature_extraction/tests/test_dict_vectorizer.py" ], "config": [], "asset": [] }
1
scikit-learn
scikit-learn
0fb9a50033574e36a8bd635d8e5c0a793428877c
https://github.com/scikit-learn/scikit-learn/issues/8996
Easy Sprint
Deprecate LSHForest
LSHForest should be deprecated and scheduled for removal in 0.21. It should also warn about having bad performance. cc @ogrisel
null
https://github.com/scikit-learn/scikit-learn/pull/9078
null
{'base_commit': '0fb9a50033574e36a8bd635d8e5c0a793428877c', 'files': [{'path': 'benchmarks/bench_plot_approximate_neighbors.py', 'status': 'removed', 'Loc': {}}, {'path': 'doc/modules/classes.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [1062]}}}, {'path': 'doc/modules/neighbors.rst', 'status': 'mo...
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": "" }
{ "code": [ "examples/neighbors/plot_approximate_nearest_neighbors_scalability.py", "benchmarks/bench_plot_approximate_neighbors.py", "examples/neighbors/plot_approximate_nearest_neighbors_hyperparameters.py", "sklearn/neighbors/approximate.py" ], "doc": [ "doc/modules/classes.rst", "doc/m...
1
deepfakes
faceswap
9438672b1cf80602fc93536670d9601d655377f5
https://github.com/deepfakes/faceswap/issues/150
code to integrate
Multi-GPU training
I've read reports of people succesfully training on multiple GPU'S using the following code: ``` from keras.utils import multi_gpu_model autoencoder_A = multi_gpu_model( autoencoder_A ,2) autoencoder_B = multi_gpu_model( autoencoder_B ,2) ``` https://keras.io/utils/#multi_gpu_model I could add supp...
null
https://github.com/deepfakes/faceswap/pull/241
null
{'base_commit': '9438672b1cf80602fc93536670d9601d655377f5', 'files': [{'path': 'scripts/train.py', 'status': 'modified', 'Loc': {"('TrainingProcessor', 'parse_arguments', 25)": {'mod': [75]}}}]}
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": "" }
{ "code": [ "scripts/train.py" ], "doc": [], "test": [], "config": [], "asset": [] }
1
localstack
localstack
b8290ff8013366de16f7dd2ed14d74b56d1fb03b
https://github.com/localstack/localstack/issues/10860
Internal Refactoring: Towards a Multi-Distribution Setup
Over the next few weeks and months we’re refactoring the code in this repository to move toward a **multi-distribution setup**. For now this only affects active contributors as well as any developers that depend on code existing in the published container under the path `/opt/code/localstack/localstack`. Most users ...
null
https://github.com/localstack/localstack/pull/10800
null
{'base_commit': 'b8290ff8013366de16f7dd2ed14d74b56d1fb03b', 'files': [{'path': '.circleci/config.yml', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [137, 405, 468, 469, 559, 560, 561, 562]}}}, {'path': '.dockerignore', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [4]}}}, {'path': '.github/...
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": "" }
{ "code": [ ".circleci/config.yml" ], "doc": [ ".dockerignore" ], "test": [], "config": [ ".github/workflows/tests-pro-integration.yml", "Makefile", "Dockerfile", "Dockerfile.s3", ".github/workflows/tests-s3-image.yml", ".github/workflows/asf-updates.yml" ], "asset": [] }
1
pallets
flask
6f2fdc5ac4ad869a21c4c0281d7fa1eb8aa5a689
https://github.com/pallets/flask/issues/3628
Returning Response and headers causes duplicate headers
<!-- **This issue tracker is a tool to address bugs in Flask itself. Please use the Pallets Discord or Stack Overflow for general questions about using Flask or issues not related to Flask.** --> <!-- If you'd like to report a bug in Flask, fill out the template below. Provide any extra information that may be us...
null
https://github.com/pallets/flask/pull/3684
null
{'base_commit': '6f2fdc5ac4ad869a21c4c0281d7fa1eb8aa5a689', 'files': [{'path': 'CHANGES.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [30, 31, 32]}}}, {'path': 'src/flask/app.py', 'status': 'modified', 'Loc': {"('Flask', 'make_response', 1935)": {'mod': [2048]}}}, {'path': 'tests/test_basic.py', 'st...
[]
[]
[]
{ "iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": "" }
{ "code": [ "src/flask/app.py" ], "doc": [ "CHANGES.rst" ], "test": [ "tests/test_basic.py" ], "config": [], "asset": [] }
1
lllyasviel
Fooocus
0a87da7dc1998e0073ba824c7f223cd331858b24
https://github.com/lllyasviel/Fooocus/issues/3502
bug can't reproduce feedback pending
[Bug]: Unsupported image type in input when using input image
### Checklist - [ ] The issue has not been resolved by following the [troubleshooting guide](https://github.com/lllyasviel/Fooocus/blob/main/troubleshoot.md) - [ ] The issue exists on a clean installation of Fooocus - [ ] The issue exists in the current version of Fooocus - [ ] The issue has not been reported before r...
null
https://github.com/lllyasviel/Fooocus/pull/3506
null
{'base_commit': '0a87da7dc1998e0073ba824c7f223cd331858b24', 'files': [{'path': 'launch.py', 'status': 'modified', 'Loc': {"(None, 'download_models', 104)": {'add': [104]}, '(None, None, None)': {'mod': [24]}}}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": "" }
{ "code": [ "launch.py" ], "doc": [], "test": [], "config": [], "asset": [] }
1
Z4nzu
hackingtool
5c69e5cb13127601aaba6ee04e522ead84b74f6a
https://github.com/Z4nzu/hackingtool/issues/181
help me
when i run the install.sh it gives me this error [✘] Installation Failed !!! [✘] [✔] Loading ... Hit:1 http://kali.download/kali kali-rolling InRelease Reading package lists... Done E: Could not open lock file /var/lib/dpkg/lock-frontend - open (13: Permission denied) E: Unable to acquire the dpkg frontend lock (/...
null
https://github.com/Z4nzu/hackingtool/pull/348
null
{'base_commit': '5c69e5cb13127601aaba6ee04e522ead84b74f6a', 'files': [{'path': 'install.sh', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [7, 8, 9, 10, 11, 12, 13, 14, 15, 17, 19, 33, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 65, 66, 67, 68, 6...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": "" }
{ "code": [], "doc": [], "test": [], "config": [], "asset": [ "update.sh", "install.sh" ] }
1
binary-husky
gpt_academic
6b5bdbe98a882a726ec9710e5e94baa94d470ad6
https://github.com/binary-husky/gpt_academic/issues/286
A humble question: how can one parse a front-end project?
A humble question: how can one parse a front-end project?
null
https://github.com/binary-husky/gpt_academic/pull/290
null
{'base_commit': '6b5bdbe98a882a726ec9710e5e94baa94d470ad6', 'files': [{'path': 'functional_crazy.py', 'status': 'modified', 'Loc': {"(None, 'get_crazy_functionals', 3)": {'mod': [46]}}}]}
[]
[]
[]
{ "iss_type": "3", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": "" }
{ "code": [ "functional_crazy.py" ], "doc": [], "test": [], "config": [], "asset": [] }
1
scikit-learn
scikit-learn
2b79665b90bd54fa59701090d5f608a1fc4dd33a
https://github.com/scikit-learn/scikit-learn/issues/18408
Bug module:ensemble
Data type mismatch problem when calling HistGradientBoostingClassifier.predict()
<!-- Before submitting a bug, please make sure the issue hasn't been already addressed by searching through the past issues. --> #### Describe the bug It looks like HistGradientBoostingClassifier has problems on handling datasets with different data types. It works fine when X is `np.float`. However, when X is o...
null
https://github.com/scikit-learn/scikit-learn/pull/18410
null
{'base_commit': '2b79665b90bd54fa59701090d5f608a1fc4dd33a', 'files': [{'path': 'doc/whats_new/v0.24.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [213]}}}, {'path': 'sklearn/ensemble/_hist_gradient_boosting/gradient_boosting.py', 'status': 'modified', 'Loc': {"('BaseHistGradientBoosting', '_raw_pred...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": "" }
{ "code": [ "sklearn/ensemble/_hist_gradient_boosting/gradient_boosting.py" ], "doc": [ "doc/whats_new/v0.24.rst" ], "test": [ "sklearn/ensemble/_hist_gradient_boosting/tests/test_gradient_boosting.py" ], "config": [], "asset": [] }
1
nvbn
thefuck
fee874cddc1af36344e1cdaedd6d80eb6aea8341
https://github.com/nvbn/thefuck/issues/449
Fuck alias for fish
`~> fuck` says: > ``` > Seems like fuck alias isn't configured! > Please put eval thefuck --alias in your ~/.config/fish/config.fish. > More details - https://github.com/nvbn/thefuck#manual-installation > ``` but https://github.com/nvbn/thefuck/wiki/Shell-aliases says: > Add this function to config.fish: > > ``` fi...
null
https://github.com/nvbn/thefuck/pull/450
null
{'base_commit': 'fee874cddc1af36344e1cdaedd6d80eb6aea8341', 'files': [{'path': 'thefuck/shells.py', 'status': 'modified', 'Loc': {"('Fish', 'how_to_configure', 201)": {'mod': [202]}}}]}
[]
[]
[]
{ "iss_type": "3", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": "" }
{ "code": [ "thefuck/shells.py" ], "doc": [], "test": [], "config": [], "asset": [] }
1
python
cpython
9f1814723f5596115a794a8bec0d053f25dbf32f
https://github.com/python/cpython/issues/96828
type-feature topic-SSL
Add an `ssl.OP_ENABLE_KTLS` option for enabling the use of the kernel TLS
# Feature or enhancement A new `ssl.OP_ENABLE_KTLS` option for enabling the use of the kernel TLS. # Pitch Kernel Transport Layer Security (kTLS) can improve performance of programs using TLS by reducing the number of switches between the user space and the kernel space. kTLS allows using the `sendfile` system...
null
https://github.com/python/cpython/pull/96830
null
{'base_commit': '9f1814723f5596115a794a8bec0d053f25dbf32f', 'files': [{'path': 'Doc/library/ssl.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [841]}}}, {'path': 'Modules/_ssl.c', 'status': 'modified', 'Loc': {"(None, 'sslmodule_init_constants', 5725)": {'add': [5883]}}}]}
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": "" }
{ "code": [ "Modules/_ssl.c" ], "doc": [ "Doc/library/ssl.rst" ], "test": [], "config": [], "asset": [] }
1
huggingface
transformers
fa876aee2adf525b597495c10ad9c96896953dbd
https://github.com/huggingface/transformers/issues/9620
SQuAD 2.0 metric not supported
Hello. I'm trying to run the official `run_qa.py` code for SQuAD 2.0. You have an open TODO here that is causing a bug: https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_qa.py#L436 I would like to know what is the status of this TODO, and if it is going to be updated, or is ...
null
https://github.com/huggingface/transformers/pull/9677
null
{'base_commit': 'fa876aee2adf525b597495c10ad9c96896953dbd', 'files': [{'path': 'examples/question-answering/requirements.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [1]}}}, {'path': 'examples/question-answering/run_qa.py', 'status': 'modified', 'Loc': {"(None, 'main', 159)": {'mod': [436, 437, 438...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": "" }
{ "code": [ "examples/question-answering/run_qa.py", "examples/question-answering/squad_v2_local/evaluate.py", "examples/question-answering/squad_v2_local/squad_v2_local.py", "examples/question-answering/run_qa_beam_search.py" ], "doc": [], "test": [], "config": [ "examples/question-answer...
1
sherlock-project
sherlock
21fe11db51edcca881665694c4cc2a3fe6f1af54
https://github.com/sherlock-project/sherlock/issues/113
help wanted
Blackplanet false positive
Blackplanet is giving false positives. (request from germany) @Czechball you added this in #81 ; maybe you are able to fix it?
null
https://github.com/sherlock-project/sherlock/pull/169
null
{'base_commit': '21fe11db51edcca881665694c4cc2a3fe6f1af54', 'files': [{'path': 'data.json', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, ...
[]
[]
[]
{ "iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": "" }
{ "code": [ "tests/base.py", "tests/all.py", "data.json" ], "doc": [ "sites.md" ], "test": [], "config": [], "asset": [] }
1
geekan
MetaGPT
f201b2f5f32c2d48eab6632bf103e9b3a92fc999
https://github.com/geekan/MetaGPT/issues/1239
RAG faiss AssertionError
**Bug description** <!-- Clearly and directly describe the current bug --> execute this demo ```Python import asyncio from metagpt.rag.engines import SimpleEngine from metagpt.rag.schema import FAISSRetrieverConfig from metagpt.const import EXAMPLE_DATA_PATH DOC_PATH = EXAMPLE_DATA_PATH / "rag/travel.txt" ...
null
https://github.com/geekan/MetaGPT/pull/1241
null
{'base_commit': 'f201b2f5f32c2d48eab6632bf103e9b3a92fc999', 'files': [{'path': 'config/config2.example.yaml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [20]}}}, {'path': 'metagpt/configs/embedding_config.py', 'status': 'modified', 'Loc': {"('EmbeddingConfig', None, 16)": {'add': [22, 27, 34, 43]}}}, {...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": "" }
{ "code": [ "metagpt/rag/schema.py", "metagpt/configs/embedding_config.py" ], "doc": [], "test": [], "config": [ "config/config2.example.yaml" ], "asset": [] }
1
keras-team
keras
8f5592bcb61ff48c96560c8923e482db1076b54a
https://github.com/keras-team/keras/issues/20324
type:support keras-team-review-pending
Reason for the recently added shape restriction in MultiHeadAttention
Hello, Wondering why is there a restriction on the input shape of `query` and `value` to have a matching final dimension? This blocks having cross-attention to a source that has a different shape than query, unless adding an extra projection layer. Given that all input tensors (`query`, `key`, `value`) are immedi...
null
https://github.com/keras-team/keras/pull/20340
null
{'base_commit': '8f5592bcb61ff48c96560c8923e482db1076b54a', 'files': [{'path': 'keras/src/layers/attention/multi_head_attention.py', 'status': 'modified', 'Loc': {"('MultiHeadAttention', 'build', 199)": {'mod': [214, 215, 216, 217, 218, 219]}, "('MultiHeadAttention', 'compute_output_shape', 598)": {'mod': [607, 608, 60...
[]
[]
[]
{ "iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": "" }
{ "code": [ "keras/src/layers/attention/multi_head_attention_test.py", "keras/src/layers/attention/multi_head_attention.py" ], "doc": [], "test": [], "config": [], "asset": [] }
1
ansible
ansible
2897cf43cea3d61b9673ce14ba796a663d99f19d
https://github.com/ansible/ansible/issues/56571
python3 support:community bug has_pr affects_2.7 collection collection:community.general needs_collection_redirect bot_closed
"machinectl: invalid option -- 'c'" when using become_method: machinectl
<!--- Verify first that your issue is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY <!--- Explain the problem briefly below --> `become_method: machi...
null
https://github.com/ansible/ansible/pull/56572
null
{'base_commit': '2897cf43cea3d61b9673ce14ba796a663d99f19d', 'files': [{'path': 'lib/ansible/plugins/become/machinectl.py', 'status': 'modified', 'Loc': {"('BecomeModule', 'build_become_command', 78)": {'mod': [87]}}}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": "" }
{ "code": [ "lib/ansible/plugins/become/machinectl.py" ], "doc": [], "test": [], "config": [], "asset": [] }
1
pandas-dev
pandas
b5a5268dabb2a4dea1c3c543a1ddff501b87a447
https://github.com/pandas-dev/pandas/issues/16870
Docs Groupby good first issue
(DOC) A `string` passed to `groupby` is hard to understand based on current doc
#### Code Sample, a copy-pastable example if possible From [Here](pandas/doc/source/groupby.rst) ```rst For DataFrame objects, a string indicating a column to be used to group. Of course df.groupby('A') is just syntactic sugar for df.groupby(df['A']), but it makes life simpler For DataFrame objects, a string in...
null
https://github.com/pandas-dev/pandas/pull/36238
null
{'base_commit': 'b5a5268dabb2a4dea1c3c543a1ddff501b87a447', 'files': [{'path': 'doc/source/user_guide/groupby.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [90, 91, 92, 93, 94]}}}]}
[]
[]
[]
{ "iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": "" }
{ "code": [], "doc": [ "doc/source/user_guide/groupby.rst" ], "test": [], "config": [], "asset": [] }
1
pandas-dev
pandas
48d0460ab9acbee223bae1be699344f8fd232224
https://github.com/pandas-dev/pandas/issues/12401
Indexing API Design Deprecate Needs Discussion
DEPR: filter & select
do we need label selectors? we should for sure just have a single method for this. maybe call it `query_labels`? to be consistent with `.query` as the workhorse for data selection. - [x] ``.select`` (#17633) - [ ] ``.filter`` xref #6599
null
null
https://github.com/pandas-dev/pandas/commit/48d0460ab9acbee223bae1be699344f8fd232224
{'base_commit': '48d0460ab9acbee223bae1be699344f8fd232224', 'files': [{'path': 'doc/source/whatsnew/v0.21.0.txt', 'status': 'modified', 'Loc': {'(None, None, 669)': {'add': [669]}}}, {'path': 'pandas/core/common.py', 'status': 'modified', 'Loc': {"(None, '_apply_if_callable', 444)": {'add': [447]}}}, {'path': 'pandas/c...
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "", "info_type": "" }
{ "code": [ "pandas/core/common.py", "pandas/core/generic.py", "pandas/core/indexing.py" ], "doc": [ "doc/source/whatsnew/v0.21.0.txt" ], "test": [ "pandas/tests/series/test_indexing.py", "pandas/tests/test_multilevel.py", "pandas/tests/frame/test_mutate_columns.py", "pandas/te...
null
deepfakes
faceswap
9dc151e5b58abb5f8862d2aa84124ed86156e0b8
https://github.com/deepfakes/faceswap/issues/355
when using GUI recent version, A converting error has occurred.
I am testing the gui version downloaded today. But when converting, the following error has occurred. Can anyone tell me what I am doing wrong or how to solve it? (1) error message "Failed to convert image: ...\faceA_source_gui\out1.png. Reason: argument of type 'NoneType' is not iterable" (1) train image : ...
null
https://github.com/deepfakes/faceswap/pull/352
null
{'base_commit': '9dc151e5b58abb5f8862d2aa84124ed86156e0b8', 'files': [{'path': 'faceswap.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [30]}}}, {'path': 'requirements-gpu-python35-cuda8.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [10]}}}, {'path': 'requirements-gpu-python36-cuda...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": "" }
{ "code": [ "scripts/gui.py", "scripts/train.py", "faceswap.py", "tools.py", "tools/sort.py", "scripts/convert.py" ], "doc": [], "test": [], "config": [ "requirements-gpu-python36-cuda9.txt", "requirements-gpu-python35-cuda8.txt", "requirements-python35.txt", "requireme...
1
xtekky
gpt4free
b9478049b3e8644be2de93015476b9111126d683
https://github.com/xtekky/gpt4free/issues/660
bug
gpt4free useless: IndexError: list index out of range
**Bug description** Telegram bot using gpt4free not working main.py: ```import telebot from gpt4free import usesless bot = telebot.TeleBot('my_token') @bot.message_handler(commands=['start']) def send_welcome(message): bot.reply_to(message, "ChatGPT unlimited and free but without memory") @bot.message_handler() ...
null
https://github.com/xtekky/gpt4free/pull/664
null
{'base_commit': 'b9478049b3e8644be2de93015476b9111126d683', 'files': [{'path': 'gpt4free/usesless/__init__.py', 'status': 'modified', 'Loc': {"('Completion', '__response_to_json', 148)": {'mod': [151, 152, 153]}}}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": "" }
{ "code": [ "gpt4free/usesless/__init__.py" ], "doc": [], "test": [], "config": [], "asset": [] }
1
keras-team
keras
aab55e649c34f8a24f00ee63922d049d3417c979
https://github.com/keras-team/keras/issues/8304
HDF5 Normalizer not working.
``` def preprocess_train(array): """ Given a batch of numpy arrays, it outputs a batch of numpy of arrays with all preprocessing size : (w, h) """ num1 = np.random.randint(0, 128 - 112) num2 = np.random.randint(0, 171 - 112) crop = array[ :, num1:num1+112, num2:num2+112, :] crop = ...
null
https://github.com/keras-team/keras/pull/10749
null
{'base_commit': 'aab55e649c34f8a24f00ee63922d049d3417c979', 'files': [{'path': 'keras/utils/io_utils.py', 'status': 'modified', 'Loc': {"('HDF5Matrix', '__init__', 44)": {'add': [60]}, "('HDF5Matrix', 'shape', 98)": {'mod': [104]}, "('HDF5Matrix', 'dtype', 107)": {'mod': [113]}}}, {'path': 'tests/keras/utils/io_utils_t...
[]
[]
[]
{ "iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": "" }
{ "code": [ "keras/utils/io_utils.py", "tests/keras/utils/io_utils_test.py" ], "doc": [], "test": [], "config": [], "asset": [] }
1
home-assistant
core
8c8feb95a9c9048d655bc1eb263f6bc6ee61ee74
https://github.com/home-assistant/core/issues/4
Instructions don't result in homeassistant listening on any port
Neither of these result in homeassistant listening on port `8123` ``` bash python3 -m homeassistant python3 -m homeassistant --config=config ``` In fact, it isn't seeming to be listening on _any_ port. ``` bash (ve)[jeff@omniscience home-assistant] (master)$ ./build_frontend (ve)[jeff@omniscience home-assistant] (ma...
null
https://github.com/home-assistant/core/pull/35811
null
{'base_commit': '8c8feb95a9c9048d655bc1eb263f6bc6ee61ee74', 'files': [{'path': 'homeassistant/components/google_assistant/helpers.py', 'status': 'modified', 'Loc': {"('GoogleEntity', 'sync_serialize', 393)": {'add': [428]}}}, {'path': 'tests/components/google_assistant/test_helpers.py', 'status': 'modified', 'Loc': {"(...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": "" }
{ "code": [ "homeassistant/components/google_assistant/helpers.py" ], "doc": [], "test": [ "tests/components/google_assistant/test_helpers.py" ], "config": [], "asset": [] }
1
psf
requests
2203c3bccd5e4888a16d73247d540fd6e359d29c
https://github.com/psf/requests/issues/1
Cookie support?
An feature request (not found in documentation). Does this support cookies? Usecase: I can integrate this module inside an existings framework. This framework generate for me the authentication/session cookie, so to perform request using requests there I need to add the same auth cookie already generated.
null
null
https://github.com/psf/requests/commit/2203c3bccd5e4888a16d73247d540fd6e359d29c
{'base_commit': '2203c3bccd5e4888a16d73247d540fd6e359d29c', 'files': [{'path': 'requests/core.py', 'status': 'modified', 'Loc': {"('Request', '__init__', 68)": {'add': [76]}, "('Request', None, 61)": {'add': [101]}, "('Request', '_get_opener', 101)": {'mod': [108, 109, 112, 113, 114]}}}]}
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "No matching PR was found, and the PR given on this row does not resolve the issue either; the issue was actually resolved by a commit", "info_type": "" }
{ "code": [ "requests/core.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
psf
requests
ac4e05874a1a983ca126185a0e4d4e74915f792e
https://github.com/psf/requests/issues/1859
Brittle test
The test `test_expires_valid_str` fails on my OS X box, in Python 2.7: ``` python ============================= test session starts ============================== platform darwin -- Python 2.7.5 -- pytest-2.3.4 plugins: cov collected 116 items test_requests.py ...........................................................
null
https://github.com/psf/requests/pull/1860
null
{'base_commit': 'ac4e05874a1a983ca126185a0e4d4e74915f792e', 'files': [{'path': 'requests/cookies.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [9]}, "(None, 'morsel_to_cookie', 388)": {'mod': [396, 397]}}}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": "" }
{ "code": [ "requests/cookies.py" ], "doc": [], "test": [], "config": [], "asset": [] }
1
nvbn
thefuck
58ddd4338adf12a3abc2ffed0e27794a398fa8d2
https://github.com/nvbn/thefuck/issues/994
help wanted hacktoberfest
UnicodeDecodeError when using thefuck
I followed the alias guide, but I got an error when running thefuck in PowerShell: ``` Traceback (most recent call last): File "d:\python36\lib\runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "d:\python36\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "D:\P...
null
https://github.com/nvbn/thefuck/pull/1214
null
{'base_commit': '58ddd4338adf12a3abc2ffed0e27794a398fa8d2', 'files': [{'path': 'tests/output_readers/test_rerun.py', 'status': 'modified', 'Loc': {"('TestRerun', None, 9)": {'add': [24]}}}, {'path': 'thefuck/output_readers/rerun.py', 'status': 'modified', 'Loc': {"(None, 'get_output', 45)": {'mod': [63]}}}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": "" }
{ "code": [ "thefuck/output_readers/rerun.py" ], "doc": [], "test": [ "tests/output_readers/test_rerun.py" ], "config": [], "asset": [] }
1
deepfakes
faceswap
c50287c23b3f35f54aa703823a8c3f9cbfc34377
https://github.com/deepfakes/faceswap/issues/233
Some faces with one eye hair covered can't be recognized
*First THANKS A LOT for all contributors' hard work! *Always make a compare test after big change, test with same source 1000 pics (kar801 -> kar1800) , compare with FakeApp1.1 & latest faceswap commit 232d931. *Test files [Link Removed] ## Expected behavior Not sure, limitation ? or possible to improve ? ## A...
null
https://github.com/deepfakes/faceswap/pull/236
null
{'base_commit': 'c50287c23b3f35f54aa703823a8c3f9cbfc34377', 'files': [{'path': 'lib/FaceLandmarksExtractor/FaceLandmarksExtractor.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [13], 'mod': [11, 12]}, "(None, 'extract', 114)": {'add': [162, 170], 'mod': [114, 115, 117, 118, 121, 123, 124, 125, 126, 12...
[]
[]
[]
{ "iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": "" }
{ "code": [ "lib/FaceLandmarksExtractor/FaceLandmarksExtractor.py", "lib/ModelAE.py" ], "doc": [], "test": [], "config": [], "asset": [] }
1
huggingface
transformers
02e05fb0a532e572b56ba75dad6ba3db625bbdeb
https://github.com/huggingface/transformers/issues/9438
Documentation
Doc styling utils adds parasites new lines
## Environment info - `transformers` version: 4.2.0dev0 - Platform: Windows-10-10.0.18362-SP0 - Python version: 3.7.9 - PyTorch version (GPU?): 1.7.1 (False) - Tensorflow version (GPU?): 2.3.1 (False) - Using GPU in script?: Nope - Using distributed or parallel set-up in script?: Nope ### Who can help ...
null
https://github.com/huggingface/transformers/pull/9488
null
{'base_commit': '02e05fb0a532e572b56ba75dad6ba3db625bbdeb', 'files': [{'path': 'docs/source/benchmarks.rst', 'status': 'modified', 'Loc': {}}, {'path': 'utils/style_doc.py', 'status': 'modified', 'Loc': {"(None, 'style_rst_file', 378)": {'mod': [384, 386]}}}]}
[]
[]
[]
{ "iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": "" }
{ "code": [ "utils/style_doc.py" ], "doc": [ "docs/source/benchmarks.rst" ], "test": [], "config": [], "asset": [] }
1
scrapy
scrapy
09e56ae43eb63641381e0d722a04536c2fe22c0d
https://github.com/scrapy/scrapy/issues/3616
Document LogFormatter
Currently, the `LogFormatter` class is only mentioned in the [Release notes](https://docs.scrapy.org/en/latest/news.html) page of the documentation. This class should be properly documented, both its API members and a small section introducing it on the documentation page about [Logging](https://docs.scrapy.org/en/late...
null
https://github.com/scrapy/scrapy/pull/3660
null
{'base_commit': '09e56ae43eb63641381e0d722a04536c2fe22c0d', 'files': [{'path': 'docs/topics/logging.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [195]}}}, {'path': 'docs/topics/settings.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [868]}}}, {'path': 'scrapy/logformatter.py', 's...
[]
[]
[]
{ "iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": "" }
{ "code": [ "scrapy/logformatter.py" ], "doc": [ "docs/topics/logging.rst", "docs/topics/settings.rst" ], "test": [], "config": [], "asset": [] }
1
scrapy
scrapy
47b9de93a9c7a514f4007439335facd8ea82a12d
https://github.com/scrapy/scrapy/issues/2905
enhancement docs help wanted
An error occurred while connecting: [Failure instance: Traceback: <class 'ValueError'>: filedescriptor out of range in select()
I'm trying crawl ~200k sites, only the home pages. In the beginning the crawl works fine but the logs quickly fill up with the following errors: 2017-08-29 11:18:55,131 - scrapy.core.scraper - ERROR - Error downloading <GET http://axo-suit.eu> Traceback (most recent call last): File "venv/lib/python3.6/site-pack...
null
https://github.com/scrapy/scrapy/pull/4294
null
{'base_commit': '47b9de93a9c7a514f4007439335facd8ea82a12d', 'files': [{'path': 'docs/faq.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [363]}}}, {'path': 'docs/topics/broad-crawls.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [213]}}}, {'path': 'docs/topics/settings.rst', 'status...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": "" }
{ "code": [ "scrapy/utils/reactor.py", "scrapy/crawler.py", "scrapy/utils/log.py", "tests/CrawlerProcess/asyncio_enabled_no_reactor.py", "scrapy/utils/defer.py", "scrapy/utils/asyncio.py", "tests/CrawlerProcess/asyncio_enabled_reactor.py", "scrapy/settings/default_settings.py" ], "...
1
pallets
flask
fb89745408cc02515815c792355c7e883b2d08a4
https://github.com/pallets/flask/issues/4602
Flask.auto_find_instance_path() can return wrong path for namespace packages installed in development mode
https://github.com/pallets/flask/blob/bd56d19b167822a9a23e2e9e2a07ccccc36baa8d/src/flask/scaffold.py#L798 If there are several packages under the same namespace, all installed in development mode, like: ``` ~/namespace-package1/ namespace/ package1/ __init__.py app.py ...
null
https://github.com/pallets/flask/pull/4610
null
{'base_commit': 'fb89745408cc02515815c792355c7e883b2d08a4', 'files': [{'path': 'CHANGES.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [10]}}}, {'path': 'src/flask/scaffold.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2]}, "(None, '_find_package_path', 783)": {'add': [784], 'mod'...
[]
[]
[]
{ "iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": "" }
{ "code": [ "src/flask/scaffold.py" ], "doc": [ "CHANGES.rst" ], "test": [ "tests/test_instance_config.py" ], "config": [ "tox.ini" ], "asset": [] }
1
huggingface
transformers
0ee71188ff184ee5f8b70081665858301fe4afb1
https://github.com/huggingface/transformers/issues/20395
some tokenizer(s) don't save the updated attributes
### System Info transformers version: 4.25.0.dev0 Torch version: 1.13.0+cpu Cuda available: False Cuda version: None CuDNN version: None Number of GPUs available: 0 ### Description For `GPT2Tokenizer(Fast)`, Set `tokenizer.model_max_length` to `128` (originally `1024`), save it then reload, will give `tok...
null
https://github.com/huggingface/transformers/pull/20401
null
{'base_commit': '0ee71188ff184ee5f8b70081665858301fe4afb1', 'files': [{'path': 'src/transformers/tokenization_utils_base.py', 'status': 'modified', 'Loc': {"('PreTrainedTokenizerBase', 'save_pretrained', 2022)": {'add': [2084]}}}]}
[]
[]
[]
{ "iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": "" }
{ "code": [ "src/transformers/tokenization_utils_base.py" ], "doc": [], "test": [], "config": [], "asset": [] }
1
AntonOsika
gpt-engineer
c6dd5237428895c0ba6cda40e3b2b95012276a05
https://github.com/AntonOsika/gpt-engineer/issues/928
bug triage
KeyError in apply_edits breaking improve mode
I am running improve mode, creating c# and xaml. GPT Engineer is attempting to make updates to a xaml user control (here renamed to be "myExistingUserControl.xaml") and running into an issue where the filepath is invalid. ```These edits will ensure that the code changes are in the correct format and can be found in ...
null
https://github.com/AntonOsika/gpt-engineer/pull/930
null
{'base_commit': 'c6dd5237428895c0ba6cda40e3b2b95012276a05', 'files': [{'path': 'gpt_engineer/preprompts/improve', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [67], 'mod': [11, 32, 41, 52]}}}, {'path': 'tests/core/test_chat_to_files.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [185]},...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": "" }
{ "code": [], "doc": [], "test": [ "tests/core/test_chat_to_files.py" ], "config": [], "asset": [ "gpt_engineer/preprompts/improve" ] }
1
ultralytics
yolov5
77415a42e5975ea356393c9f1d5cff0ae8acae2c
https://github.com/ultralytics/yolov5/issues/2446
enhancement
Images in MPO Format are considered corrupted
I am using images taken by a DJI drone. These images are deemed corrupted by the dataset loader, and are thus not used. This happens because in datasets.py the `im.format` is checked against a list of formats that doesn't contain "mpo". If I add that entry manually everything works as expected. MPO is a container ...
null
https://github.com/ultralytics/yolov5/pull/2615
null
{'base_commit': '77415a42e5975ea356393c9f1d5cff0ae8acae2c', 'files': [{'path': 'utils/datasets.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [29]}}}]}
[]
[]
[]
{ "iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": "" }
{ "code": [ "utils/datasets.py" ], "doc": [], "test": [], "config": [], "asset": [] }
1
scikit-learn
scikit-learn
0bbd57b322aaa5aeca4f3af2dd7f802360d29673
https://github.com/scikit-learn/scikit-learn/issues/2190
Bug
crash in MeanShift tests after make cython (edited from k_means)
The crash: ``` [erg@pliny scikit-learn]$ [master*] nosetests -v /home/erg/python/scikit-learn/sklearn/feature_selection/selector_mixin.py:7: DeprecationWarning: sklearn.feature_selection.selector_mixin.SelectorMixin has been renamed sklearn.feature_selection.from_model._LearntSelectorMixin, and this alias will be remo...
null
https://github.com/scikit-learn/scikit-learn/pull/2230
null
{'base_commit': '0bbd57b322aaa5aeca4f3af2dd7f802360d29673', 'files': [{'path': 'sklearn/neighbors/binary_tree.pxi', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [1199, 1257, 1258, 1260, 1345, 1355, 1357, 1398, 1400, 1401, 1403, 1491, 1544, 1589]}}}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": "" }
{ "code": [ "sklearn/neighbors/binary_tree.pxi" ], "doc": [], "test": [], "config": [], "asset": [] }
1
Significant-Gravitas
AutoGPT
4c2a566acc37c8d95b07c023f8c52a1a2a5d15bf
https://github.com/Significant-Gravitas/AutoGPT/issues/2186
bug needs investigation API access
Azure support broken?
### ⚠️ Search for existing issues first ⚠️ - [X] I have searched the existing issues, and there is no existing issue for my problem ### GPT-3 or GPT-4 - [ ] I am using Auto-GPT with GPT-3 (GPT-3.5) ### Steps to reproduce 🕹 ```yaml azure.yaml: azure_api_type: azure azure_api_base: https://test.openai....
null
https://github.com/Significant-Gravitas/AutoGPT/pull/2351
null
{'base_commit': '4c2a566acc37c8d95b07c023f8c52a1a2a5d15bf', 'files': [{'path': 'autogpt/config/config.py', 'status': 'modified', 'Loc': {"('Config', 'load_azure_config', 136)": {'mod': [157]}}}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "autogpt/config/config.py" ], "doc": [], "test": [], "config": [], "asset": [] }
1
nvbn
thefuck
f966ecd4f5b8221ee15e843f5ec287e1f7cca940
https://github.com/nvbn/thefuck/issues/740
wrong suggestion with git push --set-upstream
Thefuck is incorrectly adding the remote name at the end of the command suggestion: ``` $ git push myfork fatal: The current branch test-branch has no upstream branch. To push the current branch and set the remote as upstream, use git push --set-upstream myfork test-branch $ fuck git push --set-upstrea...
null
https://github.com/nvbn/thefuck/pull/745
null
{'base_commit': 'f966ecd4f5b8221ee15e843f5ec287e1f7cca940', 'files': [{'path': 'tests/rules/test_git_push.py', 'status': 'modified', 'Loc': {"(None, 'test_get_new_command', 23)": {'add': [25]}}}, {'path': 'thefuck/rules/git_push.py', 'status': 'modified', 'Loc': {"(None, 'get_new_command', 22)": {'add': [34]}}}]}
[]
[]
[]
{ "iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": "" }
{ "code": [ "thefuck/rules/git_push.py" ], "doc": [], "test": [ "tests/rules/test_git_push.py" ], "config": [], "asset": [] }
1
huggingface
transformers
e4b234834a79541f31be227aadce13f5aafda85a
https://github.com/huggingface/transformers/issues/16497
WIP
[TODO] Investigate equivalence tests
**(add a lot of assignees just to make you informed and kept updated in the future. Don't hesitate to remove yourself if you think it's irrelevant)** Currently the PT/TF/Flax equivalence tests use `1e-5` as the tolerance for the absolute differences of outputs. We see that these tests failed with a non-negligible...
null
https://github.com/huggingface/transformers/pull/16517
null
{'base_commit': 'e4b234834a79541f31be227aadce13f5aafda85a', 'files': [{'path': 'templates/adding_a_new_model/cookiecutter-template-{{cookiecutter.modelname}}/test_modeling_tf_{{cookiecutter.lowercase_modelname}}.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [24]}, "(None, 'prepare_config_and_inputs',...
[]
[]
[]
{ "iss_type": "4", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": "" }
{ "code": [], "doc": [], "test": [ "tests/test_modeling_tf_common.py", "templates/adding_a_new_model/cookiecutter-template-{{cookiecutter.modelname}}/test_modeling_tf_{{cookiecutter.lowercase_modelname}}.py", "tests/openai/test_modeling_tf_openai.py", "tests/funnel/test_modeling_tf_funnel.py", ...
1
pallets
flask
01081dbe6cdfa3fc43d8e1fff708d4ed95e1be7e
https://github.com/pallets/flask/issues/1971
Implement RFC 7233
It would be great to support [RFC 7233 : Hypertext Transfer Protocol (HTTP/1.1): Range Requests](https://tools.ietf.org/html/rfc7233) for next major version, at least for non multipart/byteranges media type. I'm willing to implement this, so please share your thoughts about this. What must be done: - Modify `send_fil...
null
https://github.com/pallets/flask/pull/2031
null
{'base_commit': '01081dbe6cdfa3fc43d8e1fff708d4ed95e1be7e', 'files': [{'path': 'CHANGES', 'status': 'modified', 'Loc': {'(None, None, 20)': {'add': [20]}}}, {'path': 'flask/helpers.py', 'status': 'modified', 'Loc': {"(None, 'send_file', 430)": {'add': [448, 502], 'mod': [538, 544, 578]}, '(None, None, None)': {'mod': [...
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "flask/helpers.py" ], "doc": [ "CHANGES" ], "test": [ "tests/test_helpers.py" ], "config": [], "asset": [] }
null
pallets
flask
673e5af658cf029e82d87047dcb7ebee3d343d10
https://github.com/pallets/flask/issues/2823
Flask complains a .env file exists when not using python-dotenv, even though that .env is a directory
I place my virtualenvs in a `.env` directory in my project directory. Flask 1.x sees this directory and thinks it might be a "dotenv" file (even though it is a directory). ### Expected Behavior `flask` should ignore a `.env` directory when `python-dotenv` is not installed. ### Actual Behavior `flask` says: ...
null
https://github.com/pallets/flask/pull/2827
null
{'base_commit': '673e5af658cf029e82d87047dcb7ebee3d343d10', 'files': [{'path': 'flask/cli.py', 'status': 'modified', 'Loc': {"(None, 'load_dotenv', 567)": {'mod': [587]}}}]}
[]
[]
[]
{ "iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "flask/cli.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
pallets
flask
8e589daaf2cec6a10262b8ff88801127f2fa14fd
https://github.com/pallets/flask/issues/4220
`template_filter` decorator typing does not support custom filters with multiple arguments
`template_filter` decorator typing does not support custom filters that take in multiple arguments. Consider: ```py from flask import Flask app = Flask(__name__) @app.template_filter('foo_bar') def foo_bar_filter(foo, bar): return f'{foo} {bar}' ``` `mypy` will return the following error message: ...
null
null
https://github.com/pallets/flask/commit/8e589daaf2cec6a10262b8ff88801127f2fa14fd
{'base_commit': '8e589daaf2cec6a10262b8ff88801127f2fa14fd', 'files': [{'path': 'CHANGES.rst', 'status': 'modified', 'Loc': {'(None, None, 10)': {'add': [10]}}}, {'path': 'src/flask/typing.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [43, 44, 45]}}}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "src/flask/typing.py" ], "doc": [ "CHANGES.rst" ], "test": [], "config": [], "asset": [] }
null
sherlock-project
sherlock
a4df5010f49044eb1f1713057e8914e6a5a104b3
https://github.com/sherlock-project/sherlock/issues/1073
false positive
producthunt.com false positive
<!-- ###################################################################### WARNING! IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE ###################################################################### --> ## Checklist <!-- Put x into all boxes (like this [x]) once you have...
null
null
https://github.com/sherlock-project/sherlock/commit/a4df5010f49044eb1f1713057e8914e6a5a104b3
{'base_commit': 'a4df5010f49044eb1f1713057e8914e6a5a104b3', 'files': [{'path': 'sherlock/resources/data.json', 'status': 'modified', 'Loc': {'(None, None, 1159)': {'mod': [1159]}}}]}
[]
[]
[]
{ "iss_type": "2", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "sherlock/resources/data.json" ], "doc": [], "test": [], "config": [], "asset": [] }
null
keras-team
keras
d2803c0fb7d0ba9361dcba8eb9bcebbf2f774958
https://github.com/keras-team/keras/issues/11023
Cannot load_model
Thank you! - [ ] Check that you are up-to-date with the master branch of Keras. You can update with: pip install git+git://github.com/keras-team/keras.git --upgrade --no-deps - [x] If running on TensorFlow, check that you are up-to-date with the latest version. The installation instructions can be found [here](h...
null
https://github.com/keras-team/keras/pull/10727
null
{'base_commit': 'd2803c0fb7d0ba9361dcba8eb9bcebbf2f774958', 'files': [{'path': 'keras/engine/saving.py', 'status': 'modified', 'Loc': {"(None, 'get_json_type', 61)": {'mod': [82, 83]}}}, {'path': 'tests/test_model_saving.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [14, 643]}}}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "keras/engine/saving.py" ], "doc": [], "test": [ "tests/test_model_saving.py" ], "config": [], "asset": [] }
null
nvbn
thefuck
b28ece0f34e54d1c980e31223451f3b2f0f20ff9
https://github.com/nvbn/thefuck/issues/1021
Git checkout should provide multiple corrections
When correcting git checkout, the default is to use the 'closest branch'. We have a lot of branches with similar names, but quite often, what I actually meant to do was supply the '-b' flag. Can the git checkout rule be updated to return all of the possible options, rather than trying to guess, based on some arbitr...
null
https://github.com/nvbn/thefuck/pull/1022
null
{'base_commit': 'b28ece0f34e54d1c980e31223451f3b2f0f20ff9', 'files': [{'path': 'tests/rules/test_git_checkout.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [59, 62, 66, 70]}}}, {'path': 'thefuck/rules/git_checkout.py', 'status': 'modified', 'Loc': {"(None, 'get_new_command', 31)": {'add': [36], 'mod'...
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "thefuck/rules/git_checkout.py" ], "doc": [], "test": [ "tests/rules/test_git_checkout.py" ], "config": [], "asset": [] }
null
nvbn
thefuck
2d81166213c403dce5c04d1fb73ba5d3e57d6676
https://github.com/nvbn/thefuck/issues/660
Slow execution time
The command output is very slow on macOS w/ fish shell. Reproduction rate is ~80% for me. Version: The Fuck 3.18 using Python 2.7.10 Shell: fish, version 2.6.0 OS: macOS 10.12.5 Debug Output: ``` ❯ fuck 333ms DEBUG: Run...
null
null
https://github.com/nvbn/thefuck/commit/2d81166213c403dce5c04d1fb73ba5d3e57d6676
{'base_commit': '2d81166213c403dce5c04d1fb73ba5d3e57d6676', 'files': [{'path': 'tests/shells/test_fish.py', 'status': 'modified', 'Loc': {"('TestFish', 'test_get_overridden_aliases', 29)": {'mod': [31, 32]}}}, {'path': 'thefuck/shells/fish.py', 'status': 'modified', 'Loc': {"('Fish', '_get_overridden_aliases', 40)": {'...
[]
[]
[]
{ "iss_type": "2", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "thefuck/shells/fish.py" ], "doc": [], "test": [ "tests/shells/test_fish.py" ], "config": [], "asset": [] }
null
nvbn
thefuck
6da0bc557f0fd94ea1397d3a7f508be896cc98d8
https://github.com/nvbn/thefuck/issues/1120
Trying rule missing_space_before_subcommand taking so long
<!-- If you have any issue with The Fuck, sorry about that, but we will do what we can to fix that. Actually, maybe we already have, so first thing to do is to update The Fuck and see if the bug is still there. --> <!-- If it is (sorry again), check if the problem has not already been reported and if not, just op...
null
null
https://github.com/KiaraGrouwstra/thefuck/commit/6da0bc557f0fd94ea1397d3a7f508be896cc98d8
{'base_commit': '6da0bc557f0fd94ea1397d3a7f508be896cc98d8', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, 436)': {'add': [436]}, '(None, None, 468)': {'add': [468]}}}, {'path': 'tests/test_conf.py', 'status': 'modified', 'Loc': {"('TestSettingsFromEnv', 'test_from_env', 48)": {'add': [67],...
[]
[]
[]
{ "iss_type": "2", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "thefuck/conf.py", "thefuck/utils.py", "thefuck/const.py" ], "doc": [ "README.md" ], "test": [ "tests/test_conf.py", "tests/test_utils.py" ], "config": [], "asset": [] }
null
nvbn
thefuck
a84671dd3b7505d4d73f11ee9c7d057429542e24
https://github.com/nvbn/thefuck/issues/20
Some Unicode error in Ubuntu 14.10
``` bash $ apt-get update E: Не удалось открыть файл блокировки /var/lib/apt/lists/lock - open (13: Отказано в доступе) E: Невозможно заблокировать каталог /var/lib/apt/lists/ E: Не удалось открыть файл блокировки /var/lib/dpkg/lock - open (13: Отказано в доступе) E: Не удалось выполнить блокировку управляющего каталог...
null
null
https://github.com/nvbn/thefuck/commit/a84671dd3b7505d4d73f11ee9c7d057429542e24
{'base_commit': 'a84671dd3b7505d4d73f11ee9c7d057429542e24', 'files': [{'path': 'setup.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [5]}}}, {'path': 'thefuck/rules/no_command.py', 'status': 'modified', 'Loc': {"(None, '_get_output', 9)": {'mod': [13]}}}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "thefuck/rules/no_command.py", "setup.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
nvbn
thefuck
622298549172754afff07a8ea1f55358062e17a7
https://github.com/nvbn/thefuck/issues/330
Add command options (--version, --help, --update/--upgrade)
And perhaps a manpage too, even if it only says "Please use fuck --help for documentation"
null
null
https://github.com/nvbn/thefuck/commit/622298549172754afff07a8ea1f55358062e17a7
{'base_commit': '622298549172754afff07a8ea1f55358062e17a7', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, 110)': {'mod': [110]}, '(None, None, 112)': {'mod': [112]}}}, {'path': 'thefuck/main.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0, 3], 'mod': [83, 99, 100]}, "(N...
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "thefuck/main.py" ], "doc": [ "README.md" ], "test": [], "config": [], "asset": [] }
null
nvbn
thefuck
284d49da8d0ab3252b5426423b608033d39c2669
https://github.com/nvbn/thefuck/issues/786
next release
"TypeError: 'module' object is not callable" On any invocation of thefuck
<!-- If you have any issue with The Fuck, sorry about that, but we will do what we can to fix that. Actually, maybe we already have, so first thing to do is to update The Fuck and see if the bug is still there. --> <!-- If it is (sorry again), check if the problem has not already been reported and if not, just op...
null
null
https://github.com/nvbn/thefuck/commit/fb39d0bbd349e916ae12a77f04efd151dd046e6b https://github.com/nvbn/thefuck/commit/284d49da8d0ab3252b5426423b608033d39c2669
{'base_commit': '284d49da8d0ab3252b5426423b608033d39c2669', 'files': [{'path': 'tests/rules/test_apt_get.py', 'status': 'modified', 'Loc': {"(None, 'test_match', 13)": {'mod': [15, 16, 17]}, "(None, 'test_not_match', 30)": {'mod': [33, 34, 35]}, "(None, 'test_get_new_command', 49)": {'mod': [52, 53, 54]}}}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code" }
{ "code": [], "doc": [], "test": [ "tests/rules/test_apt_get.py" ], "config": [], "asset": [] }
null
home-assistant
core
2b5e7c26111e447c2714284151c2e7555abd11e4
https://github.com/home-assistant/core/issues/27175
integration: google_assistant
Google assistant: something went wrong when using alarm
<!-- READ THIS FIRST: - If you need additional help with this template please refer to https://www.home-assistant.io/help/reporting_issues/ - Make sure you are running the latest version of Home Assistant before reporting an issue: https://github.com/home-assistant/home-assistant/releases - Frontend issues should be...
null
https://github.com/home-assistant/core/pull/36942
null
{'base_commit': '2b5e7c26111e447c2714284151c2e7555abd11e4', 'files': [{'path': 'homeassistant/components/google_assistant/trait.py', 'status': 'modified', 'Loc': {"('ArmDisArmTrait', None, 974)": {'add': [990, 1000]}, "('ArmDisArmTrait', 'sync_attributes', 1001)": {'mod': [1005]}, "('ArmDisArmTrait', 'execute', 1031)":...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "homeassistant/components/google_assistant/trait.py" ], "doc": [], "test": [ "tests/components/google_assistant/test_trait.py" ], "config": [], "asset": [] }
null
home-assistant
core
fb7fb0ea78ee335cd23f3647223a675718ccf048
https://github.com/home-assistant/core/issues/40316
integration: knx
KNX problem with 0.115.0 and 0.115.1
## The problem KNX integration has changed behavior and don't work fine: 1) it is possible to read the status of a scene only if it is launched from the KNX bus but not if it is launched from the HA 2) KNX climate don't read operation_mode_state_address correctly, when the operation mode is changed it reads the corr...
null
https://github.com/home-assistant/core/pull/40472
null
{'base_commit': 'fb7fb0ea78ee335cd23f3647223a675718ccf048', 'files': [{'path': 'homeassistant/components/knx/manifest.json', 'status': 'modified', 'Loc': {'(None, None, 5)': {'mod': [5]}}}, {'path': 'requirements_all.txt', 'status': 'modified', 'Loc': {'(None, None, 2268)': {'mod': [2268]}}}]}
[]
[]
[]
{ "iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Doc\nJson" }
{ "code": [ "homeassistant/components/knx/manifest.json" ], "doc": [], "test": [], "config": [ "requirements_all.txt" ], "asset": [] }
null
home-assistant
core
faedba04079d2c999a479118b5189ef4c0bff060
https://github.com/home-assistant/core/issues/77928
integration: velux stale
Somfy blind motors cannot be assigned to a room
### The problem Somfy motors will return `None` as serial number via the Velux KLF-200: [Handle devices without serial numbers.](https://github.com/Julius2342/pyvlx/pull/42/commits/d409d66db8732553e928f5dd9d00d458ba638dea) This serial is usesd as unique id here: [core/homeassistant/components/velux/__init__.py#L1...
null
https://github.com/home-assistant/core/pull/117508
null
{'base_commit': 'faedba04079d2c999a479118b5189ef4c0bff060', 'files': [{'path': 'homeassistant/components/velux/__init__.py', 'status': 'modified', 'Loc': {"('VeluxEntity', None, 106)": {'mod': [111]}, "('VeluxEntity', '__init__', 111)": {'mod': [114]}}}, {'path': 'homeassistant/components/velux/cover.py', 'status': 'mo...
[]
[]
[]
{ "iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "homeassistant/components/velux/light.py", "homeassistant/components/velux/cover.py", "homeassistant/components/velux/__init__.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
home-assistant
core
551a584ca69771804b6f094eceb67dcb25a2f627
https://github.com/home-assistant/core/issues/68620
needs-more-information integration: overkiz
Polling interval for stateless (e.g. Somfy (Oceania)) is not applied in Overkiz
### The problem Every day I get a "Gateway ID" error in Overkiz error that reads as below. Same problem as [#66606](https://github.com/home-assistant/core/issues/66606) "Translation Error: The intl string context variable "gateway id" was not provided to the string "Gateway: {gateway id}" Overkiz (by Somfy)". ...
null
https://github.com/home-assistant/core/pull/133617
null
{'base_commit': '551a584ca69771804b6f094eceb67dcb25a2f627', 'files': [{'path': 'homeassistant/components/overkiz/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [43]}, "(None, 'async_setup_entry', 57)": {'mod': [116, 117, 118, 119, 122]}}}, {'path': 'homeassistant/components/overkiz/const.py',...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "homeassistant/components/overkiz/const.py", "homeassistant/components/overkiz/__init__.py", "homeassistant/components/overkiz/coordinator.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
yt-dlp
yt-dlp
f7590d47641cedbf630b909aa8f53930c4a9ce5c
https://github.com/yt-dlp/yt-dlp/issues/983
site-bug
VRV - NoneType object is not iterable
<!-- ###################################################################### WARNING! IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE ###################################################################### --> ## Checklist <!-- Carefully read and work through this check lis...
null
null
https://github.com/yt-dlp/yt-dlp/commit/f7590d47641cedbf630b909aa8f53930c4a9ce5c
{'base_commit': 'f7590d47641cedbf630b909aa8f53930c4a9ce5c', 'files': [{'path': 'yt_dlp/extractor/vrv.py', 'status': 'modified', 'Loc': {"('VRVIE', '_real_extract', 168)": {'mod': [221]}}}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "yt_dlp/extractor/vrv.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
yt-dlp
yt-dlp
50e93e03a7ca6ae35a319ea310104f7d6d91eee3
https://github.com/yt-dlp/yt-dlp/issues/3183
geo-blocked site-bug
Tele5 has an extraction error
### Checklist - [X] I'm reporting a broken site - [X] I've verified that I'm running yt-dlp version **2022.03.08.1**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) - [X] I've checked that all provided URLs are alive and playable in a browser - [X] I've checked that all URLs and arguments with specia...
null
null
https://github.com/yt-dlp/yt-dlp/commit/50e93e03a7ca6ae35a319ea310104f7d6d91eee3
{'base_commit': '50e93e03a7ca6ae35a319ea310104f7d6d91eee3', 'files': [{'path': 'yt_dlp/YoutubeDL.py', 'status': 'modified', 'Loc': {}}, {'path': 'yt_dlp/extractor/aliexpress.py', 'status': 'modified', 'Loc': {"('AliExpressLiveIE', None, 12)": {'mod': [21]}}}, {'path': 'yt_dlp/extractor/applepodcasts.py', 'status': 'mod...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "", "info_type": "Code" }
{ "code": [ "yt_dlp/extractor/extractors.py", "yt_dlp/extractor/streamcz.py", "yt_dlp/extractor/bbc.py", "yt_dlp/extractor/zdf.py", "yt_dlp/extractor/tv2dk.py", "yt_dlp/extractor/rutv.py", "yt_dlp/extractor/aliexpress.py", "yt_dlp/extractor/wdr.py", "yt_dlp/extractor/videa.py", ...
null
yt-dlp
yt-dlp
80e8493ee7c3083f4e215794e4a67ba5265f24f7
https://github.com/yt-dlp/yt-dlp/issues/2885
site-request patch-available
Add Filmarkivet.se as a Supported Site
### Checklist - [X] I'm reporting a new site support request - [X] I've verified that I'm running yt-dlp version **2022.02.04**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) - [X] I've checked that all provided URLs are alive and playable in a browser - [X] I've checked that none of provided URLs [...
null
null
https://github.com/yt-dlp/yt-dlp/commit/80e8493ee7c3083f4e215794e4a67ba5265f24f7
{'base_commit': '80e8493ee7c3083f4e215794e4a67ba5265f24f7', 'files': [{'path': 'yt_dlp/extractor/generic.py', 'status': 'modified', 'Loc': {"('GenericIE', None, 143)": {'add': [2529]}}}, {'path': 'yt_dlp/utils.py', 'status': 'modified', 'Loc': {"(None, 'is_html', 3283)": {'add': [3292], 'mod': [3294, 3295, 3296, 3297, ...
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "yt_dlp/utils.py", "yt_dlp/extractor/generic.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
yt-dlp
yt-dlp
5da08bde9e073987d1aae2683235721e4813f9c6
https://github.com/yt-dlp/yt-dlp/issues/5424
site-enhancement
[VLIVE.TV] Extract release timestamp
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE - [X] I understand that I will be **blocked** if I remove or skip any mandatory\* field ### Checklist - [X] I'm asking a question and **not** reporting a bug or requesting a feature - [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme) - [X] I've...
null
null
https://github.com/HHeroin/yt-dlp/commit/5da08bde9e073987d1aae2683235721e4813f9c6
{'base_commit': '5da08bde9e073987d1aae2683235721e4813f9c6', 'files': [{'path': 'yt_dlp/extractor/vlive.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [15]}, "('VLiveIE', None, 69)": {'add': [83, 100]}, "('VLiveIE', '_real_extract', 148)": {'add': [171]}}}]}
[]
[]
[]
{ "iss_type": "3", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "yt_dlp/extractor/vlive.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
yt-dlp
yt-dlp
51c22ef4e2af966d6100d0d97d9e8019022df8ad
https://github.com/yt-dlp/yt-dlp/issues/2996
bug
'<' not supported between instances of 'float' and 'str' and --throttled-rate error after update?
### Checklist - [X] I'm reporting a bug unrelated to a specific site - [X] I've verified that I'm running yt-dlp version **2022.03.08.1**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) - [X] I've checked that all provided URLs are alive and playable in a browser - [X] I've checked that all URLs and ...
null
null
https://github.com/yt-dlp/yt-dlp/commit/51c22ef4e2af966d6100d0d97d9e8019022df8ad
{'base_commit': '51c22ef4e2af966d6100d0d97d9e8019022df8ad', 'files': [{'path': 'yt_dlp/__init__.py', 'status': 'modified', 'Loc': {"(None, 'validate_options', 156)": {'mod': [258]}}}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "yt_dlp/__init__.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
yt-dlp
yt-dlp
6f638d325e1878df304822c6bf4e231e06dae89a
https://github.com/yt-dlp/yt-dlp/issues/3467
docs/meta/cleanup high-priority regression
Error since commit 43cc91a
### Checklist - [X] I'm reporting a bug unrelated to a specific site - [X] I've verified that I'm running yt-dlp version **2022.04.08** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit) - [X] I've checked that all provided URLs are alive and playable in a browser - [X] I've che...
null
null
https://github.com/yt-dlp/yt-dlp/commit/6f638d325e1878df304822c6bf4e231e06dae89a
{'base_commit': '6f638d325e1878df304822c6bf4e231e06dae89a', 'files': [{'path': 'Makefile', 'status': 'modified', 'Loc': {'(None, None, 61)': {'add': [61]}, '(None, None, 64)': {'mod': [64]}, '(None, None, 68)': {'mod': [68]}, '(None, None, 70)': {'mod': [70]}}}, {'path': 'yt_dlp/extractor/anvato.py', 'status': 'modifie...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "yt_dlp/extractor/anvato.py" ], "doc": [], "test": [], "config": [ "Makefile" ], "asset": [] }
null
yt-dlp
yt-dlp
14a086058a30a0748b5b716e9b21481f993518f3
https://github.com/yt-dlp/yt-dlp/issues/1601
site-bug
ARD:mediathek doesn't work anymore
### Checklist - [X] I'm reporting a broken site - [X] I've verified that I'm running yt-dlp version **2021.10.22**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) - [X] I've checked that all provided URLs are alive and playable in a browser - [X] I've checked that all URLs and arguments with special ...
null
null
https://github.com/yt-dlp/yt-dlp/commit/14a086058a30a0748b5b716e9b21481f993518f3
{'base_commit': '14a086058a30a0748b5b716e9b21481f993518f3', 'files': [{'path': 'yt_dlp/extractor/ard.py', 'status': 'modified', 'Loc': {"('ARDBetaMediathekIE', None, 390)": {'add': [405, 428], 'mod': [391]}, "('ARDBetaMediathekIE', '_ARD_extract_playlist', 512)": {'mod': [528, 529, 530, 531, 532, 533, 534, 536, 537, 53...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "yt_dlp/extractor/ard.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
comfyanonymous
ComfyUI
ab7d4f784892c275e888d71aa80a3a2ed59d9b83
https://github.com/comfyanonymous/ComfyUI/issues/2019
[Bug] text of collapsed node still present
On latest commit https://github.com/comfyanonymous/ComfyUI/commit/d66b631d74e6f6ac95c61c63d4a0da150bf74903. Dragging the node also doesn't do anything until it's uncollapsed. <img width="1236" alt="Screenshot 2023-11-21 at 1 14 19 PM" src="https://github.com/comfyanonymous/ComfyUI/assets/111034657/abb0b5c1-3e94-4928-...
null
null
https://github.com/comfyanonymous/ComfyUI/commit/ab7d4f784892c275e888d71aa80a3a2ed59d9b83
{'base_commit': 'ab7d4f784892c275e888d71aa80a3a2ed59d9b83', 'files': [{'path': 'web/scripts/domWidget.js', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [235, 292]}}}]}
[]
[]
[]
{ "iss_type": "2", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "web/scripts/domWidget.js" ], "doc": [], "test": [], "config": [], "asset": [] }
null
AntonOsika
gpt-engineer
3e589bf1356024fb471a9d17738e4626f21a953b
https://github.com/AntonOsika/gpt-engineer/issues/1153
bug triage
Azure Deployment Name Bug
## Policy and info - Maintainers will close issues that have been stale for 14 days if they contain relevant answers. - Adding the label "sweep" will automatically turn the issue into a coded pull request. Works best for mechanical tasks. More info/syntax at: https://docs.sweep.dev/ ## Expected Behavior There...
null
https://github.com/AntonOsika/gpt-engineer/pull/1170
null
{'base_commit': '3e589bf1356024fb471a9d17738e4626f21a953b', 'files': [{'path': '.github/CONTRIBUTING.md', 'status': 'modified', 'Loc': {'(None, None, 114)': {'add': [114]}}}, {'path': 'gpt_engineer/core/ai.py', 'status': 'modified', 'Loc': {"('AI', '_create_chat_model', 330)": {'mod': [349]}}}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "gpt_engineer/core/ai.py" ], "doc": [ ".github/CONTRIBUTING.md" ], "test": [], "config": [], "asset": [] }
null
AntonOsika
gpt-engineer
c4c1203fc07b2e23c3e5a5e9277266a711ab9466
https://github.com/AntonOsika/gpt-engineer/issues/35
.py files are not being created. I just get all_output.txt that I manually have to create from.
Hi, I absolutely love this script. This is the most accurate auto-GPT development script I have tried yet, it's so powerful! In the demo video it shows the script creating each of the development files, in my case .py files within the workspace folder automatically. My build isn't doing this I just get an all_output...
null
https://github.com/AntonOsika/gpt-engineer/pull/120
null
{'base_commit': 'c4c1203fc07b2e23c3e5a5e9277266a711ab9466', 'files': [{'path': 'gpt_engineer/chat_to_files.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [4]}, "(None, 'parse_chat', 6)": {'add': [11], 'mod': [6, 7, 8, 10, 13, 14, 15, 16, 17, 18, 19]}}}]}
[]
[]
[]
{ "iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "gpt_engineer/chat_to_files.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
AntonOsika
gpt-engineer
7b91676a0c2ccd4589a42f2cadbf1e69f93ad81b
https://github.com/AntonOsika/gpt-engineer/issues/1128
bug triage
Applying diffs failing silently
## Expected Behavior I would expect GPT engineer to either successfully apply all diffs sent by the AI or fail in a way that lets you know which diffs have been applied, which failed, and allows you to manually salvage the failed diff parts by copy and pasting ## Current Behavior The current behaviour seems t...
null
https://github.com/AntonOsika/gpt-engineer/pull/1138
null
{'base_commit': '7b91676a0c2ccd4589a42f2cadbf1e69f93ad81b', 'files': [{'path': 'gpt_engineer/core/diff.py', 'status': 'modified', 'Loc': {"('Diff', 'validate_and_correct', 340)": {'mod': [357]}}}, {'path': 'tests/core/test_salvage_correct_hunks.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [82]}}}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "gpt_engineer/core/diff.py" ], "doc": [], "test": [ "tests/core/test_salvage_correct_hunks.py" ], "config": [], "asset": [] }
null
lllyasviel
Fooocus
f7bb578a1409b1f96aff534ff5ed2bd10502296f
https://github.com/lllyasviel/Fooocus/issues/1527
Add copy to clipboard in plaintext for image details
Add copy to clipboard in plaintext for image details A button we can click to copy to clipboard all of the image details shown in the log output file. If not on the log page then on the app itself. The quick copying of these settings enables us to share our work methods with others in the community more smoothly,...
null
null
https://github.com/lllyasviel/Fooocus/commit/f7bb578a1409b1f96aff534ff5ed2bd10502296f
{'base_commit': 'f7bb578a1409b1f96aff534ff5ed2bd10502296f', 'files': [{'path': 'fooocus_version.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [1]}}}, {'path': 'modules/async_worker.py', 'status': 'modified', 'Loc': {"(None, 'handler', 116)": {'mod': [400, 401, 780, 782]}}}, {'path': 'modules/private_...
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "modules/private_logger.py", "webui.py", "modules/async_worker.py", "fooocus_version.py" ], "doc": [ "update_log.md" ], "test": [], "config": [], "asset": [] }
null
lllyasviel
Fooocus
3a55e7e3910b8ae58f82a5a0e4c11d7d4fa3143f
https://github.com/lllyasviel/Fooocus/issues/2561
enhancement
[Feature Request]: Prompt embedded LoRAs
### Is there an existing issue for this? - [x] I have searched the existing issues and checked the recent builds/commits ### What would your feature do? Similar to how A1111 handles LoRAs by default, I believe there should be an option to embed LoRAs in the prompt by using the following structure: ```csharp ...
null
https://github.com/lllyasviel/Fooocus/pull/2323
null
{'base_commit': '3a55e7e3910b8ae58f82a5a0e4c11d7d4fa3143f', 'files': [{'path': 'modules/async_worker.py', 'status': 'modified', 'Loc': {"(None, 'handler', 134)": {'add': [435], 'mod': [155, 453, 454, 655, 865, 908, 912]}, "(None, 'worker', 19)": {'mod': [47, 50, 51, 72]}, "(None, 'callback', 806)": {'mod': [810]}}}, {'...
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "modules/async_worker.py", "modules/sdxl_styles.py", "modules/config.py", "modules/util.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
lllyasviel
Fooocus
8e62a72a63b30a3067d1a1bc3f8d226824bd9283
https://github.com/lllyasviel/Fooocus/issues/1671
bug (AMD)
Cannot use image prompts
I am trying to use 2x images as an image prompt but when I press generate this is what I'm getting (I can generate just fine without image prompts): Full console log: <code>[Parameters] Adaptive CFG = 7 [Parameters] Sharpness = 3 [Parameters] ADM Scale = 1.5 : 0.8 : 0.3 [Parameters] CFG = 1.5 [Parameters] See...
null
https://github.com/lllyasviel/Fooocus/pull/1678
null
{'base_commit': '8e62a72a63b30a3067d1a1bc3f8d226824bd9283', 'files': [{'path': 'extras/ip_adapter.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [10], 'mod': [5]}, "(None, 'load_ip_adapter', 90)": {'mod': [119, 120, 121, 122, 123, 124, 125, 126]}}}, {'path': 'fooocus_version.py', 'status': 'modified',...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "fooocus_version.py", "extras/ip_adapter.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
lllyasviel
Fooocus
d57afc88a48359bc1642c2ae30a091f0426eff43
https://github.com/lllyasviel/Fooocus/issues/1063
Faceswap crashes
**Describe the problem** The program crashes when trying to use an image as prompt and selecting the faceswap advanced option **Full Console Log** Requirement already satisfied: pygit2==1.12.2 in /usr/local/lib/python3.10/dist-packages (1.12.2) Requirement already satisfied: cffi>=1.9.1 in /usr/local/lib/python3....
null
https://github.com/lllyasviel/Fooocus/pull/1710
null
{'base_commit': 'd57afc88a48359bc1642c2ae30a091f0426eff43', 'files': [{'path': 'fooocus_colab.ipynb', 'status': 'modified', 'Loc': {'(None, None, 15)': {'mod': [15]}}}, {'path': 'readme.md', 'status': 'modified', 'Loc': {'(None, None, 127)': {'add': [127]}, '(None, None, 118)': {'mod': [118]}, '(None, None, 124)': {'mo...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "5", "loc_way": "pr", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "fooocus_colab.ipynb", "ldm_patched/modules/args_parser.py" ], "doc": [ "readme.md" ], "test": [], "config": [], "asset": [] }
null
odoo
odoo
72ec0050b442214c9be93907fc01a48832243c15
https://github.com/odoo/odoo/issues/7306
[v8.0] Bank statement : Customer Import invoice wizard do not auto-fill the right field
Step to reproduce: create a customer invoice create a new bank statement and import this invoice click on 'Reconcile' Problem: No match proposition between the bank statement line and the invoice move line can be found since the communication field is '/'. (The invoice number is in the field 'Reference' instead) So p...
null
null
https://github.com/odoo/odoo/commit/72ec0050b442214c9be93907fc01a48832243c15
{'base_commit': '72ec0050b442214c9be93907fc01a48832243c15', 'files': [{'path': 'addons/account/account_bank_statement.py', 'status': 'modified', 'Loc': {"('account_bank_statement_line', 'get_reconciliation_proposition', 537)": {'mod': [575]}}}]}
[]
[]
[]
{ "iss_type": "2", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "addons/account/account_bank_statement.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
binary-husky
gpt_academic
197287fc303119bf71caf9b3f72280cab08da749
https://github.com/binary-husky/gpt_academic/issues/1147
[Bug]: Translating an arxiv document throws an error, on both a local self-hosted build and the official online version
### Installation Method OneKeyInstall (one-click Windows install script) ### Version Latest ### OS Windows ### Describe the bug The official online version reports the following error: > Local Message] Experimental function call failed: > > Traceback (most recent call last): > File "./toolbox.py", line 165, in decorated > yield from f(main_input, llm...
null
null
https://github.com/binary-husky/gpt_academic/commit/197287fc303119bf71caf9b3f72280cab08da749
{'base_commit': '197287fc303119bf71caf9b3f72280cab08da749', 'files': [{'path': 'shared_utils/handle_upload.py', 'status': 'modified', 'Loc': {"(None, 'extract_archive', 91)": {'mod': [107, 108, 109, 110, 111, 112, 113, 114, 116, 117]}}}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "shared_utils/handle_upload.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
binary-husky
gpt_academic
65317e33af87640b68c84c9f6ee67188b76c6d7a
https://github.com/binary-husky/gpt_academic/issues/558
Could EdgeGPT be used to support calling the Microsoft Bing API?
Maintainers, please take a look at this project: https://github.com/acheong08/EdgeGPT If the Bing API could be called conveniently, or future third-party APIs such as Baidu and Alibaba, it would be a blessing for users who have no OpenAI key and cannot deploy GLM locally.
null
null
https://github.com/binary-husky/gpt_academic/commit/65317e33af87640b68c84c9f6ee67188b76c6d7a
{'base_commit': '65317e33af87640b68c84c9f6ee67188b76c6d7a', 'files': [{'path': 'config.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [65], 'mod': [47, 48]}}}, {'path': 'request_llm/bridge_all.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [21, 119]}}}]}
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "request_llm/bridge_all.py", "config.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
binary-husky
gpt_academic
e359fff0405c4cb865b809b4ecfc0a95a54d2512
https://github.com/binary-husky/gpt_academic/issues/1554
[Bug]: Docker-installed version errors when calling the spark api
### Installation Method Docker-Compose(Windows/Mac) ### Version Latest ### OS Mac ### Describe the bug Installing locally on a Mac via conda, the spark api runs normally. But after installing via docker compose, calls through the spark api error out, while the qianfan api still works fine ### Screen Shot <img width="1423" alt="Snipaste_2024-02-14_21-12-27" src...
null
null
https://github.com/binary-husky/gpt_academic/commit/e359fff0405c4cb865b809b4ecfc0a95a54d2512
{'base_commit': 'e359fff0405c4cb865b809b4ecfc0a95a54d2512', 'files': [{'path': 'request_llms/bridge_qianfan.py', 'status': 'modified', 'Loc': {"(None, 'predict', 135)": {'add': [148, 151], 'mod': [161, 162, 163, 164, 165, 166]}}}, {'path': 'request_llms/bridge_qwen.py', 'status': 'modified', 'Loc': {"(None, 'predict', ...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "request_llms/bridge_qwen.py", "request_llms/bridge_qianfan.py", "request_llms/bridge_skylark2.py", "request_llms/bridge_spark.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
binary-husky
gpt_academic
c17fc2a9b55b1c7447718a06a3eac4378828bb22
https://github.com/binary-husky/gpt_academic/issues/1021
waiting feedback
[Feature]: The Tongyi Qianwen (Qwen) model has been open-sourced; suggest adding it.
### Class None ### Feature Request Attached open-source links: ModelScope: https://modelscope.cn/models/qwen/Qwen-7B/summary https://modelscope.cn/models/qwen/Qwen-7B-Chat/summary Hugging Face: https://huggingface.co/Qwen GitHub: https://github.com/QwenLM/Qwen-7B
null
null
https://github.com/binary-husky/gpt_academic/commit/c17fc2a9b55b1c7447718a06a3eac4378828bb22
{'base_commit': 'c17fc2a9b55b1c7447718a06a3eac4378828bb22', 'files': [{'path': 'config.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [74]}}}, {'path': 'request_llm/bridge_all.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [337]}}}, {'path': 'request_llm/bridge_qwen.py', 'status': 'm...
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "request_llm/bridge_all.py", "request_llm/bridge_qwen.py", "config.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
binary-husky
gpt_academic
19bd0c35ed05e6f99c8e3c0a8c994b1385341cae
https://github.com/binary-husky/gpt_academic/issues/1053
ToDo
[Bug]: Local Latex translation fails
### Installation Method Pip Install (I used latest requirements.txt) ### Version Latest ### OS Windows ### Describe the bug * Problem: the so-called "fp" (file pointer) cannot be found ![image](https://github.com/binary-husky/gpt_academic/assets/62052010/40949aca-4ebf-4b24-ade6-a8423654b228) * stack: this is where the **tex file...
null
null
https://github.com/binary-husky/gpt_academic/commit/19bd0c35ed05e6f99c8e3c0a8c994b1385341cae
{'base_commit': '19bd0c35ed05e6f99c8e3c0a8c994b1385341cae', 'files': [{'path': 'crazy_functions/latex_fns/latex_toolbox.py', 'status': 'modified', 'Loc': {"(None, 'find_tex_file_ignore_case', 281)": {'add': [283], 'mod': [286]}}}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "crazy_functions/latex_fns/latex_toolbox.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
binary-husky
gpt_academic
e24f077b68e38b679e5ca25853ea2c402f074ea3
https://github.com/binary-husky/gpt_academic/issues/1120
[Feature]: Hope to add an azure openai gpt4 model option
### Class Main program ### Feature Request As per the title
null
null
https://github.com/binary-husky/gpt_academic/commit/e24f077b68e38b679e5ca25853ea2c402f074ea3
{'base_commit': 'e24f077b68e38b679e5ca25853ea2c402f074ea3', 'files': [{'path': 'config.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [83]}}}, {'path': 'request_llm/bridge_all.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [147]}}}]}
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "request_llm/bridge_all.py", "config.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
deepfakes
faceswap
a799f769e4c48908c3efd64792384403392f2e82
https://github.com/deepfakes/faceswap/issues/67
Cluster faces during extract using dlib.chinese_whispers_clustering
I have had some success hacking together a pre-processing script to run over my training images. It uses [dlib.chinese_whispers_clustering](http://dlib.net/python/index.html#dlib.chinese_whispers_clustering) to group the found faces in the training data based on likeness. I think one of the keys to good results is good...
null
https://github.com/deepfakes/faceswap/pull/61
null
{'base_commit': 'a799f769e4c48908c3efd64792384403392f2e82', 'files': [{'path': 'Dockerfile', 'status': 'modified', 'Loc': {'(None, None, 14)': {'add': [14]}, '(None, None, 10)': {'mod': [10, 11, 12]}, '(None, None, 16)': {'mod': [16]}, '(None, None, 18)': {'mod': [18]}}}, {'path': 'faceswap.py', 'status': 'modified', '...
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "lib/aligner.py", "lib/model.py", "lib/training_data.py", "plugins/Convert_Adjust.py", "plugins/Extract_Align.py", "plugins/Extract_Crop.py", "scripts/train.py", "faceswap.py", "plugins/PluginLoader.py", "plugins/Convert_Masked.py", "lib/DetectedFace.py", "l...
null
deepfakes
faceswap
f5dd18352c6640bc5c39a01642c7ac7356c0dea1
https://github.com/deepfakes/faceswap/issues/718
bug
[Windows] cuda_path was not set if success on first check.
**Describe the bug** setup.py file: cuDNN was not detected if `cuda_check` success in first check using "nvcc -V" because of `self.env.cuda_path` not set **To Reproduce** Steps to reproduce the behavior: 1, run `python setup.py` on windows 10 environment **Expected behavior** detect cuDNN lib **Screenshot...
null
null
https://github.com/deepfakes/faceswap/commit/f5dd18352c6640bc5c39a01642c7ac7356c0dea1
{'base_commit': 'f5dd18352c6640bc5c39a01642c7ac7356c0dea1', 'files': [{'path': 'lib/gpu_stats.py', 'status': 'modified', 'Loc': {"('GPUStats', 'initialize', 64)": {'mod': [92]}}}, {'path': 'setup.py', 'status': 'modified', 'Loc': {"('Checks', None, 314)": {'add': [353]}, "('Checks', 'cudnn_check', 458)": {'add': [459]}...
[]
[]
[]
{ "iss_type": "2", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "setup.py", "lib/gpu_stats.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
deepfakes
faceswap
dea984efc1c720832d7c32513c806b4b67cc6560
https://github.com/deepfakes/faceswap/issues/590
Disable logging
In previous commits before the logging implementation, multiple GPUS were able to run different tasks simultaneously ( extract/train/convert ). After the logging commit, only 1 task can be run due to the log file being in use by the first process. Is there an option to disable logging or specify a log file instea...
null
null
https://github.com/deepfakes/faceswap/commit/dea984efc1c720832d7c32513c806b4b67cc6560
{'base_commit': 'dea984efc1c720832d7c32513c806b4b67cc6560', 'files': [{'path': 'lib/cli.py', 'status': 'modified', 'Loc': {"('ScriptExecutor', 'execute_script', 83)": {'mod': [85]}, "('DirOrFileFullPaths', None, 150)": {'mod': [150]}, "('FaceSwapArgs', 'get_global_arguments', 265)": {'mod': [274, 275, 276, 277]}}}, {'p...
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "lib/cli.py", "lib/gui/utils.py", "lib/logger.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
3b1b
manim
b4b4d39ec51cbfce7fafdc5ff0f9f4ddfd26b181
https://github.com/3b1b/manim/issues/1436
bug
PNG images have a black background (no transparency)
### Description When trying do display a png image(with transparent background), it shows the background as black, didn't encouter the issue when trying with the cairo renderer. **Code**: ```python img = ImageMobject("./dice.png") self.play(FadeIn(img)) ``` ### Results <img width="626" alt="result" sr...
null
null
https://github.com/3b1b/manim/commit/b4b4d39ec51cbfce7fafdc5ff0f9f4ddfd26b181
{'base_commit': 'b4b4d39ec51cbfce7fafdc5ff0f9f4ddfd26b181', 'files': [{'path': 'manimlib/shaders/image/frag.glsl', 'status': 'modified', 'Loc': {'(None, None, 12)': {'mod': [12]}}}]}
[]
[]
[]
{ "iss_type": "2", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code" }
{ "code": [], "doc": [], "test": [], "config": [], "asset": [ "manimlib/shaders/image/frag.glsl" ] }
null
3b1b
manim
e1c049bece420bc1190eb3ed4d5d9878c431aa5e
https://github.com/3b1b/manim/issues/394
import readline is failing
I am trying to run examples_scenes.py and it threw a ModuleNotFoundError when it tried to import readline. This should be easy to resolve - just pip install readline right? Nope. readline apparently doesn't work on Windows, and I got this strange follow-up error below. I don't know what to do at this point. Help? ...
null
https://github.com/3b1b/manim/pull/672
null
{'base_commit': 'e1c049bece420bc1190eb3ed4d5d9878c431aa5e', 'files': [{'path': 'requirements.txt', 'status': 'modified', 'Loc': {'(None, None, 11)': {'add': [11]}}}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code" }
{ "code": [], "doc": [], "test": [], "config": [ "requirements.txt" ], "asset": [] }
null
All-Hands-AI
OpenHands
660d1d1e64c5e28e96bf9b8172cd87d1d809fd07
https://github.com/All-Hands-AI/OpenHands/issues/5876
bug severity:medium
[Bug]: "The model produces invalid content"
### Is there an existing issue for the same bug? - [X] I have checked the existing issues. ### Describe the bug and reproduction steps https://www.all-hands.dev/share?share_id=dab4a77e7d64e7a4dc6124dc672d3f4beb2d411a33155977425b821e292d4f4c The LLM is `gpt-4o` In the logs I got ```yaml {'error': {'message'...
null
https://github.com/All-Hands-AI/OpenHands/pull/7045
null
{'base_commit': '660d1d1e64c5e28e96bf9b8172cd87d1d809fd07', 'files': [{'path': 'openhands/llm/llm.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [78]}, "('LLM', 'wrapper', 180)": {'mod': [220, 221, 222]}}}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "openhands/llm/llm.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
scrapy
scrapy
7e8453cf1ec992e5df5cebfeda08552c58e7c9bc
https://github.com/scrapy/scrapy/issues/2656
sos filepipelines 302
hi when i setting file_urls "http://m.baidu.com/api?action=redirect&token=kpyysd&from=1014090y&type=app&dltype=new&refid=2650327114&tj=soft_5845028_88031597_%E8%AF%AD%E9%9F%B3%E6%90%9C%E7%B4%A2&refp=action_search&blink=da5b687474703a2f2f7265736765742e39312e636f6d2f536f66742f436f6e74726f6c6c65722e617368783f616374...
null
https://github.com/scrapy/scrapy/pull/2616
null
{'base_commit': '7e8453cf1ec992e5df5cebfeda08552c58e7c9bc', 'files': [{'path': 'docs/topics/media-pipeline.rst', 'status': 'modified', 'Loc': {'(None, None, 324)': {'add': [324]}}}, {'path': 'scrapy/pipelines/files.py', 'status': 'modified', 'Loc': {"('FilesPipeline', '__init__', 226)": {'mod': [252]}}}, {'path': 'scra...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "scrapy/pipelines/media.py", "scrapy/pipelines/files.py", "tests/mockserver.py" ], "doc": [ "docs/topics/media-pipeline.rst" ], "test": [ "tests/test_pipeline_media.py" ], "config": [], "asset": [] }
null
ansible
ansible
cc00f21a358923c03e334e245d58df0853d10661
https://github.com/ansible/ansible/issues/57069
networking module support:network nxos bug affects_2.7 cisco
nxos_vpc breaks using default vrf
##### SUMMARY When using pkl_vrf": "default" command is missing vrf value ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME Module: nxos_vpc ##### ANSIBLE VERSION ``` ansible 2.7.2 config file = /etc/ansible/ansible.cfg configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/shar...
null
https://github.com/ansible/ansible/pull/57370
null
{'base_commit': 'cc00f21a358923c03e334e245d58df0853d10661', 'files': [{'path': 'lib/ansible/modules/network/nxos/nxos_vpc.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [60, 63, 277]}, "(None, 'main', 317)": {'add': [396], 'mod': [392]}, "(None, 'get_vpc', 222)": {'mod': [265, 266, 267, 268, 269, 270,...
[]
[]
[]
{ "iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "lib/ansible/modules/network/nxos/nxos_vpc.py" ], "doc": [], "test": [ "test/units/modules/network/nxos/test_nxos_vpc.py" ], "config": [], "asset": [] }
null
ansible
ansible
44b53141748d29220441e0799b54ea3130ac6753
https://github.com/ansible/ansible/issues/78076
support:core has_pr docs affects_2.12
Minor change to the getting started diagram
### Summary I was looking through the new Ansible getting started guide and noticed one of the nodes in the diagram has a duplicate label. s/node 2/node 3 ### Issue Type Documentation Report ### Component Name https://github.com/ansible/ansible/blob/devel/docs/docsite/rst/images/ansible_basic.svg ### Ansible Vers...
null
https://github.com/ansible/ansible/pull/78077
null
{'base_commit': '44b53141748d29220441e0799b54ea3130ac6753', 'files': [{'path': 'docs/docsite/rst/images/ansible_basic.svg', 'status': 'modified', 'Loc': {'(None, None, 27)': {'mod': [27, 28, 29]}, '(None, None, 35)': {'mod': [35]}, '(None, None, 51)': {'mod': [51]}, '(None, None, 67)': {'mod': [67]}, '(None, None, 192)...
[]
[]
[]
{ "iss_type": "4", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Doc" }
{ "code": [], "doc": [ "docs/docsite/rst/images/ansible_basic.svg" ], "test": [], "config": [], "asset": [] }
null
ansible
ansible
0335d05f437eb59bcb77a58ef7819562f298ba79
https://github.com/ansible/ansible/issues/3730
ansible stacktrace
simple ansible facts now stack trace: ``` ansible -m setup -c local -i ~/hosts 127.0.0.1 ``` 127.0.0.1 | FAILED => Traceback (most recent call last): File "/home/bcoca/work/ansible/lib/ansible/runner/**init**.py", line 367, in _executor exec_rc = self._executor_internal(host, new_stdin) File "/home/bcoca/work...
null
null
https://github.com/ansible/ansible/commit/0335d05f437eb59bcb77a58ef7819562f298ba79
{'base_commit': '0335d05f437eb59bcb77a58ef7819562f298ba79', 'files': [{'path': 'lib/ansible/inventory/vars_plugins/group_vars.py', 'status': 'modified', 'Loc': {"('VarsModule', 'run', 38)": {'mod': [43]}}}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "lib/ansible/inventory/vars_plugins/group_vars.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
ansible
ansible
f841c2803a1e36bb6f392c466d36b669f9243464
https://github.com/ansible/ansible/issues/77073
module support:core feature P3 affects_2.13
Add support for deb822 apt sources with apt_repository
### Summary Debian has deprecated APT's original `sources.list` file format. As of Debian 11 (and Ubuntu 20.10), APT uses [the newer "DEB822" format](https://manpages.debian.org/unstable/apt/sources.list.5.en.html#DEB822-STYLE_FORMAT) by default. This format has been supported since APT 1.1, which goes back to Ubuntu ...
null
https://github.com/ansible/ansible/pull/80018
null
{'base_commit': 'f841c2803a1e36bb6f392c466d36b669f9243464', 'files': [{'path': 'test/integration/targets/setup_deb_repo/tasks/main.yml', 'status': 'modified', 'Loc': {'(None, None, 61)': {'add': [61]}}}]}
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code" }
{ "code": [], "doc": [], "test": [], "config": [ "test/integration/targets/setup_deb_repo/tasks/main.yml" ], "asset": [] }
null
ansible
ansible
6a71aef6c5b2f4c26d5f6522cd5b1a85cd78ee6b
https://github.com/ansible/ansible/issues/58126
networking python3 module support:network bug affects_2.8 ios cisco
ios_facts module not enumerating ansible_net_model in Ansible 2.8
<!--- Verify first that your issue is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY <!--- Explain the problem briefly below --> ios_facts module not ...
null
https://github.com/ansible/ansible/pull/58174
null
{'base_commit': '6a71aef6c5b2f4c26d5f6522cd5b1a85cd78ee6b', 'files': [{'path': 'lib/ansible/plugins/cliconf/ios.py', 'status': 'modified', 'Loc': {"('Cliconf', 'get_device_info', 199)": {'mod': [210, 211, 212]}}}]}
[]
[]
[]
{ "iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "lib/ansible/plugins/cliconf/ios.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
ansible
ansible
d1cd6ee56d492deef40f6f2f178832a1815730a5
https://github.com/ansible/ansible/issues/37734
cloud azure module affects_2.4 support:certified feature
Add network interface to Load Balancer Backend pool in azure_rm_networkinterface
##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME azure_rm_networkinterface ##### ANSIBLE VERSION ``` ansible --version ansible 2.4.3.0 config file = /etc/ansible/ansible.cfg configured module search path = [u'/home/dgermain/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansib...
null
https://github.com/ansible/ansible/pull/38643
null
{'base_commit': 'd1cd6ee56d492deef40f6f2f178832a1815730a5', 'files': [{'path': 'lib/ansible/module_utils/azure_rm_common.py', 'status': 'modified', 'Loc': {"('AzureRMModuleBase', None, 216)": {'add': [605]}, '(None, None, None)': {'mod': [131]}}}, {'path': 'lib/ansible/modules/cloud/azure/azure_rm_networkinterface.py',...
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code\nConfig\nTest" }
{ "code": [ "lib/ansible/modules/cloud/azure/azure_rm_networkinterface.py", "lib/ansible/module_utils/azure_rm_common.py" ], "doc": [], "test": [], "config": [ "test/integration/targets/azure_rm_networkinterface/tasks/main.yml" ], "asset": [] }
null
ansible
ansible
6f8c1da0c805f334b8598fd2556f7ed92dc9348e
https://github.com/ansible/ansible/issues/79277
bug traceback affects_2.13
ansible-test fails to report the proper error when validating ansible-doc
### Summary The utility ansible-test sanity is fantastic and does its job. Unfortunately, when validating the ansible-doc, if the YAML is malformed, you'll get a parsing error instead of the actual YAML error. ### Issue Type Bug Report ### Component Name ansible-test ### Ansible Version ```console $ ansible --ve...
null
https://github.com/ansible/ansible/pull/79682
null
{'base_commit': '6f8c1da0c805f334b8598fd2556f7ed92dc9348e', 'files': [{'path': 'test/integration/targets/ansible-test-sanity-validate-modules/runme.sh', 'status': 'modified', 'Loc': {'(None, None, 7)': {'mod': [7]}}}, {'path': 'test/lib/ansible_test/_util/controller/sanity/validate-modules/validate_modules/utils.py', '...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "test/lib/ansible_test/_util/controller/sanity/validate-modules/validate_modules/utils.py" ], "doc": [], "test": [], "config": [], "asset": [ "test/integration/targets/ansible-test-sanity-validate-modules/runme.sh" ] }
null
ansible
ansible
d97080174e9bbebd27a967368934ef91d1f28f64
https://github.com/ansible/ansible/issues/32070
networking affects_2.4 support:core nxos bug cisco
Occasional failures with NXOS modules
##### ISSUE TYPE - Bug Report ##### COMPONENT NAME nxos modules ##### ANSIBLE VERSION ansible 2.4.0.0 config file = /project-mcuk/ap/ipp/jrosser/cloud/ansible-mist/ansible.cfg configured module search path = [u'/etc/ansible/roles/plugins/library', u'/usr/local/lib/python2.7/dist-packages/ara/plugins/mod...
null
https://github.com/ansible/ansible/pull/32114
null
{'base_commit': 'd97080174e9bbebd27a967368934ef91d1f28f64', 'files': [{'path': 'lib/ansible/module_utils/nxos.py', 'status': 'modified', 'Loc': {"('Cli', 'run_commands', 139)": {'add': [171]}, '(None, None, None)': {'mod': [37]}}}, {'path': 'lib/ansible/modules/network/nxos/nxos_interface.py', 'status': 'modified', 'Lo...
[]
[]
[]
{ "iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": "Code" }
{ "code": [ "lib/ansible/modules/network/nxos/nxos_interface.py", "lib/ansible/module_utils/nxos.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
ansible
ansible
cc7a5228b02344658dac69c38ccb7d6580d2b4c6
https://github.com/ansible/ansible/issues/34012
module affects_2.4 net_tools support:community bug
nmcli module fails with self.dns4=' '.join(module.params['dns4']) TypeError
##### ISSUE TYPE <!--- Pick one below and delete the rest --> - Bug Report ##### COMPONENT NAME `nmcli` ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes below --> ``` ansible 2.4.1.0 config file = /Users/dlbewley/src/ansible/playbook-openshift/ansible.cfg ...
null
https://github.com/ansible/ansible/pull/30757
null
{'base_commit': 'cc7a5228b02344658dac69c38ccb7d6580d2b4c6', 'files': [{'path': 'lib/ansible/modules/net_tools/nmcli.py', 'status': 'modified', 'Loc': {"('Nmcli', '__init__', 549)": {'mod': [559]}}}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "lib/ansible/modules/net_tools/nmcli.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
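The nmcli traceback in the record above comes from `str.join()`, which requires an iterable of strings: `' '.join(module.params['dns4'])` fails when the option arrives as `None` or with non-string items. A minimal sketch of a defensive normalization; `join_dns` is a hypothetical helper for illustration, not nmcli's own code:

```python
# Sketch of the failure mode and a defensive fix: str.join() raises
# TypeError for None or non-string items, so normalize before joining.

def join_dns(dns):
    """Join DNS entries into the space-separated form nmcli expects."""
    if dns is None:
        return None
    if isinstance(dns, str):           # already a single address string
        return dns
    return ' '.join(str(entry) for entry in dns)

print(join_dns(['8.8.8.8', '8.8.4.4']))  # 8.8.8.8 8.8.4.4
print(join_dns(None))                    # None
```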
ultralytics
yolov5
5f7d39fede4de8af98472bd009c63c3a86568e2d
https://github.com/ultralytics/yolov5/issues/2840
bug
wandb: Network error (ReadTimeout), entering retry loop. See wandb\debug-internal.log for full traceback.
- **Current repo**: yolov5-5.0 release version - **Common dataset**: VisDrone.yaml - **Common environment**: Colab, Google Cloud, or Docker image. See https://github.com/ultralytics/yolov5#environments ## 🐛 Bug I try to use your rep to train yolov4's NET because yolov4(https://github.com/WongKinYiu/PyTorc...
null
https://github.com/ultralytics/yolov5/pull/2882
null
{'base_commit': '5f7d39fede4de8af98472bd009c63c3a86568e2d', 'files': [{'path': 'data/argoverse_hd.yaml', 'status': 'modified', 'Loc': {'(None, None, 3)': {'mod': [3]}}}, {'path': 'data/coco.yaml', 'status': 'modified', 'Loc': {'(None, None, 3)': {'mod': [3]}}}, {'path': 'data/coco128.yaml', 'status': 'modified', 'Loc':...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "utils/general.py" ], "doc": [], "test": [], "config": [ "data/argoverse_hd.yaml", "data/voc.yaml", "data/coco.yaml", "data/coco128.yaml" ], "asset": [ "data/scripts/get_argoverse_hd.sh", "data/scripts/get_voc.sh", "data/scripts/get_coco.sh" ] }
null
ultralytics
yolov5
cbd55da5d24becbe3b94afaaa4cdd1187a512c3f
https://github.com/ultralytics/yolov5/issues/2824
bug
Sizes of tensors must match
Multi Threaded Inference is not working with Yolo5. It throws the following error, ``` File "/home/zumbala/anaconda3/envs/environment/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/home/zumbala/yolov5/models/yolo.py", line 1...
null
null
https://github.com/ultralytics/yolov5/commit/cbd55da5d24becbe3b94afaaa4cdd1187a512c3f
{'base_commit': 'cbd55da5d24becbe3b94afaaa4cdd1187a512c3f', 'files': [{'path': 'models/yolo.py', 'status': 'modified', 'Loc': {"('Detect', 'forward', 38)": {'mod': [52]}}}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "models/yolo.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
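The one-line change in `Detect.forward` referenced above makes the cached grid get rebuilt whenever the incoming feature-map shape differs from the cached one, not only when it is first created: with several threads feeding different input sizes through the same module, a stale cache is exactly what produces "Sizes of tensors must match". A dependency-free sketch of that guard pattern (`GridCache` is illustrative, not the YOLOv5 class):

```python
# Sketch of the guard pattern behind the fix: rebuild a cached,
# shape-dependent buffer whenever the input shape changes, instead of
# only on first use.

class GridCache:
    def __init__(self):
        self._grid = None
        self._shape = None

    def grid_for(self, ny, nx):
        if self._shape != (ny, nx):   # shape changed (or first call): rebuild
            self._grid = [(x, y) for y in range(ny) for x in range(nx)]
            self._shape = (ny, nx)
        return self._grid

cache = GridCache()
small = cache.grid_for(2, 2)
large = cache.grid_for(3, 2)          # new shape, so a fresh grid is built
print(len(small), len(large))         # 4 6
```

Note that even with this guard, a single shared cache is still racy under true concurrency; the sketch only shows the shape check the commit adds.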
ultralytics
yolov5
d9b64c27c24db2001535bb480959aca015159510
https://github.com/ultralytics/yolov5/issues/119
question Stale
The yolov5m model grew from 42M to 84M; what change was made?
When I trained yolov5m on 6.16, the resulting model size was 42M, but today (6.18), training with the latest code, the model size is 84M. What change was made?
null
null
https://github.com/ultralytics/yolov5/commit/d9b64c27c24db2001535bb480959aca015159510
{'base_commit': 'd9b64c27c24db2001535bb480959aca015159510', 'files': [{'path': 'train.py', 'status': 'modified', 'Loc': {"(None, 'train', 60)": {'mod': [335]}}}]}
[]
[]
[]
{ "iss_type": "2", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "train.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
ultralytics
yolov5
bfd51f62f8e0a114cb94c269e83ff135e31d8bdb
https://github.com/ultralytics/yolov5/issues/187
bug
can't test with my finetune weights
i train a model in my custom data, can get the weights (**last.pt** and **best.pt**) i run: `python test.py --img 640 --batch 16 --data ./data/patrol.yaml --weights weights/last.pt --device 4` `python test.py --img 640 --batch 16 --data ./data/patrol.yaml --weights weights/best.pt --device 4` both raise the error: ...
null
https://github.com/ultralytics/yolov5/pull/245
null
{'base_commit': 'bfd51f62f8e0a114cb94c269e83ff135e31d8bdb', 'files': [{'path': 'train.py', 'status': 'modified', 'Loc': {"(None, 'train', 62)": {'add': [135, 136, 174], 'mod': [82, 291]}, '(None, None, None)': {'mod': [375]}}}, {'path': 'utils/torch_utils.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add':...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "train.py", "utils/utils.py", "utils/torch_utils.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
CorentinJ
Real-Time-Voice-Cloning
5425557efe30863267f805851f918124191e0be0
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/227
Short text samples
It would be awesome to be able to use this to help train a hot word detector. In addition to recording myself saying the hotword, I could create an even larger dataset by adding outputs of this model that used my voice as the reference. The problem with that, however, is that this model seems to only work well on se...
null
https://github.com/CorentinJ/Real-Time-Voice-Cloning/pull/472
null
{'base_commit': '5425557efe30863267f805851f918124191e0be0', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, 18)': {'mod': [18]}, '(None, None, 23)': {'mod': [23, 24]}, '(None, None, 65)': {'mod': [65, 66, 68, 70]}}}, {'path': 'demo_cli.py', 'status': 'modified', 'Loc': {'(None, None, None)':...
[]
[]
[]
{ "iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "synthesizer/models/modules.py", "synthesizer/models/tacotron.py", "synthesizer/train.py", "synthesizer/models/attention.py", "synthesizer_train.py", "demo_cli.py", "toolbox/__init__.py", "demo_toolbox.py", "synthesizer/models/architecture_wrappers.py", "synthesizer...
null
AUTOMATIC1111
stable-diffusion-webui
f108782e30369dedfc66f22d21c2b72c77941de7
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/5050
bug
[Bug]: img2img sampler is not changing
### Is there an existing issue for this? - [X] I have searched the existing issues and checked the recent builds/commits ### What happened? I'm trying to choose another sampler, but it is not working. I tried checking the p value, and found sampler_name = None There seems to be a code missing to assign the varia...
null
https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/4910
null
{'base_commit': 'f108782e30369dedfc66f22d21c2b72c77941de7', 'files': [{'path': 'scripts/xy_grid.py', 'status': 'modified', 'Loc': {"(None, 'confirm_samplers', 71)": {'add': [74]}, "('Script', 'process_axis', 276)": {'add': [279]}}}, {'path': 'img2img.py', 'Loc': {}}, {'path': 'Line 102: sampler_index=sd_samplers.sample...
[]
[]
[]
{ "iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "img2img.py", "scripts/xy_grid.py" ], "doc": [], "test": [], "config": [], "asset": [ "Line 102: sampler_index=sd_samplers.samplers_for_img2img[sampler_index].name" ] }
null
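The img2img record above describes an index-vs-name mismatch: the UI hands over a numeric sampler index while the processing object expects the sampler's name, so `sampler_name` ends up `None`. A minimal sketch of the conversion, where `SAMPLERS` stands in for `sd_samplers.samplers_for_img2img` and `resolve_sampler` is a hypothetical helper:

```python
# Sketch of mapping a UI dropdown index to the sampler name the pipeline
# expects. SAMPLERS is an illustrative stand-in, not the webui's real list.

SAMPLERS = ['Euler a', 'Euler', 'DDIM']

def resolve_sampler(sampler_index):
    """Return the sampler name for a UI dropdown index."""
    return SAMPLERS[sampler_index]

print(resolve_sampler(2))  # DDIM
```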