[{"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "559609fe98ec2145788133687e64a6e87766bc77", "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/25525", "iss_label": "Bug\nmodule:feature_extraction", "title": "Extend SequentialFeatureSelector example to demonstrate how to use negative tol", "body": "### Describe the bug\r\n\r\nI utilized the **SequentialFeatureSelector** for feature selection in my code, with the direction set to \"backward.\" The tolerance value is negative and the selection process stops when the decrease in the metric, AUC in this case, is less than the specified tolerance. Generally, increasing the number of features results in a higher AUC, but sacrificing some features, especially correlated ones that offer little contribution, can produce a pessimistic model with a lower AUC. The code worked as expected in **sklearn 1.1.1**, but when I updated to **sklearn 1.2.1**, I encountered the following error.\r\n\r\n### Steps/Code to Reproduce\r\n\r\n```python\r\nfrom sklearn.datasets import load_breast_cancer\r\nfrom sklearn.linear_model import LogisticRegression\r\nfrom sklearn.feature_selection import SequentialFeatureSelector\r\nfrom sklearn.preprocessing import StandardScaler\r\nfrom sklearn.pipeline import Pipeline\r\n\r\nX, y = load_breast_cancer(return_X_y=True)\r\n\r\nTOL = -0.001\r\nfeature_selector = SequentialFeatureSelector(\r\n LogisticRegression(max_iter=1000),\r\n n_features_to_select=\"auto\",\r\n direction=\"backward\",\r\n scoring=\"roc_auc\",\r\n tol=TOL\r\n )\r\n\r\n\r\npipe = Pipeline(\r\n [('scaler', StandardScaler()), \r\n ('feature_selector', feature_selector), \r\n ('log_reg', LogisticRegression(max_iter=1000))]\r\n )\r\n\r\n\r\n\r\nif __name__ == \"__main__\":\r\n pipe.fit(X, y)\r\n print(pipe['log_reg'].coef_[0])\r\n\r\n```\r\n\r\n### Expected Results\r\n\r\n```\r\n$ python sfs_tol.py \r\n[-2.0429818 0.5364346 -1.35765488 -2.85009904 -2.84603016]\r\n```\r\n\r\n### Actual Results\r\n\r\n```python-traceback\r\n$ python sfs_tol.py \r\nTraceback (most recent call last):\r\n File \"/home/modelling/users-workspace/nsofinij/lab/open-source/sfs_tol.py\", line 28, in \r\n pipe.fit(X, y)\r\n File \"/home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/sklearn/pipeline.py\", line 401, in fit\r\n Xt = self._fit(X, y, **fit_params_steps)\r\n File \"/home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/sklearn/pipeline.py\", line 359, in _fit\r\n X, fitted_transformer = fit_transform_one_cached(\r\n File \"/home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/joblib/memory.py\", line 349, in __call__\r\n return self.func(*args, **kwargs)\r\n File \"/home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/sklearn/pipeline.py\", line 893, in _fit_transform_one\r\n res = transformer.fit_transform(X, y, **fit_params)\r\n File \"/home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/sklearn/utils/_set_output.py\", line 142, in wrapped\r\n data_to_wrap = f(self, X, *args, **kwargs)\r\n File \"/home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/sklearn/base.py\", line 862, in fit_transform\r\n return self.fit(X, y, **fit_params).transform(X)\r\n File \"/home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/sklearn/feature_selection/_sequential.py\", line 201, in fit\r\n self._validate_params()\r\n File \"/home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/sklearn/base.py\", line 581, in 
_validate_params\r\n validate_parameter_constraints(\r\n File \"/home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/sklearn/utils/_param_validation.py\", line 97, in validate_parameter_constraints\r\n raise InvalidParameterError(\r\nsklearn.utils._param_validation.InvalidParameterError: The 'tol' parameter of SequentialFeatureSelector must be None or a float in the range (0, inf). Got -0.001 instead.\r\n\r\n```\r\n\r\n### Versions\r\n\r\n```shell\r\nSystem:\r\n python: 3.10.8 | packaged by conda-forge | (main, Nov 22 2022, 08:26:04) [GCC 10.4.0]\r\nexecutable: /home/modelling/opt/anaconda3/envs/py310/bin/python\r\n machine: Linux-4.14.301-224.520.amzn2.x86_64-x86_64-with-glibc2.26\r\n\r\nPython dependencies:\r\n sklearn: 1.2.1\r\n pip: 23.0\r\n setuptools: 66.1.1\r\n numpy: 1.24.1\r\n scipy: 1.10.0\r\n Cython: None\r\n pandas: 1.5.3\r\n matplotlib: 3.6.3\r\n joblib: 1.2.0\r\nthreadpoolctl: 3.1.0\r\n\r\nBuilt with OpenMP: True\r\n\r\nthreadpoolctl info:\r\n user_api: openmp\r\n internal_api: openmp\r\n prefix: libgomp\r\n filepath: /home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/scikit_learn.libs/libgomp-a34b3233.so.1.0.0\r\n version: None\r\n num_threads: 64\r\n\r\n user_api: blas\r\n internal_api: openblas\r\n prefix: libopenblas\r\n filepath: /home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/numpy.libs/libopenblas64_p-r0-15028c96.3.21.so\r\n version: 0.3.21\r\nthreading_layer: pthreads\r\n architecture: SkylakeX\r\n num_threads: 64\r\n\r\n user_api: blas\r\n internal_api: openblas\r\n prefix: libopenblas\r\n filepath: /home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/scipy.libs/libopenblasp-r0-41284840.3.18.so\r\n version: 0.3.18\r\nthreading_layer: pthreads\r\n architecture: SkylakeX\r\n num_threads: 64\r\n```\r\n", "code": null, "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/26205", "commit_html_url": null, "file_loc": {"base_commit": "559609fe98ec2145788133687e64a6e87766bc77", "files": [{"path": "examples/feature_selection/plot_select_from_model_diabetes.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [145], "mod": [123, 124, 125]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["examples/feature_selection/plot_select_from_model_diabetes.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "pallets", "repo_name": "flask", "base_commit": "cb94f4c5d3d4e1797207fd03d20d06c7bc0d05b4", "iss_has_pr": 1, "iss_html_url": "https://github.com/pallets/flask/issues/2264", "iss_label": "cli", "title": "Handle app factory in FLASK_APP", "body": "`FLASK_APP=myproject.app:create_app('dev')`\r\n\r\n[Gunicorn does this with `eval`](https://github.com/benoitc/gunicorn/blob/fbd151e9841e2c87a18512d71475bcff863a5171/gunicorn/util.py#L364), which I'm not super happy with. Instead, we could use `literal_eval` to allow a simple list of arguments. 
The line should never be so complicated that `eval` would be necessary anyway.\r\n\r\n~~~python\r\n# might need to fix this regex\r\nm = re.search(r'(\\w+)(\\(.*\\))', app_obj)\r\n\r\nif m:\r\n app = getattr(mod, m.group(1))(*literal_eval(m.group(2)))\r\n~~~", "pr_html_url": "https://github.com/pallets/flask/pull/2326", "file_loc": {"base_commit": "cb94f4c5d3d4e1797207fd03d20d06c7bc0d05b4", "files": [{"path": "flask/cli.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [11, 12]}, "(None, 'find_best_app', 32)": {"mod": [58, 62, 69, 71]}, "(None, 'call_factory', 82)": {"mod": [82, 83, 84, 85, 86, 88, 89, 90, 91, 92, 93]}, "(None, 'locate_app', 125)": {"mod": [151, 153, 154, 155, 156, 158]}}}, {"path": "tests/test_cli.py", "status": "modified", "Loc": {"(None, 'test_locate_app', 148)": {"add": [152], "mod": [154, 155, 156, 157, 158, 159, 160, 161]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["flask/cli.py"], "doc": [], "test": ["tests/test_cli.py"], "config": [], "asset": []}}, {"organization": "localstack", "repo_name": "localstack", "base_commit": "737ca72b7bce6e377dd6876eacee63338fa8c30c", "iss_has_pr": 1, "iss_html_url": "https://github.com/localstack/localstack/issues/894", "iss_label": "", "title": "ERROR:localstack.services.generic_proxy: Error forwarding request:", "body": "Starting local dev environment. CTRL-C to quit.\r\nStarting mock API Gateway (http port 4567)...\r\nStarting mock DynamoDB (http port 4569)...\r\nStarting mock SES (http port 4579)...\r\nStarting mock Kinesis (http port 4568)...\r\nStarting mock Redshift (http port 4577)...\r\nStarting mock S3 (http port 4572)...\r\nStarting mock CloudWatch (http port 4582)...\r\nStarting mock CloudFormation (http port 4581)...\r\nStarting mock SSM (http port 4583)...\r\nStarting mock SQS (http port 4576)...\r\nStarting local Elasticsearch (http port 4571)...\r\nStarting mock SNS (http port 4575)...\r\nStarting mock DynamoDB Streams service (http port 4570)...\r\nStarting mock Firehose service (http port 4573)...\r\nStarting mock Route53 (http port 4580)...\r\nStarting mock ES service (http port 4578)...\r\nStarting mock Lambda service (http port 4574)...\r\n2018-08-11T13:33:08:ERROR:localstack.services.generic_proxy: Error forwarding request: HTTPConnectionPool(host='127.0.0.1', port=4564): Max retries exceeded with url: / (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',)) Traceback (most recent call last):\r\n File \"/home/maruf/.local/lib/python2.7/site-packages/localstack/services/generic_proxy.py\", line 201, in forward\r\n headers=forward_headers)\r\n File \"/home/maruf/.local/lib/python2.7/site-packages/requests/api.py\", line 112, in post\r\n return request('post', url, data=data, json=json, **kwargs)\r\n File \"/home/maruf/.local/lib/python2.7/site-packages/requests/api.py\", line 58, in request\r\n return session.request(method=method, url=url, **kwargs)\r\n File \"/home/maruf/.local/lib/python2.7/site-packages/requests/sessions.py\", line 508, in request\r\n resp = self.send(prep, **send_kwargs)\r\n File \"/home/maruf/.local/lib/python2.7/site-packages/requests/sessions.py\", line 618, in send\r\n r = adapter.send(request, **kwargs)\r\n File \"/home/maruf/.local/lib/python2.7/site-packages/requests/adapters.py\", line 508, in send\r\n raise ConnectionError(e, 
request=request)\r\nConnectionError: HTTPConnectionPool(host='127.0.0.1', port=4564): Max retries exceeded with url: / (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',))\r\n\r\n2018-08-11T13:34:08:ERROR:localstack.services.generic_proxy: Error forwarding request: HTTPConnectionPool(host='127.0.0.1', port=4564): Max retries exceeded with url: / (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',)) Traceback (most recent call last):\r\n File \"/home/maruf/.local/lib/python2.7/site-packages/localstack/services/generic_proxy.py\", line 201, in forward\r\n headers=forward_headers)\r\n File \"/home/maruf/.local/lib/python2.7/site-packages/requests/api.py\", line 112, in post\r\n return request('post', url, data=data, json=json, **kwargs)\r\n File \"/home/maruf/.local/lib/python2.7/site-packages/requests/api.py\", line 58, in request\r\n return session.request(method=method, url=url, **kwargs)\r\n File \"/home/maruf/.local/lib/python2.7/site-packages/requests/sessions.py\", line 508, in request\r\n resp = self.send(prep, **send_kwargs)\r\n File \"/home/maruf/.local/lib/python2.7/site-packages/requests/sessions.py\", line 618, in send\r\n r = adapter.send(request, **kwargs)\r\n File \"/home/maruf/.local/lib/python2.7/site-packages/requests/adapters.py\", line 508, in send\r\n raise ConnectionError(e, request=request)\r\nConnectionError: HTTPConnectionPool(host='127.0.0.1', port=4564): Max retries exceeded with url: / (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',))\r\n\r\n2018-08-11T13:35:09:ERROR:localstack.services.generic_proxy: Error forwarding request: HTTPConnectionPool(host='127.0.0.1', port=4564): Max retries exceeded with url: / (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',)) Traceback (most recent call last):\r\n File \"/home/maruf/.local/lib/python2.7/site-packages/localstack/services/generic_proxy.py\", line 201, in forward\r\n headers=forward_headers)\r\n File \"/home/maruf/.local/lib/python2.7/site-packages/requests/api.py\", line 112, in post\r\n return request('post', url, data=data, json=json, **kwargs)\r\n File \"/home/maruf/.local/lib/python2.7/site-packages/requests/api.py\", line 58, in request\r\n return session.request(method=method, url=url, **kwargs)\r\n File \"/home/maruf/.local/lib/python2.7/site-packages/requests/sessions.py\", line 508, in request\r\n resp = self.send(prep, **send_kwargs)\r\n File \"/home/maruf/.local/lib/python2.7/site-packages/requests/sessions.py\", line 618, in send\r\n r = adapter.send(request, **kwargs)\r\n File \"/home/maruf/.local/lib/python2.7/site-packages/requests/adapters.py\", line 508, in send\r\n raise ConnectionError(e, request=request)\r\nConnectionError: HTTPConnectionPool(host='127.0.0.1', port=4564): Max retries exceeded with url: / (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',))\r\n\r\n2018-08-11T13:36:09:ERROR:localstack.services.generic_proxy: Error forwarding request: HTTPConnectionPool(host='127.0.0.1', port=4564): Max retries exceeded with url: / (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',)) Traceback (most recent call last):\r\n File \"/home/maruf/.local/lib/python2.7/site-packages/localstack/services/generic_proxy.py\", line 201, in forward\r\n headers=forward_headers)\r\n File 
\"/home/maruf/.local/lib/python2.7/site-packages/requests/api.py\", line 112, in post\r\n return request('post', url, data=data, json=json, **kwargs)\r\n File \"/home/maruf/.local/lib/python2.7/site-packages/requests/api.py\", line 58, in request\r\n return session.request(method=method, url=url, **kwargs)\r\n File \"/home/maruf/.local/lib/python2.7/site-packages/requests/sessions.py\", line 508, in request\r\n resp = self.send(prep, **send_kwargs)\r\n File \"/home/maruf/.local/lib/python2.7/site-packages/requests/sessions.py\", line 618, in send\r\n r = adapter.send(request, **kwargs)\r\n File \"/home/maruf/.local/lib/python2.7/site-packages/requests/adapters.py\", line 508, in send\r\n raise ConnectionError(e, request=request)\r\nConnectionError: HTTPConnectionPool(host='127.0.0.1', port=4564): Max retries exceeded with url: / (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',))\r\n", "pr_html_url": "https://github.com/localstack/localstack/pull/1526", "file_loc": {"base_commit": "737ca72b7bce6e377dd6876eacee63338fa8c30c", "files": [{"path": "README.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [186]}}}, {"path": "localstack/config.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [14]}}}, {"path": "localstack/services/kinesis/kinesis_starter.py", "status": "modified", "Loc": {"(None, 'start_kinesis', 14)": {"add": [17], "mod": [14, 23, 24]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["localstack/config.py", "localstack/services/kinesis/kinesis_starter.py"], "doc": ["README.md"], "test": [], "config": [], "asset": []}}, {"organization": "huggingface", "repo_name": "transformers", "base_commit": "d2871b29754abd0f72cf42c299bb1c041519f7bc", "iss_has_pr": 1, "iss_html_url": "https://github.com/huggingface/transformers/issues/30", "iss_label": "", "title": "[Feature request] Add example of finetuning the pretrained models on custom corpus", "body": "", "pr_html_url": "https://github.com/huggingface/transformers/pull/25107", "file_loc": {"base_commit": "d2871b29754abd0f72cf42c299bb1c041519f7bc", "files": [{"path": "src/transformers/modeling_utils.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [75, 108]}, "('PreTrainedModel', 'from_pretrained', 1959)": {"add": [2227]}, "(None, 'load_state_dict', 442)": {"mod": [461]}, "('PreTrainedModel', '_load_pretrained_model', 3095)": {"mod": [3183, 3388, 3389, 3390, 3391, 3392, 3393, 3394, 3395, 3396, 3397, 3398, 3399, 3400, 3401, 3402, 3403, 3404]}}}, {"path": "src/transformers/trainer.py", "status": "modified", "Loc": {"('Trainer', '__init__', 313)": {"mod": [468, 469, 470]}, "('Trainer', '_wrap_model', 1316)": {"mod": [1382, 1385, 1387]}, "('Trainer', 'train', 1453)": {"mod": [1520]}, "('Trainer', '_inner_training_loop', 1552)": {"mod": [1654]}, "('Trainer', 'create_accelerator_and_postprocess', 3866)": {"mod": [3889]}}}, {"path": "src/transformers/training_args.py", "status": "modified", "Loc": {"('TrainingArguments', None, 158)": {"add": [464], "mod": [439, 442, 445, 457]}, "('TrainingArguments', '__post_init__', 1221)": {"add": [1522, 1524, 1585], "mod": [1529, 1530, 1531, 1533, 1534, 1535, 1536, 1537, 1543, 1544, 1547, 1548, 1550, 1551, 1555, 1556, 1558, 1559, 1560, 1589, 1591, 1593, 1594, 1595, 1596, 1597, 1598, 1599, 1602]}}}]}, "own_code_loc": [], "ass_file_loc": [], 
"other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["src/transformers/trainer.py", "src/transformers/modeling_utils.py", "src/transformers/training_args.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "51a70dcb7133bc7cb8e6bea5da39a2cf58fa8319", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/11080", "iss_label": "Indexing\nPerformance", "title": "PERF: checking is_monotonic_increasing/decreasing before sorting on an index", "body": "We don't keep the sortedness state in an index per-se, but it is rather cheap to check\n- `is_monotonic_increasing` or `is_monotonic_decreasing` on a reg-index \n- MultiIndex should check `is_lexsorted` (this might be done already)\n\n```\nIn [8]: df = DataFrame(np.random.randn(1000000,2),columns=list('AB'))\n\nIn [9]: %timeit df.sort_index()\n10 loops, best of 3: 37.1 ms per loop\n\nIn [10]: %timeit -n 1 -r 1 df.index.is_monotonic_increasing\n1 loops, best of 1: 2.01 ms per loop\n\nIn [11]: %timeit -n 1 -r 1 df.index.is_monotonic_increasin^C\nKeyboardInterrupt\n\nIn [11]: %timeit df.set_index('A').sort_index()\n10 loops, best of 3: 175 ms per loop\n\nIn [12]: %timeit -n 1 -r 1 df.set_index('A').index.is_monotonic_increasing\n1 loops, best of 1: 9.54 ms per loop\n```\n", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/11294", "file_loc": {"base_commit": "51a70dcb7133bc7cb8e6bea5da39a2cf58fa8319", "files": [{"path": "asv_bench/benchmarks/frame_methods.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [932]}}}, {"path": "doc/source/whatsnew/v0.17.1.txt", "status": "modified", "Loc": {"(None, None, None)": {"add": [54]}}}, {"path": "pandas/core/frame.py", "status": "modified", "Loc": {"('DataFrame', 'sort_index', 3126)": {"add": [3159]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["pandas/core/frame.py", "asv_bench/benchmarks/frame_methods.py"], "doc": ["doc/source/whatsnew/v0.17.1.txt"], "test": [], "config": [], "asset": []}}, {"organization": "zylon-ai", "repo_name": "private-gpt", "base_commit": "fdb45741e521d606b028984dbc2f6ac57755bb88", "iss_has_pr": 1, "iss_html_url": "https://github.com/zylon-ai/private-gpt/issues/10", "iss_label": "", "title": "Suggestions for speeding up ingestion?", "body": "I presume I must be doing something wrong, as it is taking hours to ingest a 500kbyte text on an i9-12900 with 128GB. In fact it's not even done yet. 
Using models as recommended.\r\n\r\nHelp?\r\n\r\nThanks\r\n\r\nSome output:\r\n\r\nllama_print_timings: load time = 674.34 ms\r\nllama_print_timings: sample time = 0.00 ms / 1 runs ( 0.00 ms per run)\r\nllama_print_timings: prompt eval time = 12526.78 ms / 152 tokens ( 82.41 ms per token)\r\nllama_print_timings: eval time = 157.46 ms / 1 runs ( 157.46 ms per run)\r\nllama_print_timings: total time = 12715.48 ms", "pr_html_url": "https://github.com/zylon-ai/private-gpt/pull/224", "file_loc": {"base_commit": "fdb45741e521d606b028984dbc2f6ac57755bb88", "files": [{"path": "README.md", "status": "modified", "Loc": {"(None, None, None)": {"mod": [4, 15, 17, 23, 25, 28, 58, 62, 86]}}}, {"path": "example.env", "status": "modified", "Loc": {"(None, None, None)": {"add": [4], "mod": [2]}}}, {"path": "ingest.py", "status": "modified", "Loc": {"(None, 'main', 71)": {"add": [79], "mod": [75, 76, 81, 84, 87, 90]}, "(None, None, None)": {"mod": [22]}}}, {"path": "privateGPT.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [3, 11]}, "(None, 'main', 20)": {"mod": [21, 22]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["ingest.py", "privateGPT.py"], "doc": ["README.md"], "test": [], "config": ["example.env"], "asset": []}}, {"organization": "huggingface", "repo_name": "transformers", "base_commit": "9fef668338b15e508bac99598dd139546fece00b", "iss_has_pr": 1, "iss_html_url": "https://github.com/huggingface/transformers/issues/9", "iss_label": "", "title": "Crash at the end of training", "body": "Hi, I tried running the Squad model this morning (on a single GPU with gradient accumulation over 3 steps) but after 3 hours of training, my job failed with the following output:\r\n\r\nI was running the code, unmodified, from commit 3bfbc21376af691b912f3b6256bbeaf8e0046ba8\r\n\r\nIs this an issue you know about?\r\n```\r\n11/08/2018 17:50:03 - INFO - __main__ - device cuda n_gpu 1 distributed training False\r\n11/08/2018 17:50:18 - INFO - __main__ - *** Example ***\r\n11/08/2018 17:50:18 - INFO - __main__ - unique_id: 1000000000\r\n11/08/2018 17:50:18 - INFO - __main__ - example_index: 0\r\n11/08/2018 17:50:18 - INFO - __main__ - doc_span_index: 0\r\n11/08/2018 17:50:18 - INFO - __main__ - tokens: [CLS] to whom did the virgin mary allegedly appear in 1858 in lou ##rdes france ? [SEP] architectural ##ly , the school has a catholic character . atop the main building ' s gold dome is a golden statue of the virgin mary . immediately in front of the main building and facing it , is a copper statue of christ with arms up ##rai ##sed with the legend \" ve ##ni ##te ad me om ##nes \" . next to the main building is the basilica of the sacred heart . immediately behind the basilica is the gr ##otto , a marian place of prayer and reflection . it is a replica of the gr ##otto at lou ##rdes , france where the virgin mary reputed ##ly appeared to saint bern ##ade ##tte so ##ub ##iro ##us in 1858 . at the end of the main drive ( and in a direct line that connects through 3 statues and the gold dome ) , is a simple , modern stone statue of mary . 
[SEP]\r\n11/08/2018 17:50:18 - INFO - __main__ - token_to_orig_map: 17:0 18:0 19:0 20:1 21:2 22:3 23:4 24:5 25:6 26:6 27:7 28:8 29:9 30:10 31:10 32:10 33:11 34:12 35:13 36:14 37:15 38:16 39:17 40:18 41:19 42:20 43:20 44:21 45:22 46:23 47:24 48:25 49:26 50:27 51:28 52:29 53:30 54:30 55:31 56:32 57:33 58:34 59:35 60:36 61:37 62:38 63:39 64:39 65:39 66:40 67:41 68:42 69:43 70:43 71:43 72:43 73:44 74:45 75:46 76:46 77:46 78:46 79:47 80:48 81:49 82:50 83:51 84:52 85:53 86:54 87:55 88:56 89:57 90:58 91:58 92:59 93:60 94:61 95:62 96:63 97:64 98:65 99:65 100:65 101:66 102:67 103:68 104:69 105:70 106:71 107:72 108:72 109:73 110:74 111:75 112:76 113:77 114:78 115:79 116:79 117:80 118:81 119:81 120:81 121:82 122:83 123:84 124:85 125:86 126:87 127:87 128:88 129:89 130:90 131:91 132:91 133:91 134:92 135:92 136:92 137:92 138:93 139:94 140:94 141:95 142:96 143:97 144:98 145:99 146:100 147:101 148:102 149:102 150:103 151:104 152:105 153:106 154:107 155:108 156:109 157:110 158:111 159:112 160:113 161:114 162:115 163:115 164:115 165:116 166:117 167:118 168:118 169:119 170:120 171:121 172:122 173:123 174:123\r\n11/08/2018 17:50:18 - INFO - __main__ - token_is_max_context: 17:True 18:True 19:True 20:True 21:True 22:True 23:True 24:True 25:True 26:True 27:True 28:True 29:True 30:True 31:True 32:True 33:True 34:True 35:True 36:True 37:True 38:True 39:True 40:True 41:True 42:True 43:True 44:True 45:True 46:True 47:True 48:True 49:True 50:True 51:True 52:True 53:True 54:True 55:True 56:True 57:True 58:True 59:True 60:True 61:True 62:True 63:True 64:True 65:True 66:True 67:True 68:True 69:True 70:True 71:True 72:True 73:True 74:True 75:True 76:True 77:True 78:True 79:True 80:True 81:True 82:True 83:True 84:True 85:True 86:True 87:True 88:True 89:True 90:True 91:True 92:True 93:True 94:True 95:True 96:True 97:True 98:True 99:True 100:True 101:True 102:True 103:True 104:True 105:True 106:True 107:True 108:True 109:True 110:True 111:True 112:True 113:True 114:True 115:True 116:True 117:True 118:True 119:True 120:True 121:True 122:True 123:True 124:True 125:True 126:True 127:True 128:True 129:True 130:True 131:True 132:True 133:True 134:True 135:True 136:True 137:True 138:True 139:True 140:True 141:True 142:True 143:True 144:True 145:True 146:True 147:True 148:True 149:True 150:True 151:True 152:True 153:True 154:True 155:True 156:True 157:True 158:True 159:True 160:True 161:True 162:True 163:True 164:True 165:True 166:True 167:True 168:True 169:True 170:True 171:True 172:True 173:True 174:True\r\n11/08/2018 17:50:18 - INFO - __main__ - input_ids: 101 2000 3183 2106 1996 6261 2984 9382 3711 1999 8517 1999 10223 26371 2605 1029 102 6549 2135 1010 1996 2082 2038 1037 3234 2839 1012 10234 1996 2364 2311 1005 1055 2751 8514 2003 1037 3585 6231 1997 1996 6261 2984 1012 3202 1999 2392 1997 1996 2364 2311 1998 5307 2009 1010 2003 1037 6967 6231 1997 4828 2007 2608 2039 14995 6924 2007 1996 5722 1000 2310 3490 2618 4748 2033 18168 5267 1000 1012 2279 2000 1996 2364 2311 2003 1996 13546 1997 1996 6730 2540 1012 3202 2369 1996 13546 2003 1996 24665 23052 1010 1037 14042 2173 1997 7083 1998 9185 1012 2009 2003 1037 15059 1997 1996 24665 23052 2012 10223 26371 1010 2605 2073 1996 6261 2984 22353 2135 2596 2000 3002 16595 9648 4674 2061 12083 9711 2271 1999 8517 1012 2012 1996 2203 1997 1996 2364 3298 1006 1998 1999 1037 3622 2240 2008 8539 2083 1017 11342 1998 1996 2751 8514 1007 1010 2003 1037 3722 1010 2715 2962 6231 1997 2984 1012 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\r\n11/08/2018 17:50:18 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\r\n\r\n... [truncated] ...\r\n\r\nIteration: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2589| 29314/29324 [3:27:55<00:04, 2.36it/s]\u001b[A\r\n\r\nIteration: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2589| 29315/29324 [3:27:55<00:03, 2.44it/s]\u001b[A\r\n\r\nIteration: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2589| 29316/29324 [3:27:56<00:03, 2.26it/s]\u001b[A\r\n\r\nIteration: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2589| 29317/29324 [3:27:56<00:02, 2.35it/s]\u001b[A\r\n\r\nIteration: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2589| 29318/29324 [3:27:56<00:02, 2.44it/s]\u001b[A\r\n\r\nIteration: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2589| 29319/29324 [3:27:57<00:02, 2.25it/s]\u001b[A\r\n\r\nIteration: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2589| 29320/29324 [3:27:57<00:01, 2.35it/s]\u001b[A\r\n\r\nIteration: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2589| 29321/29324 [3:27:58<00:01, 2.41it/s]\u001b[A\r\n\r\nIteration: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2589| 29322/29324 [3:27:58<00:00, 2.25it/s]\u001b[A\r\n\r\nIteration: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2589| 29323/29324 [3:27:59<00:00, 2.36it/s]\u001b[ATraceback (most recent call last):\r\n File \"code/run_squad.py\", line 929, in \r\n main()\r\n File \"code/run_squad.py\", line 862, in main\r\n loss = model(input_ids, segment_ids, input_mask, start_positions, end_positions)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py\", line 477, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/0x0d4ff90d01fa4168983197b17d73bb0c_dependencies/code/modeling.py\", line 467, in forward\r\n start_loss = loss_fct(start_logits, start_positions)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py\", line 477, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/modules/loss.py\", line 862, in forward\r\n ignore_index=self.ignore_index, reduction=self.reduction)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py\", line 1550, in cross_entropy\r\n return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)\r\n File 
\"/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py\", line 1403, in nll_loss\r\n if input.size(0) != target.size(0):\r\nRuntimeError: dimension specified as 0 but tensor has no dimensions\r\n\r\nException ignored in: \r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.6/dist-packages/tqdm/_tqdm.py\", line 931, in __del__\r\n self.close()\r\n File \"/usr/local/lib/python3.6/dist-packages/tqdm/_tqdm.py\", line 1133, in close\r\n self._decr_instances(self)\r\n File \"/usr/local/lib/python3.6/dist-packages/tqdm/_tqdm.py\", line 496, in _decr_instances\r\n cls.monitor.exit()\r\n File \"/usr/local/lib/python3.6/dist-packages/tqdm/_monitor.py\", line 52, in exit\r\n self.join()\r\n File \"/usr/lib/python3.6/threading.py\", line 1053, in join\r\n raise RuntimeError(\"cannot join current thread\")\r\nRuntimeError: cannot join current thread\r\n```", "pr_html_url": "https://github.com/huggingface/transformers/pull/16310", "file_loc": {"base_commit": "9fef668338b15e508bac99598dd139546fece00b", "files": [{"path": "tests/big_bird/test_modeling_big_bird.py", "status": "modified", "Loc": {"('BigBirdModelTester', '__init__', 47)": {"mod": [73]}, "('BigBirdModelTest', 'test_fast_integration', 561)": {"mod": [584]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": [], "doc": [], "test": ["tests/big_bird/test_modeling_big_bird.py"], "config": [], "asset": []}}, {"organization": "psf", "repo_name": "requests", "base_commit": "ccabcf1fca906bfa6b65a3189c1c41061e6c1042", "iss_has_pr": 1, "iss_html_url": "https://github.com/psf/requests/issues/3698", "iss_label": "", "title": "AttributeError: 'NoneType' object has no attribute 'read'", "body": "Hello :)\r\n\r\nAfter a recent upgrade for our [coala](https://github.com/coala/coala) project to `requests` 2.12.1 we encounter an exception in our test suites which seems to be caused by `requests`.\r\n\r\nBuild: https://ci.appveyor.com/project/coala/coala-bears/build/1.0.3537/job/1wm7b4u9yhgkxkgn\r\n\r\nRelevant part:\r\n```\r\n================================== FAILURES ===================================\r\n_________________ InvalidLinkBearTest.test_redirect_threshold _________________\r\nself = \r\n def test_redirect_threshold(self):\r\n \r\n long_url_redirect = \"\"\"\r\n https://bitbucket.org/api/301\r\n https://bitbucket.org/api/302\r\n \"\"\".splitlines()\r\n \r\n short_url_redirect = \"\"\"\r\n http://httpbin.org/status/301\r\n \"\"\".splitlines()\r\n \r\n self.assertResult(valid_file=long_url_redirect,\r\n invalid_file=short_url_redirect,\r\n> settings={'follow_redirects': 'yeah'})\r\ntests\\general\\InvalidLinkBearTest.py:157: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\ntests\\general\\InvalidLinkBearTest.py:75: in assertResult\r\n out = list(uut.run(\"valid\", valid_file, **settings))\r\nbears\\general\\InvalidLinkBear.py:80: in run\r\n file, timeout, link_ignore_regex):\r\nbears\\general\\InvalidLinkBear.py:53: in find_links_in_file\r\n code = InvalidLinkBear.get_status_code(link, timeout)\r\nbears\\general\\InvalidLinkBear.py:37: in get_status_code\r\n timeout=timeout).status_code\r\nC:\\Python34\\lib\\site-packages\\requests\\api.py:96: in head\r\n return request('head', url, **kwargs)\r\nC:\\Python34\\lib\\site-packages\\requests\\api.py:56: in request\r\n return session.request(method=method, url=url, 
**kwargs)\r\nC:\\Python34\\lib\\site-packages\\requests\\sessions.py:488: in request\r\n resp = self.send(prep, **send_kwargs)\r\nC:\\Python34\\lib\\site-packages\\requests_mock\\mocker.py:69: in _fake_send\r\n return self._real_send(session, request, **kwargs)\r\nC:\\Python34\\lib\\site-packages\\requests\\sessions.py:641: in send\r\n r.content\r\nC:\\Python34\\lib\\site-packages\\requests\\models.py:772: in content\r\n self._content = bytes().join(self.iter_content(CONTENT_CHUNK_SIZE)) or bytes()\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n def generate():\r\n # Special case for urllib3.\r\n if hasattr(self.raw, 'stream'):\r\n try:\r\n for chunk in self.raw.stream(chunk_size, decode_content=True):\r\n yield chunk\r\n except ProtocolError as e:\r\n raise ChunkedEncodingError(e)\r\n except DecodeError as e:\r\n raise ContentDecodingError(e)\r\n except ReadTimeoutError as e:\r\n raise ConnectionError(e)\r\n else:\r\n # Standard file-like object.\r\n while True:\r\n> chunk = self.raw.read(chunk_size)\r\nE AttributeError: 'NoneType' object has no attribute 'read'\r\nC:\\Python34\\lib\\site-packages\\requests\\models.py:705: AttributeError\r\n```\r\nhappens on Windows and Linux.\r\n\r\nThanks in advance :)", "pr_html_url": "https://github.com/psf/requests/pull/3718", "file_loc": {"base_commit": "ccabcf1fca906bfa6b65a3189c1c41061e6c1042", "files": [{"path": "requests/models.py", "status": "modified", "Loc": {"('Response', 'content', 763)": {"mod": [772]}}}, {"path": "tests/test_requests.py", "status": "modified", "Loc": {"('TestRequests', None, 55)": {"add": [1096]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["requests/models.py"], "doc": [], "test": ["tests/test_requests.py"], "config": [], "asset": []}}, {"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "fc805074be7b3b507bc1699e537f9b691c6f91b9", "iss_has_pr": 1, "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/674", "iss_label": "bug\ndocumentation", "title": "ModuleNotFoundError: No module named 'tkinter'", "body": "**Bug description**\r\nWhen running `gpt-engineer --improve` (using the recent version from PyPI), I get the following output:\r\n\r\n```\r\n$ gpt-engineer --improve\r\nTraceback (most recent call last):\r\n File \"/home/.../.local/bin/gpt-engineer\", line 5, in \r\n from gpt_engineer.main import app\r\n File \"/home/.../.local/pipx/venvs/gpt-engineer/lib/python3.10/site-packages/gpt_engineer/main.py\", line 12, in \r\n from gpt_engineer.collect import collect_learnings\r\n File \"/home/.../.local/pipx/venvs/gpt-engineer/lib/python3.10/site-packages/gpt_engineer/collect.py\", line 5, in \r\n from gpt_engineer import steps\r\n File \"/home/.../.local/pipx/venvs/gpt-engineer/lib/python3.10/site-packages/gpt_engineer/steps.py\", line 19, in \r\n from gpt_engineer.file_selector import FILE_LIST_NAME, ask_for_files\r\n File \"/home/.../.local/pipx/venvs/gpt-engineer/lib/python3.10/site-packages/gpt_engineer/file_selector.py\", line 4, in \r\n import tkinter as tk\r\nModuleNotFoundError: No module named 'tkinter'\r\n```\r\n\r\n\r\n**Expected behavior**\r\nNo error.\r\n\r\nIn https://github.com/AntonOsika/gpt-engineer/pull/465, no changes were made to the required packages, so tkinter might be added there. 
(Or made optional.)\r\n\r\nEDIT: The error happens always, regardless of the command line parameter.", "pr_html_url": "https://github.com/AntonOsika/gpt-engineer/pull/675", "file_loc": {"base_commit": "fc805074be7b3b507bc1699e537f9b691c6f91b9", "files": [{"path": "docs/installation.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [45]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": [], "doc": ["docs/installation.rst"], "test": [], "config": [], "asset": []}}, {"organization": "pallets", "repo_name": "flask", "base_commit": "85dce2c836fe03aefc07b7f4e0aec575e170f1cd", "iss_html_url": "https://github.com/pallets/flask/issues/593", "iss_label": "blueprints", "title": "Nestable blueprints", "body": "I'd like to be able to register \"sub-blueprints\" using `Blueprint.register_blueprint(*args, **kwargs)`. This would register the nested blueprints with an app when the \"parent\" is registered with it. All parameters are preserved, other than `url_prefix`, which is handled similarly to in `add_url_rule`. A na\u00efve implementation could look like this:\n\n``` python\nclass Blueprint(object):\n ...\n\n def register_blueprint(self, blueprint, **options):\n def deferred(state):\n url_prefix = options.get('url_prefix')\n if url_prefix is None:\n url_prefix = blueprint.url_prefix\n if 'url_prefix' in options:\n del options['url_prefix']\n\n state.app.register_blueprint(blueprint, url_prefix, **options)\n self.record(deferred)\n```\n", "code": null, "pr_html_url": "https://github.com/pallets/flask/pull/3923", "commit_html_url": null, "file_loc": {"base_commit": "85dce2c836fe03aefc07b7f4e0aec575e170f1cd", "files": [{"path": "CHANGES.rst", "status": "modified", "Loc": {"(None, None, 71)": {"add": [71]}}}, {"path": "docs/blueprints.rst", "status": "modified", "Loc": {"(None, None, 122)": {"add": [122]}}}, {"path": "src/flask/app.py", "status": "modified", "Loc": {"('Flask', '__call__', 1982)": {"add": [1987]}, "('Flask', 'update_template_context', 712)": {"mod": [726, 727, 728]}, "('Flask', 'register_blueprint', 971)": {"mod": [990, 992, 993, 994, 995, 996, 997, 998, 999, 1000, 1001, 1002, 1004]}, "('Flask', '_find_error_handler', 1230)": {"mod": [1238, 1239, 1240, 1241, 1242, 1243, 1244]}, "('Flask', 'preprocess_request', 1741)": {"mod": [1752, 1755, 1756, 1761, 1762]}, "('Flask', 'process_response', 1768)": {"mod": [1782, 1784, 1785]}, "('Flask', 'do_teardown_request', 1794)": {"mod": [1818, 1819, 1820]}}}, {"path": "src/flask/blueprints.py", "status": "modified", "Loc": {"('BlueprintSetupState', '__init__', 16)": {"add": [47]}, "('Blueprint', '__init__', 141)": {"add": [170]}, "('Blueprint', 'register', 213)": {"add": [225], "mod": [281, 282, 286, 287, 288, 289, 290, 291, 292, 293]}, "('BlueprintSetupState', 'add_url_rule', 53)": {"mod": [71]}, "('Blueprint', None, 78)": {"mod": [213]}}}, {"path": "tests/test_blueprints.py", "status": "modified", "Loc": {"(None, 'test_app_url_processors', 828)": {"add": [852]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["src/flask/blueprints.py", "src/flask/app.py"], "doc": ["docs/blueprints.rst", "CHANGES.rst"], "test": ["tests/test_blueprints.py"], "config": [], "asset": []}}, {"organization": "AUTOMATIC1111", "repo_name": "stable-diffusion-webui", 
"base_commit": "f92d61497a426a19818625c3ccdaae9beeb82b31", "iss_has_pr": 1, "iss_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/14263", "iss_label": "bug", "title": "[Bug]: KeyError: \"do_not_save\" when trying to save a prompt", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues and checked the recent builds/commits\n\n### What happened?\n\nWhen I try to save a prompt, it errors in the console saying\r\n```\r\n File \"/home/ciel/stable-diffusion/stable-diffusion-webui/modules/styles.py\", line 212, in save_styles\r\n style_paths.remove(\"do_not_save\")\r\nKeyError: 'do_not_save'\r\n```\r\nand the file is not modified\r\nI manually commented it out and it doesn't seem to break anything, except that it is saved to styles.csv.csv instead of styles.csv\n\n### Steps to reproduce the problem\n\nTry to save a prompt\r\n\n\n### What should have happened?\n\nSave into style.csv with no error\n\n### Sysinfo\n\n{\r\n \"Platform\": \"Linux-6.6.4-zen1-1-zen-x86_64-with-glibc2.38\",\r\n \"Python\": \"3.11.4\",\r\n \"Version\": \"v1.7.0-RC-5-gf92d6149\",\r\n \"Commit\": \"f92d61497a426a19818625c3ccdaae9beeb82b31\",\r\n \"Script path\": \"/home/ciel/stable-diffusion/stable-diffusion-webui\",\r\n \"Data path\": \"/home/ciel/stable-diffusion/stable-diffusion-webui\",\r\n \"Extensions dir\": \"/home/ciel/stable-diffusion/stable-diffusion-webui/extensions\",\r\n \"Checksum\": \"e15aad6adb98a2a0ad13cad2b45b61b03565ef4f258783021da82b4ef7f37fa9\",\r\n \"Commandline\": [\r\n \"launch.py\"\r\n ],\r\n \"Torch env info\": {\r\n \"torch_version\": \"2.2.0\",\r\n \"is_debug_build\": \"False\",\r\n \"cuda_compiled_version\": \"N/A\",\r\n \"gcc_version\": \"(GCC) 13.2.1 20230801\",\r\n \"clang_version\": \"16.0.6\",\r\n \"cmake_version\": \"version 3.26.4\",\r\n \"os\": \"Arch Linux (x86_64)\",\r\n \"libc_version\": \"glibc-2.38\",\r\n \"python_version\": \"3.11.4 (main, Jul 5 2023, 13:45:01) [GCC 11.2.0] (64-bit runtime)\",\r\n \"python_platform\": \"Linux-6.6.4-zen1-1-zen-x86_64-with-glibc2.38\",\r\n \"is_cuda_available\": \"True\",\r\n \"cuda_runtime_version\": null,\r\n \"cuda_module_loading\": \"LAZY\",\r\n \"nvidia_driver_version\": null,\r\n \"nvidia_gpu_models\": \"AMD Radeon RX 7900 XTX (gfx1100)\",\r\n \"cudnn_version\": null,\r\n \"pip_version\": \"pip3\",\r\n \"pip_packages\": [\r\n \"numpy==1.23.5\",\r\n \"open-clip-torch==2.20.0\",\r\n \"pytorch-lightning==1.9.4\",\r\n \"pytorch-triton-rocm==2.1.0+dafe145982\",\r\n \"torch==2.2.0.dev20231208+rocm5.6\",\r\n \"torchdiffeq==0.2.3\",\r\n \"torchmetrics==1.2.1\",\r\n \"torchsde==0.2.6\",\r\n \"torchvision==0.17.0.dev20231208+rocm5.6\"\r\n ],\r\n \"conda_packages\": [\r\n \"numpy 1.26.2 py311h24aa872_0 \",\r\n \"numpy-base 1.26.2 py311hbfb1bba_0 \",\r\n \"open-clip-torch 2.20.0 pypi_0 pypi\",\r\n \"pytorch-lightning 1.9.4 pypi_0 pypi\",\r\n \"pytorch-triton-rocm 2.1.0+dafe145982 pypi_0 pypi\",\r\n \"torch 2.2.0.dev20231208+rocm5.7 pypi_0 pypi\",\r\n \"torchaudio 2.2.0.dev20231208+rocm5.7 pypi_0 pypi\",\r\n \"torchdiffeq 0.2.3 pypi_0 pypi\",\r\n \"torchmetrics 1.2.1 pypi_0 pypi\",\r\n \"torchsde 0.2.5 pypi_0 pypi\",\r\n \"torchvision 0.17.0.dev20231208+rocm5.7 pypi_0 pypi\"\r\n ],\r\n \"hip_compiled_version\": \"5.6.31061-8c743ae5d\",\r\n \"hip_runtime_version\": \"5.6.31061\",\r\n \"miopen_runtime_version\": \"2.20.0\",\r\n \"caching_allocator_config\": \"\",\r\n \"is_xnnpack_available\": \"True\",\r\n \"cpu_info\": [\r\n \"Architecture: x86_64\",\r\n \"CPU op-mode(s): 32-bit, 64-bit\",\r\n 
\"Address sizes: 48 bits physical, 48 bits virtual\",\r\n \"Byte Order: Little Endian\",\r\n \"CPU(s): 32\",\r\n \"On-line CPU(s) list: 0-31\",\r\n \"Vendor ID: AuthenticAMD\",\r\n \"Model name: AMD Ryzen 9 5950X 16-Core Processor\",\r\n \"CPU family: 25\",\r\n \"Model: 33\",\r\n \"Thread(s) per core: 2\",\r\n \"Core(s) per socket: 16\",\r\n \"Socket(s): 1\",\r\n \"Stepping: 0\",\r\n \"Frequency boost: disabled\",\r\n \"CPU(s) scaling MHz: 49%\",\r\n \"CPU max MHz: 6279.4922\",\r\n \"CPU min MHz: 2200.0000\",\r\n \"BogoMIPS: 8383.88\",\r\n \"Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm debug_swap\",\r\n \"Virtualization: AMD-V\",\r\n \"L1d cache: 512 KiB (16 instances)\",\r\n \"L1i cache: 512 KiB (16 instances)\",\r\n \"L2 cache: 8 MiB (16 instances)\",\r\n \"L3 cache: 64 MiB (2 instances)\",\r\n \"NUMA node(s): 1\",\r\n \"NUMA node0 CPU(s): 0-31\",\r\n \"Vulnerability Gather data sampling: Not affected\",\r\n \"Vulnerability Itlb multihit: Not affected\",\r\n \"Vulnerability L1tf: Not affected\",\r\n \"Vulnerability Mds: Not affected\",\r\n \"Vulnerability Meltdown: Not affected\",\r\n \"Vulnerability Mmio stale data: Not affected\",\r\n \"Vulnerability Retbleed: Not affected\",\r\n \"Vulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode\",\r\n \"Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl\",\r\n \"Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\",\r\n \"Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected\",\r\n \"Vulnerability Srbds: Not affected\",\r\n \"Vulnerability Tsx async abort: Not affected\"\r\n ]\r\n },\r\n \"Exceptions\": [],\r\n \"CPU\": {\r\n \"model\": \"\",\r\n \"count logical\": 32,\r\n \"count physical\": 16\r\n },\r\n \"RAM\": {\r\n \"total\": \"31GB\",\r\n \"used\": \"6GB\",\r\n \"free\": \"20GB\",\r\n \"active\": \"7GB\",\r\n \"inactive\": \"2GB\",\r\n \"buffers\": \"172MB\",\r\n \"cached\": \"5GB\",\r\n \"shared\": \"199MB\"\r\n },\r\n \"Extensions\": [\r\n {\r\n \"name\": \"clip-interrogator-ext\",\r\n \"path\": \"/home/ciel/stable-diffusion/stable-diffusion-webui/extensions/clip-interrogator-ext\",\r\n \"version\": \"0f1a4591\",\r\n \"branch\": \"main\",\r\n \"remote\": \"https://github.com/pharmapsychotic/clip-interrogator-ext.git\"\r\n },\r\n {\r\n \"name\": \"latent-upscale\",\r\n \"path\": \"/home/ciel/stable-diffusion/stable-diffusion-webui/extensions/latent-upscale\",\r\n \"version\": \"b9f75f44\",\r\n \"branch\": \"main\",\r\n \"remote\": 
\"https://github.com/feynlee/latent-upscale.git\"\r\n },\r\n {\r\n \"name\": \"sd-webui-controlnet\",\r\n \"path\": \"/home/ciel/stable-diffusion/stable-diffusion-webui/extensions/sd-webui-controlnet\",\r\n \"version\": \"feea1f65\",\r\n \"branch\": \"main\",\r\n \"remote\": \"https://github.com/Mikubill/sd-webui-controlnet.git\"\r\n },\r\n {\r\n \"name\": \"ultimate-upscale-for-automatic1111\",\r\n \"path\": \"/home/ciel/stable-diffusion/stable-diffusion-webui/extensions/ultimate-upscale-for-automatic1111\",\r\n \"version\": \"728ffcec\",\r\n \"branch\": \"master\",\r\n \"remote\": \"https://github.com/Coyote-A/ultimate-upscale-for-automatic1111.git\"\r\n }\r\n ],\r\n \"Inactive extensions\": [],\r\n \"Environment\": {\r\n \"GIT\": \"git\",\r\n \"GRADIO_ANALYTICS_ENABLED\": \"False\",\r\n \"TORCH_COMMAND\": \"pip install --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/rocm5.6\"\r\n },\r\n \"Config\": {\r\n \"samples_save\": true,\r\n \"samples_format\": \"png\",\r\n \"samples_filename_pattern\": \"\",\r\n \"save_images_add_number\": true,\r\n \"save_images_replace_action\": \"Replace\",\r\n \"grid_save\": true,\r\n \"grid_format\": \"png\",\r\n \"grid_extended_filename\": false,\r\n \"grid_only_if_multiple\": true,\r\n \"grid_prevent_empty_spots\": false,\r\n \"grid_zip_filename_pattern\": \"\",\r\n \"n_rows\": -1,\r\n \"font\": \"\",\r\n \"grid_text_active_color\": \"#000000\",\r\n \"grid_text_inactive_color\": \"#999999\",\r\n \"grid_background_color\": \"#ffffff\",\r\n \"save_images_before_face_restoration\": false,\r\n \"save_images_before_highres_fix\": false,\r\n \"save_images_before_color_correction\": false,\r\n \"save_mask\": false,\r\n \"save_mask_composite\": false,\r\n \"jpeg_quality\": 80,\r\n \"webp_lossless\": false,\r\n \"export_for_4chan\": true,\r\n \"img_downscale_threshold\": 4.0,\r\n \"target_side_length\": 4000,\r\n \"img_max_size_mp\": 200,\r\n \"use_original_name_batch\": true,\r\n \"use_upscaler_name_as_suffix\": false,\r\n \"save_selected_only\": true,\r\n \"save_init_img\": false,\r\n \"temp_dir\": \"\",\r\n \"clean_temp_dir_at_start\": false,\r\n \"save_incomplete_images\": false,\r\n \"notification_audio\": true,\r\n \"notification_volume\": 100,\r\n \"outdir_samples\": \"\",\r\n \"outdir_txt2img_samples\": \"outputs/txt2img-images\",\r\n \"outdir_img2img_samples\": \"outputs/img2img-images\",\r\n \"outdir_extras_samples\": \"outputs/extras-images\",\r\n \"outdir_grids\": \"\",\r\n \"outdir_txt2img_grids\": \"outputs/txt2img-grids\",\r\n \"outdir_img2img_grids\": \"outputs/img2img-grids\",\r\n \"outdir_save\": \"log/images\",\r\n \"outdir_init_images\": \"outputs/init-images\",\r\n \"save_to_dirs\": true,\r\n \"grid_save_to_dirs\": true,\r\n \"use_save_to_dirs_for_ui\": false,\r\n \"directories_filename_pattern\": \"[date]\",\r\n \"directories_max_prompt_words\": 8,\r\n \"ESRGAN_tile\": 192,\r\n \"ESRGAN_tile_overlap\": 8,\r\n \"realesrgan_enabled_models\": [\r\n \"R-ESRGAN 4x+\",\r\n \"R-ESRGAN 4x+ Anime6B\"\r\n ],\r\n \"upscaler_for_img2img\": null,\r\n \"face_restoration\": false,\r\n \"face_restoration_model\": \"CodeFormer\",\r\n \"code_former_weight\": 0.5,\r\n \"face_restoration_unload\": false,\r\n \"auto_launch_browser\": \"Local\",\r\n \"enable_console_prompts\": false,\r\n \"show_warnings\": false,\r\n \"show_gradio_deprecation_warnings\": true,\r\n \"memmon_poll_rate\": 8,\r\n \"samples_log_stdout\": false,\r\n \"multiple_tqdm\": true,\r\n \"print_hypernet_extra\": false,\r\n \"list_hidden_files\": true,\r\n 
\"disable_mmap_load_safetensors\": false,\r\n \"hide_ldm_prints\": true,\r\n \"dump_stacks_on_signal\": false,\r\n \"api_enable_requests\": true,\r\n \"api_forbid_local_requests\": true,\r\n \"api_useragent\": \"\",\r\n \"unload_models_when_training\": false,\r\n \"pin_memory\": false,\r\n \"save_optimizer_state\": false,\r\n \"save_training_settings_to_txt\": true,\r\n \"dataset_filename_word_regex\": \"\",\r\n \"dataset_filename_join_string\": \" \",\r\n \"training_image_repeats_per_epoch\": 1,\r\n \"training_write_csv_every\": 500,\r\n \"training_xattention_optimizations\": false,\r\n \"training_enable_tensorboard\": false,\r\n \"training_tensorboard_save_images\": false,\r\n \"training_tensorboard_flush_every\": 120,\r\n \"sd_model_checkpoint\": \"AOM3A1B_orangemixs.safetensors [5493a0ec49]\",\r\n \"sd_checkpoints_limit\": 1,\r\n \"sd_checkpoints_keep_in_cpu\": true,\r\n \"sd_checkpoint_cache\": 0,\r\n \"sd_unet\": \"Automatic\",\r\n \"enable_quantization\": false,\r\n \"enable_emphasis\": true,\r\n \"enable_batch_seeds\": true,\r\n \"comma_padding_backtrack\": 20,\r\n \"CLIP_stop_at_last_layers\": 1,\r\n \"upcast_attn\": true,\r\n \"randn_source\": \"GPU\",\r\n \"tiling\": false,\r\n \"hires_fix_refiner_pass\": \"second pass\",\r\n \"sdxl_crop_top\": 0,\r\n \"sdxl_crop_left\": 0,\r\n \"sdxl_refiner_low_aesthetic_score\": 2.5,\r\n \"sdxl_refiner_high_aesthetic_score\": 6.0,\r\n \"sd_vae_checkpoint_cache\": 1,\r\n \"sd_vae\": \"orangemix.vae.pt\",\r\n \"sd_vae_overrides_per_model_preferences\": true,\r\n \"auto_vae_precision\": true,\r\n \"sd_vae_encode_method\": \"Full\",\r\n \"sd_vae_decode_method\": \"Full\",\r\n \"inpainting_mask_weight\": 1.0,\r\n \"initial_noise_multiplier\": 1.0,\r\n \"img2img_extra_noise\": 0.0,\r\n \"img2img_color_correction\": false,\r\n \"img2img_fix_steps\": false,\r\n \"img2img_background_color\": \"#ffffff\",\r\n \"img2img_editor_height\": 720,\r\n \"img2img_sketch_default_brush_color\": \"#ffffff\",\r\n \"img2img_inpaint_mask_brush_color\": \"#ffffff\",\r\n \"img2img_inpaint_sketch_default_brush_color\": \"#ffffff\",\r\n \"return_mask\": false,\r\n \"return_mask_composite\": false,\r\n \"img2img_batch_show_results_limit\": 32,\r\n \"cross_attention_optimization\": \"Automatic\",\r\n \"s_min_uncond\": 0.0,\r\n \"token_merging_ratio\": 0.0,\r\n \"token_merging_ratio_img2img\": 0.0,\r\n \"token_merging_ratio_hr\": 0.0,\r\n \"pad_cond_uncond\": false,\r\n \"persistent_cond_cache\": true,\r\n \"batch_cond_uncond\": true,\r\n \"use_old_emphasis_implementation\": false,\r\n \"use_old_karras_scheduler_sigmas\": false,\r\n \"no_dpmpp_sde_batch_determinism\": false,\r\n \"use_old_hires_fix_width_height\": false,\r\n \"dont_fix_second_order_samplers_schedule\": false,\r\n \"hires_fix_use_firstpass_conds\": false,\r\n \"use_old_scheduling\": false,\r\n \"interrogate_keep_models_in_memory\": false,\r\n \"interrogate_return_ranks\": false,\r\n \"interrogate_clip_num_beams\": 1,\r\n \"interrogate_clip_min_length\": 24,\r\n \"interrogate_clip_max_length\": 48,\r\n \"interrogate_clip_dict_limit\": 1500,\r\n \"interrogate_clip_skip_categories\": [],\r\n \"interrogate_deepbooru_score_threshold\": 0.5,\r\n \"deepbooru_sort_alpha\": true,\r\n \"deepbooru_use_spaces\": true,\r\n \"deepbooru_escape\": true,\r\n \"deepbooru_filter_tags\": \"\",\r\n \"extra_networks_show_hidden_directories\": true,\r\n \"extra_networks_dir_button_function\": false,\r\n \"extra_networks_hidden_models\": \"When searched\",\r\n \"extra_networks_default_multiplier\": 1.0,\r\n 
\"extra_networks_card_width\": 0,\r\n \"extra_networks_card_height\": 0,\r\n \"extra_networks_card_text_scale\": 1.0,\r\n \"extra_networks_card_show_desc\": true,\r\n \"extra_networks_card_order_field\": \"Path\",\r\n \"extra_networks_card_order\": \"Ascending\",\r\n \"extra_networks_add_text_separator\": \" \",\r\n \"ui_extra_networks_tab_reorder\": \"\",\r\n \"textual_inversion_print_at_load\": false,\r\n \"textual_inversion_add_hashes_to_infotext\": true,\r\n \"sd_hypernetwork\": \"None\",\r\n \"keyedit_precision_attention\": 0.1,\r\n \"keyedit_precision_extra\": 0.05,\r\n \"keyedit_delimiters\": \".,\\\\/!?%^*;:{}=`~() \",\r\n \"keyedit_delimiters_whitespace\": [\r\n \"Tab\",\r\n \"Carriage Return\",\r\n \"Line Feed\"\r\n ],\r\n \"disable_token_counters\": false,\r\n \"return_grid\": true,\r\n \"do_not_show_images\": false,\r\n \"js_modal_lightbox\": true,\r\n \"js_modal_lightbox_initially_zoomed\": true,\r\n \"js_modal_lightbox_gamepad\": false,\r\n \"js_modal_lightbox_gamepad_repeat\": 250,\r\n \"gallery_height\": \"\",\r\n \"compact_prompt_box\": false,\r\n \"samplers_in_dropdown\": true,\r\n \"dimensions_and_batch_together\": true,\r\n \"sd_checkpoint_dropdown_use_short\": false,\r\n \"hires_fix_show_sampler\": false,\r\n \"hires_fix_show_prompts\": false,\r\n \"txt2img_settings_accordion\": false,\r\n \"img2img_settings_accordion\": false,\r\n \"localization\": \"None\",\r\n \"quicksettings_list\": [\r\n \"sd_model_checkpoint\"\r\n ],\r\n \"ui_tab_order\": [],\r\n \"hidden_tabs\": [],\r\n \"ui_reorder_list\": [],\r\n \"gradio_theme\": \"Default\",\r\n \"gradio_themes_cache\": true,\r\n \"show_progress_in_title\": true,\r\n \"send_seed\": true,\r\n \"send_size\": true,\r\n \"enable_pnginfo\": true,\r\n \"save_txt\": false,\r\n \"add_model_name_to_info\": true,\r\n \"add_model_hash_to_info\": true,\r\n \"add_vae_name_to_info\": true,\r\n \"add_vae_hash_to_info\": true,\r\n \"add_user_name_to_info\": false,\r\n \"add_version_to_infotext\": true,\r\n \"disable_weights_auto_swap\": true,\r\n \"infotext_skip_pasting\": [],\r\n \"infotext_styles\": \"Apply if any\",\r\n \"show_progressbar\": true,\r\n \"live_previews_enable\": false,\r\n \"live_previews_image_format\": \"png\",\r\n \"show_progress_grid\": true,\r\n \"show_progress_every_n_steps\": 5,\r\n \"show_progress_type\": \"Approx NN\",\r\n \"live_preview_allow_lowvram_full\": false,\r\n \"live_preview_content\": \"Prompt\",\r\n \"live_preview_refresh_period\": 300.0,\r\n \"live_preview_fast_interrupt\": false,\r\n \"hide_samplers\": [],\r\n \"eta_ddim\": 0.0,\r\n \"eta_ancestral\": 1.0,\r\n \"ddim_discretize\": \"uniform\",\r\n \"s_churn\": 0.0,\r\n \"s_tmin\": 0.0,\r\n \"s_tmax\": 0.0,\r\n \"s_noise\": 1.0,\r\n \"k_sched_type\": \"Automatic\",\r\n \"sigma_min\": 0.0,\r\n \"sigma_max\": 0.0,\r\n \"rho\": 0.0,\r\n \"eta_noise_seed_delta\": 0,\r\n \"always_discard_next_to_last_sigma\": false,\r\n \"sgm_noise_multiplier\": false,\r\n \"uni_pc_variant\": \"bh1\",\r\n \"uni_pc_skip_type\": \"time_uniform\",\r\n \"uni_pc_order\": 3,\r\n \"uni_pc_lower_order_final\": true,\r\n \"postprocessing_enable_in_main_ui\": [],\r\n \"postprocessing_operation_order\": [],\r\n \"upscaling_max_images_in_cache\": 5,\r\n \"postprocessing_existing_caption_action\": \"Ignore\",\r\n \"disabled_extensions\": [],\r\n \"disable_all_extensions\": \"none\",\r\n \"restore_config_state_file\": \"\",\r\n \"sd_checkpoint_hash\": \"5493a0ec491f5961dbdc1c861404088a6ae9bd4007f6a3a7c5dee8789cdc1361\",\r\n \"ldsr_steps\": 100,\r\n \"ldsr_cached\": false,\r\n 
\"SCUNET_tile\": 256,\r\n \"SCUNET_tile_overlap\": 8,\r\n \"SWIN_tile\": 192,\r\n \"SWIN_tile_overlap\": 8,\r\n \"SWIN_torch_compile\": false,\r\n \"hypertile_enable_unet\": false,\r\n \"hypertile_enable_unet_secondpass\": false,\r\n \"hypertile_max_depth_unet\": 3,\r\n \"hypertile_max_tile_unet\": 256,\r\n \"hypertile_swap_size_unet\": 3,\r\n \"hypertile_enable_vae\": false,\r\n \"hypertile_max_depth_vae\": 3,\r\n \"hypertile_max_tile_vae\": 128,\r\n \"hypertile_swap_size_vae\": 3,\r\n \"control_net_detectedmap_dir\": \"detected_maps\",\r\n \"control_net_models_path\": \"\",\r\n \"control_net_modules_path\": \"\",\r\n \"control_net_unit_count\": 3,\r\n \"control_net_model_cache_size\": 1,\r\n \"control_net_inpaint_blur_sigma\": 7,\r\n \"control_net_no_high_res_fix\": false,\r\n \"control_net_no_detectmap\": false,\r\n \"control_net_detectmap_autosaving\": false,\r\n \"control_net_allow_script_control\": false,\r\n \"control_net_sync_field_args\": true,\r\n \"controlnet_show_batch_images_in_ui\": false,\r\n \"controlnet_increment_seed_during_batch\": false,\r\n \"controlnet_disable_openpose_edit\": false,\r\n \"controlnet_ignore_noninpaint_mask\": false,\r\n \"lora_functional\": false,\r\n \"sd_lora\": \"None\",\r\n \"lora_preferred_name\": \"Alias from file\",\r\n \"lora_add_hashes_to_infotext\": true,\r\n \"lora_show_all\": false,\r\n \"lora_hide_unknown_for_versions\": [],\r\n \"lora_in_memory_limit\": 0,\r\n \"extra_options_txt2img\": [],\r\n \"extra_options_img2img\": [],\r\n \"extra_options_cols\": 1,\r\n \"extra_options_accordion\": false,\r\n \"canvas_hotkey_zoom\": \"Alt\",\r\n \"canvas_hotkey_adjust\": \"Ctrl\",\r\n \"canvas_hotkey_move\": \"F\",\r\n \"canvas_hotkey_fullscreen\": \"S\",\r\n \"canvas_hotkey_reset\": \"R\",\r\n \"canvas_hotkey_overlap\": \"O\",\r\n \"canvas_show_tooltip\": true,\r\n \"canvas_auto_expand\": true,\r\n \"canvas_blur_prompt\": false,\r\n \"canvas_disabled_functions\": [\r\n \"Overlap\"\r\n ]\r\n },\r\n \"Startup\": {\r\n \"total\": 11.257086753845215,\r\n \"records\": {\r\n \"initial startup\": 0.02352619171142578,\r\n \"prepare environment/checks\": 3.457069396972656e-05,\r\n \"prepare environment/git version info\": 0.009780406951904297,\r\n \"prepare environment/torch GPU test\": 2.7273693084716797,\r\n \"prepare environment/clone repositores\": 0.038356781005859375,\r\n \"prepare environment/run extensions installers/sd-webui-controlnet\": 0.14071893692016602,\r\n \"prepare environment/run extensions installers/ultimate-upscale-for-automatic1111\": 2.288818359375e-05,\r\n \"prepare environment/run extensions installers/clip-interrogator-ext\": 2.8869497776031494,\r\n \"prepare environment/run extensions installers/latent-upscale\": 5.626678466796875e-05,\r\n \"prepare environment/run extensions installers\": 3.0277533531188965,\r\n \"prepare environment\": 5.820652484893799,\r\n \"launcher\": 0.0008344650268554688,\r\n \"import torch\": 2.0337331295013428,\r\n \"import gradio\": 0.6256029605865479,\r\n \"setup paths\": 0.9430902004241943,\r\n \"import ldm\": 0.0025310516357421875,\r\n \"import sgm\": 2.384185791015625e-06,\r\n \"initialize shared\": 0.047745466232299805,\r\n \"other imports\": 0.5719733238220215,\r\n \"opts onchange\": 0.0002732276916503906,\r\n \"setup SD model\": 0.0003185272216796875,\r\n \"setup codeformer\": 0.07199668884277344,\r\n \"setup gfpgan\": 0.009232521057128906,\r\n \"set samplers\": 2.8371810913085938e-05,\r\n \"list extensions\": 0.0010488033294677734,\r\n \"restore config state file\": 5.4836273193359375e-06,\r\n 
\"list SD models\": 0.004712820053100586,\r\n \"list localizations\": 0.0001246929168701172,\r\n \"load scripts/custom_code.py\": 0.001154184341430664,\r\n \"load scripts/img2imgalt.py\": 0.0002789497375488281,\r\n \"load scripts/loopback.py\": 0.0001888275146484375,\r\n \"load scripts/outpainting_mk_2.py\": 0.0002484321594238281,\r\n \"load scripts/poor_mans_outpainting.py\": 0.0001766681671142578,\r\n \"load scripts/postprocessing_caption.py\": 0.0001506805419921875,\r\n \"load scripts/postprocessing_codeformer.py\": 0.00015020370483398438,\r\n \"load scripts/postprocessing_create_flipped_copies.py\": 0.00014519691467285156,\r\n \"load scripts/postprocessing_focal_crop.py\": 0.00043463706970214844,\r\n \"load scripts/postprocessing_gfpgan.py\": 0.00014495849609375,\r\n \"load scripts/postprocessing_split_oversized.py\": 0.00015592575073242188,\r\n \"load scripts/postprocessing_upscale.py\": 0.00021982192993164062,\r\n \"load scripts/processing_autosized_crop.py\": 0.0001621246337890625,\r\n \"load scripts/prompt_matrix.py\": 0.0001780986785888672,\r\n \"load scripts/prompts_from_file.py\": 0.0001876354217529297,\r\n \"load scripts/sd_upscale.py\": 0.00016450881958007812,\r\n \"load scripts/xyz_grid.py\": 0.0010995864868164062,\r\n \"load scripts/ldsr_model.py\": 0.11085081100463867,\r\n \"load scripts/lora_script.py\": 0.05980086326599121,\r\n \"load scripts/scunet_model.py\": 0.011086463928222656,\r\n \"load scripts/swinir_model.py\": 0.010489225387573242,\r\n \"load scripts/hotkey_config.py\": 0.0001678466796875,\r\n \"load scripts/extra_options_section.py\": 0.00020551681518554688,\r\n \"load scripts/hypertile_script.py\": 0.019654512405395508,\r\n \"load scripts/hypertile_xyz.py\": 8.058547973632812e-05,\r\n \"load scripts/clip_interrogator_ext.py\": 0.02592325210571289,\r\n \"load scripts/latent_upscale.py\": 0.0007441043853759766,\r\n \"load scripts/adapter.py\": 0.0003275871276855469,\r\n \"load scripts/api.py\": 0.12074923515319824,\r\n \"load scripts/batch_hijack.py\": 0.0005114078521728516,\r\n \"load scripts/cldm.py\": 0.00022983551025390625,\r\n \"load scripts/controlmodel_ipadapter.py\": 0.00032711029052734375,\r\n \"load scripts/controlnet.py\": 0.0494229793548584,\r\n \"load scripts/controlnet_diffusers.py\": 0.0001556873321533203,\r\n \"load scripts/controlnet_lllite.py\": 0.0001430511474609375,\r\n \"load scripts/controlnet_lora.py\": 0.00012731552124023438,\r\n \"load scripts/controlnet_model_guess.py\": 0.00011944770812988281,\r\n \"load scripts/controlnet_version.py\": 0.0001239776611328125,\r\n \"load scripts/enums.py\": 0.0003447532653808594,\r\n \"load scripts/external_code.py\": 6.246566772460938e-05,\r\n \"load scripts/global_state.py\": 0.0003178119659423828,\r\n \"load scripts/hook.py\": 0.0002903938293457031,\r\n \"load scripts/infotext.py\": 9.560585021972656e-05,\r\n \"load scripts/logging.py\": 0.00016260147094726562,\r\n \"load scripts/lvminthin.py\": 0.0001952648162841797,\r\n \"load scripts/movie2movie.py\": 0.00022029876708984375,\r\n \"load scripts/processor.py\": 0.00023818016052246094,\r\n \"load scripts/utils.py\": 0.00011324882507324219,\r\n \"load scripts/xyz_grid_support.py\": 0.0003902912139892578,\r\n \"load scripts/ultimate-upscale.py\": 0.00045228004455566406,\r\n \"load scripts/refiner.py\": 0.00011444091796875,\r\n \"load scripts/seed.py\": 0.00012302398681640625,\r\n \"load scripts\": 0.41962695121765137,\r\n \"load upscalers\": 0.001577138900756836,\r\n \"refresh VAE\": 0.0006160736083984375,\r\n \"refresh textual inversion templates\": 
2.86102294921875e-05,\r\n \"scripts list_optimizers\": 0.00027680397033691406,\r\n \"scripts list_unets\": 4.76837158203125e-06,\r\n \"reload hypernetworks\": 0.0027685165405273438,\r\n \"initialize extra networks\": 0.004837512969970703,\r\n \"scripts before_ui_callback\": 0.00041604042053222656,\r\n \"create ui\": 0.4426920413970947,\r\n \"gradio launch\": 0.23865938186645508,\r\n \"add APIs\": 0.003912210464477539,\r\n \"app_started_callback/lora_script.py\": 0.0001537799835205078,\r\n \"app_started_callback/clip_interrogator_ext.py\": 0.0003566741943359375,\r\n \"app_started_callback/api.py\": 0.0010819435119628906,\r\n \"app_started_callback\": 0.001596689224243164\r\n }\r\n },\r\n \"Packages\": [\r\n \"absl-py==2.0.0\",\r\n \"accelerate==0.21.0\",\r\n \"addict==2.4.0\",\r\n \"aenum==3.1.15\",\r\n \"aiofiles==23.2.1\",\r\n \"aiohttp==3.9.1\",\r\n \"aiosignal==1.3.1\",\r\n \"altair==5.2.0\",\r\n \"antlr4-python3-runtime==4.9.3\",\r\n \"anyio==3.7.1\",\r\n \"attrs==23.1.0\",\r\n \"basicsr==1.4.2\",\r\n \"beautifulsoup4==4.12.2\",\r\n \"blendmodes==2022\",\r\n \"boltons==23.1.1\",\r\n \"cachetools==5.3.2\",\r\n \"certifi==2022.12.7\",\r\n \"cffi==1.16.0\",\r\n \"charset-normalizer==2.1.1\",\r\n \"clean-fid==0.1.35\",\r\n \"click==8.1.7\",\r\n \"clip-interrogator==0.6.0\",\r\n \"clip==1.0\",\r\n \"contourpy==1.2.0\",\r\n \"cssselect2==0.7.0\",\r\n \"cycler==0.12.1\",\r\n \"deprecation==2.1.0\",\r\n \"einops==0.4.1\",\r\n \"facexlib==0.3.0\",\r\n \"fastapi==0.94.0\",\r\n \"ffmpy==0.3.1\",\r\n \"filelock==3.9.0\",\r\n \"filterpy==1.4.5\",\r\n \"flatbuffers==23.5.26\",\r\n \"fonttools==4.46.0\",\r\n \"frozenlist==1.4.0\",\r\n \"fsspec==2023.12.1\",\r\n \"ftfy==6.1.3\",\r\n \"future==0.18.3\",\r\n \"fvcore==0.1.5.post20221221\",\r\n \"gdown==4.7.1\",\r\n \"gfpgan==1.3.8\",\r\n \"gitdb==4.0.11\",\r\n \"gitpython==3.1.32\",\r\n \"google-auth-oauthlib==1.1.0\",\r\n \"google-auth==2.25.1\",\r\n \"gradio-client==0.5.0\",\r\n \"gradio==3.41.2\",\r\n \"grpcio==1.60.0\",\r\n \"h11==0.12.0\",\r\n \"httpcore==0.15.0\",\r\n \"httpx==0.24.1\",\r\n \"huggingface-hub==0.19.4\",\r\n \"idna==3.4\",\r\n \"imageio==2.33.0\",\r\n \"importlib-metadata==7.0.0\",\r\n \"importlib-resources==6.1.1\",\r\n \"inflection==0.5.1\",\r\n \"iopath==0.1.9\",\r\n \"jinja2==3.1.2\",\r\n \"jsonmerge==1.8.0\",\r\n \"jsonschema-specifications==2023.11.2\",\r\n \"jsonschema==4.20.0\",\r\n \"kiwisolver==1.4.5\",\r\n \"kornia==0.6.7\",\r\n \"lark==1.1.2\",\r\n \"lazy-loader==0.3\",\r\n \"lightning-utilities==0.10.0\",\r\n \"llvmlite==0.41.1\",\r\n \"lmdb==1.4.1\",\r\n \"lpips==0.1.4\",\r\n \"lxml==4.9.3\",\r\n \"markdown==3.5.1\",\r\n \"markupsafe==2.1.3\",\r\n \"matplotlib==3.8.2\",\r\n \"mediapipe==0.10.8\",\r\n \"mpmath==1.2.1\",\r\n \"multidict==6.0.4\",\r\n \"networkx==3.0rc1\",\r\n \"numba==0.58.1\",\r\n \"numpy==1.23.5\",\r\n \"oauthlib==3.2.2\",\r\n \"omegaconf==2.2.3\",\r\n \"open-clip-torch==2.20.0\",\r\n \"opencv-contrib-python==4.8.1.78\",\r\n \"opencv-python==4.8.1.78\",\r\n \"orjson==3.9.10\",\r\n \"packaging==23.2\",\r\n \"pandas==2.1.4\",\r\n \"piexif==1.1.3\",\r\n \"pillow==9.5.0\",\r\n \"pip==23.1.2\",\r\n \"platformdirs==4.1.0\",\r\n \"portalocker==2.8.2\",\r\n \"protobuf==3.20.0\",\r\n \"psutil==5.9.5\",\r\n \"pyasn1-modules==0.3.0\",\r\n \"pyasn1==0.5.1\",\r\n \"pycparser==2.21\",\r\n \"pydantic==1.10.13\",\r\n \"pydub==0.25.1\",\r\n \"pyparsing==3.1.1\",\r\n \"pysocks==1.7.1\",\r\n \"python-dateutil==2.8.2\",\r\n \"python-multipart==0.0.6\",\r\n \"pytorch-lightning==1.9.4\",\r\n 
\"pytorch-triton-rocm==2.1.0+dafe145982\",\r\n \"pytz==2023.3.post1\",\r\n \"pywavelets==1.5.0\",\r\n \"pyyaml==6.0.1\",\r\n \"realesrgan==0.3.0\",\r\n \"referencing==0.32.0\",\r\n \"regex==2023.10.3\",\r\n \"reportlab==4.0.7\",\r\n \"requests-oauthlib==1.3.1\",\r\n \"requests==2.28.1\",\r\n \"resize-right==0.0.2\",\r\n \"rpds-py==0.13.2\",\r\n \"rsa==4.9\",\r\n \"safetensors==0.3.1\",\r\n \"scikit-image==0.21.0\",\r\n \"scipy==1.11.4\",\r\n \"semantic-version==2.10.0\",\r\n \"sentencepiece==0.1.99\",\r\n \"setuptools==65.5.0\",\r\n \"six==1.16.0\",\r\n \"smmap==5.0.1\",\r\n \"sniffio==1.3.0\",\r\n \"sounddevice==0.4.6\",\r\n \"soupsieve==2.5\",\r\n \"starlette==0.26.1\",\r\n \"svglib==1.5.1\",\r\n \"sympy==1.11.1\",\r\n \"tabulate==0.9.0\",\r\n \"tb-nightly==2.16.0a20231208\",\r\n \"tensorboard-data-server==0.7.2\",\r\n \"termcolor==2.4.0\",\r\n \"tf-keras-nightly==2.16.0.dev2023120810\",\r\n \"tifffile==2023.9.26\",\r\n \"timm==0.9.2\",\r\n \"tinycss2==1.2.1\",\r\n \"tokenizers==0.13.3\",\r\n \"tomesd==0.1.3\",\r\n \"tomli==2.0.1\",\r\n \"toolz==0.12.0\",\r\n \"torch==2.2.0.dev20231208+rocm5.6\",\r\n \"torchdiffeq==0.2.3\",\r\n \"torchmetrics==1.2.1\",\r\n \"torchsde==0.2.6\",\r\n \"torchvision==0.17.0.dev20231208+rocm5.6\",\r\n \"tqdm==4.66.1\",\r\n \"trampoline==0.1.2\",\r\n \"transformers==4.30.2\",\r\n \"typing-extensions==4.8.0\",\r\n \"tzdata==2023.3\",\r\n \"urllib3==1.26.13\",\r\n \"uvicorn==0.24.0.post1\",\r\n \"wcwidth==0.2.12\",\r\n \"webencodings==0.5.1\",\r\n \"websockets==11.0.3\",\r\n \"werkzeug==3.0.1\",\r\n \"yacs==0.1.8\",\r\n \"yapf==0.40.2\",\r\n \"yarl==1.9.4\",\r\n \"zipp==3.17.0\"\r\n ]\r\n}\n\n### What browsers do you use to access the UI ?\n\nMozilla Firefox\n\n### Console logs\n\n```Shell\n\u276f ./webui.sh (base) \r\n\r\n################################################################\r\nInstall script for stable-diffusion + Web UI\r\nTested on Debian 11 (Bullseye)\r\n################################################################\r\n\r\n################################################################\r\nRunning on ciel user\r\n################################################################\r\n\r\n################################################################\r\nCreate and activate python venv\r\n################################################################\r\n\r\n################################################################\r\nLaunching launch.py...\r\n################################################################\r\nUsing TCMalloc: libtcmalloc_minimal.so.4\r\nPython 3.11.4 (main, Jul 5 2023, 13:45:01) [GCC 11.2.0]\r\nVersion: v1.7.0-RC-5-gf92d6149\r\nCommit hash: f92d61497a426a19818625c3ccdaae9beeb82b31\r\nLaunching Web UI with arguments: \r\nno module 'xformers'. Processing without...\r\nno module 'xformers'. Processing without...\r\nNo module 'xformers'. 
Proceeding without it.\r\n2023-12-09 17:08:09,876 - ControlNet - INFO - ControlNet v1.1.422\r\nControlNet preprocessor location: /home/ciel/stable-diffusion/stable-diffusion-webui/extensions/sd-webui-controlnet/annotator/downloads\r\n2023-12-09 17:08:09,921 - ControlNet - INFO - ControlNet v1.1.422\r\nLoading weights [5493a0ec49] from /home/ciel/stable-diffusion/stable-diffusion-webui/models/Stable-diffusion/AOM3A1B_orangemixs.safetensors\r\nRunning on local URL: http://127.0.0.1:7860\r\n\r\nTo create a public link, set `share=True` in `launch()`.\r\nCreating model from config: /home/ciel/stable-diffusion/stable-diffusion-webui/configs/v1-inference.yaml\r\nStartup time: 8.9s (prepare environment: 4.0s, import torch: 2.0s, import gradio: 0.5s, setup paths: 0.8s, other imports: 0.5s, load scripts: 0.4s, create ui: 0.4s, gradio launch: 0.2s).\r\nLoading VAE weights specified in settings: /home/ciel/stable-diffusion/stable-diffusion-webui/models/VAE/orangemix.vae.pt\r\nApplying attention optimization: Doggettx... done.\r\nModel loaded in 2.6s (load weights from disk: 0.6s, create model: 0.2s, apply weights to model: 1.4s, load VAE: 0.2s, calculate empty prompt: 0.1s).\r\nTraceback (most recent call last):\r\n File \"/home/ciel/stable-diffusion/stable-diffusion-webui/venv/lib/python3.11/site-packages/gradio/routes.py\", line 488, in run_predict\r\n output = await app.get_blocks().process_api(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/ciel/stable-diffusion/stable-diffusion-webui/venv/lib/python3.11/site-packages/gradio/blocks.py\", line 1431, in process_api\r\n result = await self.call_function(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/ciel/stable-diffusion/stable-diffusion-webui/venv/lib/python3.11/site-packages/gradio/blocks.py\", line 1103, in call_function\r\n prediction = await anyio.to_thread.run_sync(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/ciel/stable-diffusion/stable-diffusion-webui/venv/lib/python3.11/site-packages/anyio/to_thread.py\", line 33, in run_sync\r\n return await get_asynclib().run_sync_in_worker_thread(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/ciel/stable-diffusion/stable-diffusion-webui/venv/lib/python3.11/site-packages/anyio/_backends/_asyncio.py\", line 877, in run_sync_in_worker_thread\r\n return await future\r\n ^^^^^^^^^^^^\r\n File \"/home/ciel/stable-diffusion/stable-diffusion-webui/venv/lib/python3.11/site-packages/anyio/_backends/_asyncio.py\", line 807, in run\r\n result = context.run(func, *args)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/ciel/stable-diffusion/stable-diffusion-webui/venv/lib/python3.11/site-packages/gradio/utils.py\", line 707, in wrapper\r\n response = f(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^\r\n File \"/home/ciel/stable-diffusion/stable-diffusion-webui/modules/ui_prompt_styles.py\", line 27, in save_style\r\n shared.prompt_styles.save_styles(shared.styles_filename)\r\n File \"/home/ciel/stable-diffusion/stable-diffusion-webui/modules/styles.py\", line 212, in save_styles\r\n style_paths.remove(\"do_not_save\")\r\nKeyError: 'do_not_save'\n```\n\n\n### Additional information\n\nI'm running dev branch due to the Navi3 bug, checking out master after launch seems to result in the same issue, but it could have just been jit-ed, didn't test very in-depth", "pr_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14276", "file_loc": {"base_commit": "f92d61497a426a19818625c3ccdaae9beeb82b31", "files": [{"path": "modules/styles.py", "status": "modified", "Loc": 
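The stable-diffusion-webui traceback above ends in `style_paths.remove("do_not_save")` raising `KeyError` inside `modules/styles.py`. Below is a minimal sketch of the usual defensive pattern, assuming `style_paths` is a plain Python `set`; it illustrates the idea only and is not the actual fix from PR 14276:

```python
# set.remove() raises KeyError when the element is absent;
# set.discard() is the no-op-when-missing variant.
style_paths = {"styles.csv"}        # hypothetical contents without "do_not_save"
style_paths.discard("do_not_save")  # safe even though the element is absent
print(style_paths)                  # {'styles.csv'}
```

Using `discard` (or guarding with `if "do_not_save" in style_paths:`) keeps `save_styles` from crashing when the sentinel entry was never added.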
{"('StyleDatabase', '__init__', 95)": {"mod": [101, 102, 103, 104]}, "('StyleDatabase', None, 94)": {"mod": [158, 159, 160, 161]}, "('StyleDatabase', 'get_style_paths', 158)": {"mod": [175, 177]}, "('StyleDatabase', 'save_styles', 195)": {"mod": [199, 200, 201, 202, 204, 205, 206, 207, 208, 209, 211, 212]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["modules/styles.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "home-assistant", "repo_name": "core", "base_commit": "c3e9c1a7e8fdc949b8e638d79ab476507ff92f18", "iss_has_pr": 1, "iss_html_url": "https://github.com/home-assistant/core/issues/60067", "iss_label": "integration: environment_canada\nby-code-owner", "title": "Environment Canada (EC) radar integration slowing Environment Canada servers", "body": "### The problem\r\n\r\nThe `config_flow` change to the EC integration did not change the way the underlying radar retrieval works, but did enable radar for everyone. As a result the EC servers are getting far too many requests. We (the codeowners) have been working with EC to diagnose this issue and understand their concerns. \r\n\r\nWe are doing two things (PR is in progress). Caching requests to the EC servers. Work so far shows that through caching we can reduce the number of requests by over 90%. This fix is in the integration dependency library.\r\n\r\nSecond, we are creating the radar (camera) entity with `_attr_entity_registry_enabled_default = False` so that new radar entities are disabled by default. Many people use the integration for forecast only.\r\n\r\nLast, EC is putting a policy in place such that User Agent needs to be filled in to represent the calling library.\r\n\r\n### What version of Home Assistant Core has the issue?\r\n\r\n2021.12.0.dev0\r\n\r\n### What was the last working version of Home Assistant Core?\r\n\r\n_No response_\r\n\r\n### What type of installation are you running?\r\n\r\nHome Assistant Core\r\n\r\n### Integration causing the issue\r\n\r\nEnvironment Canada\r\n\r\n### Link to integration documentation on our website\r\n\r\nhttps://www.home-assistant.io/integrations/environment_canada/\r\n\r\n### Example YAML snippet\r\n\r\n_No response_\r\n\r\n### Anything in the logs that might be useful for us?\r\n\r\n_No response_\r\n\r\n### Additional information\r\n\r\nQuote from one of the email exchanges with EC:\r\n\r\n> What we observed is 1350 unique IP addresses using this code which made 23.5 million requests over 5 days.\r\n\r\nIn order to respond to EC as quickly as possible we are asking for consideration to release the PR, when available, in the next dot release.", "pr_html_url": "https://github.com/home-assistant/core/pull/60087", "file_loc": {"base_commit": "c3e9c1a7e8fdc949b8e638d79ab476507ff92f18", "files": [{"path": "homeassistant/components/environment_canada/camera.py", "status": "modified", "Loc": {"('ECCamera', '__init__', 49)": {"add": [57]}}}, {"path": "homeassistant/components/environment_canada/manifest.json", "status": "modified", "Loc": {"(None, None, None)": {"mod": [5]}}}, {"path": "requirements_all.txt", "status": "modified", "Loc": {"(None, None, None)": {"mod": [603]}}}, {"path": "requirements_test_all.txt", "status": "modified", "Loc": {"(None, None, None)": {"mod": [372]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", 
"iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["homeassistant/components/environment_canada/camera.py", "homeassistant/components/environment_canada/manifest.json"], "doc": [], "test": [], "config": ["requirements_all.txt", "requirements_test_all.txt"], "asset": []}}, {"organization": "abi", "repo_name": "screenshot-to-code", "base_commit": "939539611f0cad12056f7be78ef6b2128b90b779", "iss_has_pr": 1, "iss_html_url": "https://github.com/abi/screenshot-to-code/issues/336", "iss_label": "bug\np2", "title": "Handle Nones in chunk.choices[0].delta", "body": "![WechatIMG434](https://github.com/abi/screenshot-to-code/assets/158557918/d2ddcd3e-f944-40cb-a74e-b54bec8938f4)\r\n\r\nThere is a successful request for the openai interface, but it seems that no code is generated.\r\n\r\nbackend-1 | ERROR: Exception in ASGI application\r\nbackend-1 | Traceback (most recent call last):\r\nbackend-1 | File \"/usr/local/lib/python3.12/site-packages/uvicorn/protocols/websockets/websockets_impl.py\", line 250, in run_asgi\r\nbackend-1 | result = await self.app(self.scope, self.asgi_receive, self.asgi_send)\r\nbackend-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nbackend-1 | File \"/usr/local/lib/python3.12/site-packages/uvicorn/middleware/proxy_headers.py\", line 84, in __call__\r\nbackend-1 | return await self.app(scope, receive, send)\r\nbackend-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nbackend-1 | File \"/usr/local/lib/python3.12/site-packages/fastapi/applications.py\", line 276, in __call__\r\nbackend-1 | await super().__call__(scope, receive, send)\r\nbackend-1 | File \"/usr/local/lib/python3.12/site-packages/starlette/applications.py\", line 122, in __call__\r\nbackend-1 | await self.middleware_stack(scope, receive, send)\r\nbackend-1 | File \"/usr/local/lib/python3.12/site-packages/starlette/middleware/errors.py\", line 149, in __call__\r\nbackend-1 | await self.app(scope, receive, send)\r\nbackend-1 | File \"/usr/local/lib/python3.12/site-packages/starlette/middleware/cors.py\", line 75, in __call__\r\nbackend-1 | await self.app(scope, receive, send)\r\nbackend-1 | File \"/usr/local/lib/python3.12/site-packages/starlette/middleware/exceptions.py\", line 79, in __call__\r\nbackend-1 | raise exc\r\nbackend-1 | File \"/usr/local/lib/python3.12/site-packages/starlette/middleware/exceptions.py\", line 68, in __call__\r\nbackend-1 | await self.app(scope, receive, sender)\r\nbackend-1 | File \"/usr/local/lib/python3.12/site-packages/fastapi/middleware/asyncexitstack.py\", line 21, in __call__\r\nbackend-1 | raise e\r\nbackend-1 | File \"/usr/local/lib/python3.12/site-packages/fastapi/middleware/asyncexitstack.py\", line 18, in __call__\r\nbackend-1 | await self.app(scope, receive, send)\r\nbackend-1 | File \"/usr/local/lib/python3.12/site-packages/starlette/routing.py\", line 718, in __call__\r\nbackend-1 | await route.handle(scope, receive, send)\r\nbackend-1 | File \"/usr/local/lib/python3.12/site-packages/starlette/routing.py\", line 341, in handle\r\nbackend-1 | await self.app(scope, receive, send)\r\nbackend-1 | File \"/usr/local/lib/python3.12/site-packages/starlette/routing.py\", line 82, in app\r\nbackend-1 | await func(session)\r\nbackend-1 | File \"/usr/local/lib/python3.12/site-packages/fastapi/routing.py\", line 289, in app\r\nbackend-1 | await dependant.call(**values)\r\nbackend-1 | File \"/app/routes/generate_code.py\", line 251, in stream_code\r\nbackend-1 | completion = await stream_openai_response(\r\nbackend-1 | 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nbackend-1 | File \"/app/llm.py\", line 62, in stream_openai_response\r\nbackend-1 | content = chunk.choices[0].delta.content or \"\"\r\nbackend-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nbackend-1 | AttributeError: 'NoneType' object has no attribute 'content'\r\nbackend-1 | INFO: connection closed\r\n", "pr_html_url": "https://github.com/abi/screenshot-to-code/pull/341", "file_loc": {"base_commit": "939539611f0cad12056f7be78ef6b2128b90b779", "files": [{"path": "backend/llm.py", "status": "modified", "Loc": {"(None, 'stream_openai_response', 32)": {"mod": [62, 63, 64]}}}, {"path": "frontend/package.json", "status": "modified", "Loc": {"(None, None, None)": {"mod": [49]}}}, {"path": "frontend/src/App.tsx", "status": "modified", "Loc": {"(None, None, None)": {"mod": [381]}}}, {"path": "frontend/yarn.lock", "status": "modified", "Loc": {"(None, None, None)": {"add": [5644, 5939]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["backend/llm.py", "frontend/src/App.tsx", "frontend/package.json"], "doc": [], "test": [], "config": ["frontend/yarn.lock"], "asset": []}}, {"organization": "Significant-Gravitas", "repo_name": "AutoGPT", "base_commit": "bf895eb656dee9084273cd36395828bd06aa231d", "iss_has_pr": 1, "iss_html_url": "https://github.com/Significant-Gravitas/AutoGPT/issues/6", "iss_label": "enhancement\ngood first issue\nAPI costs", "title": "Make Auto-GPT aware of it's running cost", "body": "Auto-GPT is expensive to run due to GPT-4's API cost.\n\nWe could experiment with making it aware of this fact, by tracking tokens as they are used and converting to a dollar cost. 
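The screenshot-to-code traceback above fails because `chunk.choices[0].delta` can be `None` (and `choices` can be empty) in a streamed OpenAI response. A minimal guard for that case, sketched as a standalone helper rather than the project's actual `backend/llm.py` change:

```python
def chunk_text(chunk) -> str:
    """Return the streamed text piece, tolerating missing choices/delta/content."""
    if not chunk.choices:
        return ""
    delta = chunk.choices[0].delta
    if delta is None or delta.content is None:
        return ""
    return delta.content
```

Inside the streaming loop this replaces the bare `chunk.choices[0].delta.content or ""` shown in the traceback.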
\n\nThis could also be displayed to the user to help them be more aware of exactly how much they are spending.", "pr_html_url": "https://github.com/Significant-Gravitas/AutoGPT/pull/762", "file_loc": {"base_commit": "bf895eb656dee9084273cd36395828bd06aa231d", "files": [{"path": "autogpt/chat.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [5]}, "(None, 'chat_with_ai', 54)": {"add": [135]}}}, {"path": "autogpt/config/ai_config.py", "status": "modified", "Loc": {"('AIConfig', None, 21)": {"add": [28]}, "('AIConfig', '__init__', 31)": {"add": [40, 48], "mod": [32]}, "('AIConfig', 'load', 53)": {"add": [75], "mod": [55, 77]}, "('AIConfig', 'save', 79)": {"add": [94]}, "('AIConfig', 'construct_full_prompt', 99)": {"add": [149], "mod": [110]}}}, {"path": "autogpt/llm_utils.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [9]}, "(None, 'create_chat_completion', 56)": {"mod": [99, 107]}, "(None, 'create_embedding_with_ada', 156)": {"mod": [162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172]}}}, {"path": "autogpt/memory/base.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [5]}, "(None, 'get_ada_embedding', 11)": {"mod": [13, 14, 15, 16, 17, 18, 19, 20, 21]}}}, {"path": "autogpt/prompts/prompt.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [2]}, "(None, 'construct_main_ai_config', 78)": {"add": [88, 100, 109]}}}, {"path": "autogpt/setup.py", "status": "modified", "Loc": {"(None, 'generate_aiconfig_automatic', 139)": {"add": [194], "mod": [196]}, "(None, 'generate_aiconfig_manual', 70)": {"mod": [136]}}}, {"path": "tests/unit/test_commands.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [7, 10]}, "(None, 'test_make_agent', 11)": {"mod": [17, 20]}}}, {"path": "tests/unit/test_setup.py", "status": "modified", "Loc": {"('TestAutoGPT', 'test_generate_aiconfig_automatic_fallback', 39)": {"add": [46]}, "('TestAutoGPT', 'test_prompt_user_manual_mode', 57)": {"add": [64]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["autogpt/chat.py", "autogpt/prompts/prompt.py", "autogpt/config/ai_config.py", "autogpt/memory/base.py", "autogpt/setup.py", "autogpt/llm_utils.py"], "doc": [], "test": ["tests/unit/test_commands.py", "tests/unit/test_setup.py"], "config": [], "asset": []}}, {"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "3e01ce744a981d8f19ae77ec695005e7000f4703", "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/5855", "iss_label": "bug", "title": "Generic extractor can crash if Brotli is not available", "body": "### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE\n\n- [X] I understand that I will be **blocked** if I remove or skip any mandatory\\* field\n\n### Checklist\n\n- [X] I'm reporting a bug unrelated to a specific site\n- [X] I've verified that I'm running yt-dlp version **2022.11.11** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)\n- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details\n- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)\n- [X] I've searched the 
[bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n\n### Provide a description that is worded well enough to be understood\n\nTesting #5851 in a configuration where no Brotli decoder was available showed the crash in the log.\r\n\r\nThe problem is this extractor code:\r\nhttps://github.com/yt-dlp/yt-dlp/blob/1fc089143c79b02b8373ae1d785d5e3a68635d4d/yt_dlp/extractor/generic.py#L2306-L2318\r\n\r\nNormally there is a check for a supported Brotli encoder (using `SUPPORTED_ENCODINGS`). Specifying `*` in the `Accept-encoding` header bypasses that check.\r\n\r\nHowever, I don't think that `*` does what is wanted according to the comments in the above code. The code wants to get the resource with no decoding (because decoding in yt-dl[p] starts by reading the entire response), but `*` still allows the server to send a compressed response. What is wanted is the `identity` encoding which is the default if no other encoding is specified. Or, to re-cast the decoding process so that the whole response stream is not read before decoding, but that means creating stream decoding methods for Brotli and zlib.\r\n\r\nAlso, there could be a check for a supported encoding in `YoutubeDLHandler.http_response()`, perhaps synthesizing 416 or 406 id the server has sent an encoding that isn't supported, instead of the crash seen here.\n\n### Provide verbose output that clearly demonstrates the problem\n\n- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU `)\n- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below\n\n### Complete Verbose Output\n\n```shell\n[debug] Command-line config: ['-vU', '-F', 'https://www.extra.cz/cauky-lidi-70-dil-babis-predstavil-pohadky-prymulanek-nebo-andrejovy-nove-saty-ac867']\r\n[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8\r\n[debug] yt-dlp version 2022.11.11 [8b644025b] (source)\r\n[debug] Lazy loading extractors is disabled\r\n[debug] Plugins: ['SamplePluginIE', 'SamplePluginPP']\r\n[debug] Git HEAD: c73355510\r\n[debug] Python 3.9.15 (CPython i686 32bit) - Linux-4.4.0-210-generic-i686-with-glibc2.23 (OpenSSL 1.1.1s 1 Nov 2022, glibc 2.23)\r\n[debug] exe versions: ffmpeg 4.3, ffprobe 4.3\r\n[debug] Optional libraries: Cryptodome-3.11.0, certifi-2019.11.28, secretstorage-3.2.0, sqlite3-2.6.0\r\n[debug] Proxy map: {}\r\n[debug] Loaded 1735 extractors\r\n[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest\r\nLatest version: 2022.11.11, Current version: 2022.11.11\r\nyt-dlp is up to date (2022.11.11)\r\n[generic] Extracting URL: https://www.extra.cz/cauky-lidi-70-dil-babis-predstavil-pohadky-prymulanek-nebo-andrejovy-nove-saty-ac867\r\n[generic] cauky-lidi-70-dil-babis-predstavil-pohadky-prymulanek-nebo-andrejovy-nove-saty-ac867: Downloading webpage\r\nERROR: 'NoneType' object has no attribute 'decompress'\r\nTraceback (most recent call last):\r\n File \"/home/df/Documents/src/yt-dlp/yt_dlp/YoutubeDL.py\", line 1495, in wrapper\r\n return func(self, *args, **kwargs)\r\n File \"/home/df/Documents/src/yt-dlp/yt_dlp/YoutubeDL.py\", line 1571, in __extract_info\r\n ie_result = ie.extract(url)\r\n File \"/home/df/Documents/src/yt-dlp/yt_dlp/extractor/common.py\", line 680, in extract\r\n ie_result = self._real_extract(url)\r\n File 
\"/home/df/Documents/src/yt-dlp/yt_dlp/extractor/generic.py\", line 2314, in _real_extract\r\n full_response = self._request_webpage(url, video_id, headers={\r\n File \"/home/df/Documents/src/yt-dlp/yt_dlp/extractor/common.py\", line 807, in _request_webpage\r\n return self._downloader.urlopen(self._create_request(url_or_request, data, headers, query))\r\n File \"/home/df/Documents/src/yt-dlp/yt_dlp/YoutubeDL.py\", line 3719, in urlopen\r\n return self._opener.open(req, timeout=self._socket_timeout)\r\n File \"/usr/lib/python3.9/urllib/request.py\", line 523, in open\r\n response = meth(req, response)\r\n File \"/home/df/Documents/src/yt-dlp/yt_dlp/utils.py\", line 1452, in http_response\r\n io.BytesIO(self.brotli(resp.read())), old_resp.headers, old_resp.url, old_resp.code)\r\n File \"/home/df/Documents/src/yt-dlp/yt_dlp/utils.py\", line 1389, in brotli\r\n return brotli.decompress(data)\r\nAttributeError: 'NoneType' object has no attribute 'decompress'\n```\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/yt-dlp/yt-dlp/commit/3e01ce744a981d8f19ae77ec695005e7000f4703", "file_loc": {"base_commit": "3e01ce744a981d8f19ae77ec695005e7000f4703", "files": [{"path": "yt_dlp/extractor/generic.py", "status": "modified", "Loc": {"('GenericIE', None, 42)": {"add": [2156]}, "('GenericIE', '_real_extract', 2276)": {"mod": [2315]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "commit", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["yt_dlp/extractor/generic.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "ded7b37234e229d9bde0a9a506f7c65605803731", "iss_has_pr": 1, "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/543", "iss_label": "", "title": "Lack of pre-compiled results in lost interest", "body": "so I know the first thing people are going to say is, this isn't an issue. However, it is. by not having a precompiled version to download over half the people that find their way to this GitHub are going to lose interest. Honestly, I'm one of them. I attempted to compile it but then I saw that I had to track down each module for this, yeah quickly drove me away from it. all I wanted to do was mess around and see what it can do. even if the results arent mind-blowing the concept interests me. but due to not having a ready to use executable I like many others I'm sure of, have decided it isn't even worth messing with. 
", "pr_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/pull/546", "file_loc": {"base_commit": "ded7b37234e229d9bde0a9a506f7c65605803731", "files": [{"path": "toolbox/ui.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0], "mod": [11]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["toolbox/ui.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "96b5814de70ad2435b6db5f49b607b136921f701", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/26948", "iss_label": "Documentation", "title": "The copy button on install copies an extensive comman including env activation", "body": "### Describe the issue linked to the documentation\n\nhttps://scikit-learn.org/stable/install.html\r\n\r\nAbove link will lead you to the sklearn downlanding for link . \r\nwhen you link copy link button it will copy \r\n`python3 -m venv sklearn-venvpython -m venv sklearn-venvpython -m venv sklearn-venvsource sklearn-venv/bin/activatesource sklearn-venv/bin/activatesklearn-venv\\Scripts\\activatepip install -U scikit-learnpip install -U scikit-learnpip install -U scikit-learnpip3 install -U scikit-learnconda create -n sklearn-env -c conda-forge scikit-learnconda activate sklearn-env`\r\n\r\ninstead of `pip3 install -U scikit-learn`\r\n\r\nif this is the issue so please issue i want to create a pull request for it and tell in which file this issue reside\r\nThanks\n\n### Suggest a potential alternative/fix\n\nBy resoving above issue", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/27052", "file_loc": {"base_commit": "96b5814de70ad2435b6db5f49b607b136921f701", "files": [{"path": "doc/install.rst", "status": "modified", "Loc": {"(None, None, None)": {"mod": [72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107]}}}, {"path": "doc/themes/scikit-learn-modern/static/css/theme.css", "status": "modified", "Loc": {"(None, None, None)": {"add": [1216, 1220, 1225, 1233, 1236, 1239, 1243, 1247], "mod": [1208, 1209]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["doc/themes/scikit-learn-modern/static/css/theme.css"], "doc": ["doc/install.rst"], "test": [], "config": [], "asset": []}}, {"organization": "keras-team", "repo_name": "keras", "base_commit": "49b9682b3570211c7d8f619f8538c08fd5d8bdad", "iss_has_pr": 1, "iss_html_url": "https://github.com/keras-team/keras/issues/10036", "iss_label": "", "title": "[API DESIGN REVIEW] sample weight in ImageDataGenerator.flow", "body": "https://docs.google.com/document/d/14anankKROhliJCpInQH-pITatdjO9UzSN6Iz0MwcDHw/edit?usp=sharing\r\n\r\nMakes it easy to use data augmentation when sample weights are available. 
", "pr_html_url": "https://github.com/keras-team/keras/pull/10092", "file_loc": {"base_commit": "49b9682b3570211c7d8f619f8538c08fd5d8bdad", "files": [{"path": "keras/preprocessing/image.py", "status": "modified", "Loc": {"('ImageDataGenerator', 'flow', 715)": {"add": [734, 759], "mod": [754]}, "('NumpyArrayIterator', None, 1188)": {"add": [1201]}, "('NumpyArrayIterator', '__init__', 1216)": {"add": [1241, 1278], "mod": [1217, 1218]}, "('NumpyArrayIterator', '_get_batches_of_transformed_samples', 1289)": {"add": [1313]}, "('ImageDataGenerator', None, 443)": {"mod": [715]}}}, {"path": "tests/keras/preprocessing/image_test.py", "status": "modified", "Loc": {"('TestImage', 'test_image_data_generator', 32)": {"add": [64]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["tests/keras/preprocessing/image_test.py", "keras/preprocessing/image.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "efb53aafdcaae058962c6189ddecb3dc62b02c31", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/6514", "iss_label": "enhancement", "title": "Migrate from setup.py to pyproject.toml", "body": "We should migrate to the modern declarative setuptools metadata approach as discussed in https://setuptools.pypa.io/en/latest/userguide/quickstart.html and https://setuptools.pypa.io/en/latest/userguide/pyproject_config.html, but only after the 2.12 release.", "pr_html_url": "https://github.com/scrapy/scrapy/pull/6547", "file_loc": {"base_commit": "efb53aafdcaae058962c6189ddecb3dc62b02c31", "files": [{"path": ".bandit.yml", "status": "removed", "Loc": {}}, {"path": ".bumpversion.cfg", "status": "removed", "Loc": {}}, {"path": ".coveragerc", "status": "removed", "Loc": {}}, {"path": ".isort.cfg", "status": "removed", "Loc": {}}, {"path": ".pre-commit-config.yaml", "status": "modified", "Loc": {"(None, None, None)": {"mod": [6]}}}, {"path": "MANIFEST.in", "status": "modified", "Loc": {"(None, None, None)": {"mod": [13]}}}, {"path": "pylintrc", "status": "removed", "Loc": {}}, {"path": "pytest.ini", "status": "removed", "Loc": {}}, {"path": "setup.cfg", "status": "removed", "Loc": {}}, {"path": "setup.py", "status": "removed", "Loc": {}}, {"path": "tests/test_crawler.py", "status": "modified", "Loc": {"('CrawlerProcessSubprocess', 'test_shutdown_forced', 890)": {"mod": [902]}}}, {"path": "tests/test_spiderloader/__init__.py", "status": "modified", "Loc": {"('SpiderLoaderTest', 'test_syntax_error_warning', 146)": {"mod": [147, 148, 149]}}}, {"path": "tox.ini", "status": "modified", "Loc": {"(None, None, None)": {"mod": [82]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["tests/test_spiderloader/__init__.py", ".isort.cfg", ".coveragerc", "setup.cfg", "setup.py", ".bumpversion.cfg"], "doc": [], "test": ["tests/test_crawler.py"], "config": ["pytest.ini", ".pre-commit-config.yaml", "tox.ini", "pylintrc", ".bandit.yml", "MANIFEST.in"], "asset": []}}, {"organization": "fastapi", "repo_name": "fastapi", "base_commit": "c6e950dc9cacefd692dbd8987a3acd12a44b506f", "iss_has_pr": 1, "iss_html_url": "https://github.com/fastapi/fastapi/issues/5859", "iss_label": "question\nquestion-migrate", "title": "FastAPI==0.89.0 
Cannot use `None` as a return type when `status_code` is set to 204 with `from __future__ import annotations`", "body": "### First Check\n\n- [X] I added a very descriptive title to this issue.\n- [X] I used the GitHub search to find a similar issue and didn't find it.\n- [X] I searched the FastAPI documentation, with the integrated search.\n- [X] I already searched in Google \"How to X in FastAPI\" and didn't find any information.\n- [X] I already read and followed all the tutorial in the docs and didn't find an answer.\n- [X] I already checked if it is not related to FastAPI but to [Pydantic](https://github.com/samuelcolvin/pydantic).\n- [X] I already checked if it is not related to FastAPI but to [Swagger UI](https://github.com/swagger-api/swagger-ui).\n- [X] I already checked if it is not related to FastAPI but to [ReDoc](https://github.com/Redocly/redoc).\n\n### Commit to Help\n\n- [X] I commit to help with one of those options \ud83d\udc46\n\n### Example Code\n\n```python\nfrom __future__ import annotations \r\n\r\nfrom fastapi import FastAPI\r\n\r\napp = FastAPI()\r\n\r\n\r\n@app.get(\"/\", status_code=204)\r\ndef read_root() -> None:\r\n return {\"Hello\": \"World\"}\n```\n\n\n### Description\n\nIf we add:\r\n`from __future__ import annotations`\r\n\r\nIt changes the annotations structure and the response model is `NoneType` instead of `None`, which causes validation of the `statuc_code` vs `response_model` and raises an exception.\r\n\r\n```python\r\n ...\r\n File \".../site-packages/fastapi/routing.py\", line 635, in decorator\r\n self.add_api_route(\r\n File \".../site-packages/fastapi/routing.py\", line 574, in add_api_route\r\n route = route_class(\r\n File \".../site-packages/fastapi/routing.py\", line 398, in __init__\r\n assert is_body_allowed_for_status_code(\r\nAssertionError: Status code 204 must not have a response body\r\n```\r\n\r\nI am working on a fix for it right now.\n\n### Operating System\n\nmacOS\n\n### Operating System Details\n\n_No response_\n\n### FastAPI Version\n\n0.89.0\n\n### Python Version\n\n3.10\n\n### Additional Context\n\n_No response_", "pr_html_url": "https://github.com/fastapi/fastapi/pull/2246", "file_loc": {"base_commit": "c6e950dc9cacefd692dbd8987a3acd12a44b506f", "files": [{"path": ".github/workflows/preview-docs.yml", "status": "modified", "Loc": {"(None, None, None)": {"add": [38]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": [], "doc": [".github/workflows/preview-docs.yml"], "test": [], "config": [], "asset": []}}, {"organization": "3b1b", "repo_name": "manim", "base_commit": "3938f81c1b4a5ee81d5bfc6563c17a225f7e5068", "iss_html_url": "https://github.com/3b1b/manim/issues/1330", "iss_label": "", "title": "Error after installing manim", "body": "I installed all manim & dependecies, but when I ran `python -m manim example_scenes.py OpeningManimExample`, I got the following error:\r\n`Traceback (most recent call last):\r\n File \"c:\\users\\jm\\anaconda3\\lib\\runpy.py\", line 194, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"c:\\users\\jm\\anaconda3\\lib\\runpy.py\", line 87, in _run_code\r\n exec(code, run_globals)\r\n File \"C:\\Users\\jm\\Documents\\work\\manim_new\\manim\\manim.py\", line 5, in \r\n manimlib.main()\r\n File \"C:\\Users\\jm\\Documents\\work\\manim_new\\manim\\manimlib\\__init__.py\", line 9, in main\r\n scenes = 
manimlib.extract_scene.main(config)\r\n File \"C:\\Users\\jm\\Documents\\work\\manim_new\\manim\\manimlib\\extract_scene.py\", line 113, in main\r\n scenes = get_scenes_to_render(all_scene_classes, scene_config, config)\r\n File \"C:\\Users\\jm\\Documents\\work\\manim_new\\manim\\manimlib\\extract_scene.py\", line 74, in get_scenes_to_render\r\n scene = scene_class(**scene_config)\r\n File \"C:\\Users\\jm\\Documents\\work\\manim_new\\manim\\manimlib\\scene\\scene.py\", line 44, in __init__\r\n self.window = Window(self, **self.window_config)\r\n File \"C:\\Users\\jm\\Documents\\work\\manim_new\\manim\\manimlib\\window.py\", line 19, in __init__\r\n super().__init__(**kwargs)\r\n File \"C:\\Users\\jm\\Envs\\manim.new\\lib\\site-packages\\moderngl_window\\context\\pyglet\\window.py\", line 51, in __init__\r\n self._window = PygletWrapper(\r\n File \"C:\\Users\\jm\\Envs\\manim.new\\lib\\site-packages\\pyglet\\window\\win32\\__init__.py\", line 134, in __init__\r\n super(Win32Window, self).__init__(*args, **kwargs)\r\n File \"C:\\Users\\jm\\Envs\\manim.new\\lib\\site-packages\\pyglet\\window\\__init__.py\", line 603, in __init__\r\n config = screen.get_best_config(config)\r\n File \"C:\\Users\\jm\\Envs\\manim.new\\lib\\site-packages\\pyglet\\canvas\\base.py\", line 194, in get_best_config\r\n raise window.NoSuchConfigException()\r\npyglet.window.NoSuchConfigException`.\r\nAny advice? And thank you", "code": null, "pr_html_url": "https://github.com/3b1b/manim/pull/1343", "commit_html_url": null, "file_loc": {"base_commit": "3938f81c1b4a5ee81d5bfc6563c17a225f7e5068", "files": [{"path": "manimlib/window.py", "status": "modified", "Loc": {"('Window', None, 10)": {"mod": [15]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["manimlib/window.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "keras-team", "repo_name": "keras", "base_commit": "84b283e6200bcb051ed976782fbb2b123bf9b8fc", "iss_has_pr": 1, "iss_html_url": "https://github.com/keras-team/keras/issues/19793", "iss_label": "type:bug/performance", "title": "model.keras format much slower to load", "body": "Anyone experiencing unreasonably slow load times when loading a keras-format saved model? 
I have noticed this repeated when working in ipython, where simply instantiating a model via `Model.from_config` then calling `model.load_weights` is much (several factors) faster than loading a `model.keras` file.\r\n\r\nMy understanding is the keras format is simply a zip file with the config.json file and weights h5 (iirc) but weirdly enough, there's something not right going on while loading.", "pr_html_url": "https://github.com/keras-team/keras/pull/19852", "file_loc": {"base_commit": "84b283e6200bcb051ed976782fbb2b123bf9b8fc", "files": [{"path": "keras/src/saving/saving_lib.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [5, 34]}, "(None, '_save_model_to_fileobj', 95)": {"mod": [112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 127, 128, 129, 130, 131, 132, 133, 134, 135]}, "(None, '_load_model_from_fileobj', 157)": {"mod": [175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 186, 187, 188, 189, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204]}, "(None, 'load_weights_only', 239)": {"mod": [253, 254, 255]}}}, {"path": "keras/src/saving/saving_lib_test.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [614]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["keras/src/saving/saving_lib_test.py", "keras/src/saving/saving_lib.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ansible", "repo_name": "ansible", "base_commit": "4cdb266dac852859f695b0555cbe49e58343e69a", "iss_has_pr": 1, "iss_html_url": "https://github.com/ansible/ansible/issues/3539", "iss_label": "bug", "title": "Bug in Conditional Include", "body": "Hi,\n\nI know that when using conditionals on an include, 'All the tasks get evaluated, but the conditional is applied to each and every task'. However this breaks when some of that tasks register variables and other tasks in the group use those variable.\n\nExample:\n\nmain.yml:\n\n```\n- include: extra.yml\n when: do_extra is defined\n```\n\nextra.yml:\n\n```\n- name: check if we can do task A\n shell: check_if_task_A_possible\n register: A_possible\n ignore_errors: yes\n\n- name: task A\n shell: run_task_A\n when: A_possible.rc == 0\n```\n\nNow if you run main.yml and 'do_extra' is not defined, the run will fail on 'task A' because when the 'when' condition is evaluated, the variable A_possible will not exist.\n\nIt is not sufficient to just add the top-level include conditional above the other because right now it looks like the two conditions are compounded and tested together which will still fail because A_possible is not defined. 
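The keras 19793 report above compares rebuilding via `Model.from_config` plus `load_weights` against loading a whole `model.keras` zip. A hedged timing sketch of that comparison; the file names are placeholders and both artifacts are assumed to have been exported beforehand:

```python
import json
import time

import keras

t0 = time.perf_counter()
model = keras.models.load_model("model.keras")   # zip: config.json + weights
t1 = time.perf_counter()

with open("config.json") as f:                   # config exported separately
    rebuilt = keras.Model.from_config(json.load(f))
rebuilt.load_weights("model.weights.h5")
t2 = time.perf_counter()

print(f"load_model: {t1 - t0:.2f}s, from_config + load_weights: {t2 - t1:.2f}s")
```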
I think you would have to run the file level conditional before the task level ones to keep this from happening.\n", "pr_html_url": "https://github.com/ansible/ansible/pull/20158", "file_loc": {"base_commit": "4cdb266dac852859f695b0555cbe49e58343e69a", "files": [{"path": "lib/ansible/modules/windows/win_robocopy.ps1", "status": "modified", "Loc": {"(None, None, None)": {"mod": [25, 26, 27, 28, 73, 76, 93, 94, 95, 114, 115, 167, 168]}}}, {"path": "lib/ansible/modules/windows/win_robocopy.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [132]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["lib/ansible/modules/windows/win_robocopy.ps1", "lib/ansible/modules/windows/win_robocopy.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "psf", "repo_name": "requests", "base_commit": "f5dacf84468ab7e0631cc61a3f1431a32e3e143c", "iss_has_pr": 1, "iss_html_url": "https://github.com/psf/requests/issues/2654", "iss_label": "Feature Request\nContributor Friendly", "title": "utils.get_netrc_auth silently fails when netrc exists but fails to parse", "body": "My .netrc contains a line for the github auth, [like this](https://gist.github.com/wikimatze/9790374).\n\nIt turns out that `netrc.netrc()` doesn't like that:\n\n```\n>>> from netrc import netrc\n>>> netrc()\nTraceback (most recent call last):\n File \"\", line 1, in \n File \"/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/netrc.py\", line 35, in __init__\n self._parse(file, fp, default_netrc)\n File \"/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/netrc.py\", line 117, in _parse\n file, lexer.lineno)\nnetrc.NetrcParseError: bad follower token 'protocol' (/Users/david/.netrc, line 9)\n```\n\n`get_netrc_auth` catches the `NetrcParseError` [but just ignores it](https://github.com/kennethreitz/requests/blob/master/requests/utils.py#L106).\n\nAt least having it emit a warning would have saved some hair-pulling.\n", "pr_html_url": "https://github.com/psf/requests/pull/2656", "file_loc": {"base_commit": "f5dacf84468ab7e0631cc61a3f1431a32e3e143c", "files": [{"path": "requests/utils.py", "status": "modified", "Loc": {"(None, 'get_netrc_auth', 70)": {"mod": [70, 108, 109]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["requests/utils.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "0877741b0350d200be7f1e6cca2780a25ee29cd0", "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/5851", "iss_label": "bug", "title": "Inference failing using ExLlamav2 version 0.0.18", "body": "### Describe the bug\r\n\r\nSince ExLlamav2 was upgraded to version 0.0.18 in the requirements.txt, inference using it is no longer working and fails with the error in the logs below. Reverting to version 0.0.17 resolves the issue.\r\n\r\n### Is there an existing issue for this?\r\n\r\n- [X] I have searched the existing issues\r\n\r\n### Reproduction\r\n\r\n1. Install latest main branch (current commit is `26d822f64f2a029306b250b69dc58468662a4fc6`)\r\n2. Download `GPTQ` model\r\n3. Use `ExLlamav2_HF` model loader\r\n4. Go to `Chat` tab and ask the AI a question.\r\n5. 
Observe error, even though the model loaded successfully.\r\n\r\n### Screenshot\r\n\r\n_No response_\r\n\r\n### Logs\r\n\r\n```shell\r\n21:35:11-061459 INFO Loading \"TheBloke_dolphin-2.6-mistral-7B-GPTQ\"\r\n21:35:13-842112 INFO LOADER: \"ExLlamav2\"\r\n21:35:13-843422 INFO TRUNCATION LENGTH: 32768\r\n21:35:13-844234 INFO INSTRUCTION TEMPLATE: \"Alpaca\"\r\n21:35:13-845014 INFO Loaded the model in 2.78 seconds.\r\nTraceback (most recent call last):\r\n File \"/workspace/text-generation-webui/modules/text_generation.py\", line 429, in generate_reply_custom\r\n for reply in shared.model.generate_with_streaming(question, state):\r\n File \"/workspace/text-generation-webui/modules/exllamav2.py\", line 140, in generate_with_streaming\r\n self.generator.begin_stream(ids, settings, loras=self.loras)\r\n File \"/workspace/venvs/text-generation-webui/lib/python3.10/site-packages/exllamav2/generator/streaming.py\", line 198, in begin_stream\r\n self.begin_stream_ex(input_ids,\r\n File \"/workspace/venvs/text-generation-webui/lib/python3.10/site-packages/exllamav2/generator/streaming.py\", line 296, in begin_stream_ex\r\n self._gen_begin_reuse(input_ids, gen_settings)\r\n File \"/workspace/venvs/text-generation-webui/lib/python3.10/site-packages/exllamav2/generator/streaming.py\", line 624, in _gen_begin_reuse\r\n self._gen_begin(in_tokens, gen_settings)\r\n File \"/workspace/venvs/text-generation-webui/lib/python3.10/site-packages/exllamav2/generator/streaming.py\", line 586, in _gen_begin\r\n self.model.forward(self.sequence_ids[:, :-1],\r\n File \"/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/workspace/venvs/text-generation-webui/lib/python3.10/site-packages/exllamav2/model.py\", line 694, in forward\r\n r, ls = self._forward(input_ids = input_ids[:, chunk_begin : chunk_end],\r\n File \"/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/workspace/venvs/text-generation-webui/lib/python3.10/site-packages/exllamav2/model.py\", line 776, in _forward\r\n x = module.forward(x, cache = cache, attn_params = attn_params, past_len = past_len, loras = loras, **kwargs)\r\n File \"/workspace/venvs/text-generation-webui/lib/python3.10/site-packages/exllamav2/attn.py\", line 596, in forward\r\n attn_output = flash_attn_func(q_states, k_states, v_states, causal = True)\r\n File \"/workspace/venvs/text-generation-webui/lib/python3.10/site-packages/flash_attn/flash_attn_interface.py\", line 825, in flash_attn_func\r\n return FlashAttnFunc.apply(\r\n File \"/usr/local/lib/python3.10/dist-packages/torch/autograd/function.py\", line 553, in apply\r\n return super().apply(*args, **kwargs) # type: ignore[misc]\r\n File \"/workspace/venvs/text-generation-webui/lib/python3.10/site-packages/flash_attn/flash_attn_interface.py\", line 507, in forward\r\n out, q, k, v, out_padded, softmax_lse, S_dmask, rng_state = _flash_attn_forward(\r\n File \"/workspace/venvs/text-generation-webui/lib/python3.10/site-packages/flash_attn/flash_attn_interface.py\", line 51, in _flash_attn_forward\r\n out, q, k, v, out_padded, softmax_lse, S_dmask, rng_state = flash_attn_cuda.fwd(\r\nRuntimeError: CUDA error: an illegal memory access was encountered\r\nCUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.\r\nFor debugging consider passing CUDA_LAUNCH_BLOCKING=1.\r\nCompile with 
`TORCH_USE_CUDA_DSA` to enable device-side assertions.\r\n```\r\n\r\n\r\n### System Info\r\n\r\n* Ubuntu 22.04 LTS\r\n* Nvidia A5000 GPU on Runpod\r\n* CUDA 12.1\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/oobabooga/text-generation-webui/commit/0877741b0350d200be7f1e6cca2780a25ee29cd0", "file_loc": {"base_commit": "0877741b0350d200be7f1e6cca2780a25ee29cd0", "files": [{"path": "requirements.txt", "status": "modified", "Loc": {"(None, None, 59)": {"mod": [59, 60, 61, 62, 63]}}}, {"path": "requirements_amd.txt", "status": "modified", "Loc": {"(None, None, 45)": {"mod": [45, 46, 47]}}}, {"path": "requirements_amd_noavx2.txt", "status": "modified", "Loc": {"(None, None, 43)": {"mod": [43, 44, 45]}}}, {"path": "requirements_apple_intel.txt", "status": "modified", "Loc": {"(None, None, 41)": {"mod": [41]}}}, {"path": "requirements_apple_silicon.txt", "status": "modified", "Loc": {"(None, None, 43)": {"mod": [43]}}}, {"path": "requirements_noavx2.txt", "status": "modified", "Loc": {"(None, None, 59)": {"mod": [59, 60, 61, 62, 63]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "", "info_type": ""}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements_apple_silicon.txt", "requirements_amd_noavx2.txt", "requirements_apple_intel.txt", "requirements_amd.txt", "requirements.txt", "requirements_noavx2.txt"], "asset": []}}, {"organization": "zylon-ai", "repo_name": "private-gpt", "base_commit": "89477ea9d3a83181b0222b732a81c71db9edf142", "iss_has_pr": 1, "iss_html_url": "https://github.com/zylon-ai/private-gpt/issues/2013", "iss_label": "bug", "title": "[BUG] Another permissions error when installing with docker-compose", "body": "### Pre-check\n\n- [X] I have searched the existing issues and none cover this bug.\n\n### Description\n\nThis looks similar, but not the same as #1876\r\n\r\nAs for following the instructions, I've not seen any relevant guide to installing with Docker, hence working a bit blind. \r\n\r\nBackground: I'm trying to run this on an Asustor NAS, which offers very little ability to customize the environment. Ideally, I'd just like to be able to run this by pasting a docker-compose file into Portainer, and having it work its magic from there:\r\n\r\n---\r\n\r\n```\r\nsal@halob:/volume1/home/sal/apps/private-gpt $ docker-compose up\r\n[+] Running 3/3\r\n \u2714 Network private-gpt_default Created 0.1s\r\n \u2714 Container private-gpt-ollama-1 Created 0.1s\r\n \u2714 Container private-gpt-private-gpt-1 Created 0.1s\r\nAttaching to ollama-1, private-gpt-1\r\nollama-1 | Couldn't find '/root/.ollama/id_ed25519'. 
Generating new private key.\r\nollama-1 | Your new public key is:\r\nollama-1 |\r\nollama-1 | ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBNQkShAIoUDyyueUTiCHM9/AZfZ+rxnUZgmh+YByBVB\r\nollama-1 |\r\nollama-1 | 2024/07/23 23:20:28 routes.go:1096: INFO server config env=\"map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]\"\r\nollama-1 | time=2024-07-23T23:20:28.317Z level=INFO source=images.go:778 msg=\"total blobs: 0\"\r\nollama-1 | time=2024-07-23T23:20:28.317Z level=INFO source=images.go:785 msg=\"total unused blobs removed: 0\"\r\nollama-1 | time=2024-07-23T23:20:28.317Z level=INFO source=routes.go:1143 msg=\"Listening on [::]:11434 (version 0.2.6)\"\r\nollama-1 | time=2024-07-23T23:20:28.318Z level=INFO source=payload.go:30 msg=\"extracting embedded files\" dir=/tmp/ollama1112441504/runners\r\nprivate-gpt-1 | 23:20:29.406 [INFO ] private_gpt.settings.settings_loader - Starting application with profiles=['default', 'docker']\r\nollama-1 | time=2024-07-23T23:20:33.589Z level=INFO source=payload.go:44 msg=\"Dynamic LLM libraries [cpu_avx cpu_avx2 cuda_v11 rocm_v60102 cpu]\"\r\nollama-1 | time=2024-07-23T23:20:33.589Z level=INFO source=gpu.go:205 msg=\"looking for compatible GPUs\"\r\nollama-1 | time=2024-07-23T23:20:33.589Z level=WARN source=gpu.go:225 msg=\"CPU does not have minimum vector extensions, GPU inference disabled\" required=avx detected=\"no vector extensions\"\r\nollama-1 | time=2024-07-23T23:20:33.590Z level=INFO source=types.go:105 msg=\"inference compute\" id=0 library=cpu compute=\"\" driver=0.0 name=\"\" total=\"31.1 GiB\" available=\"28.1 GiB\"\r\nprivate-gpt-1 | There was a problem when trying to write in your cache folder (/nonexistent/.cache/huggingface/hub). You should set the environment variable TRANSFORMERS_CACHE to a writable directory.\r\nprivate-gpt-1 | None of PyTorch, TensorFlow >= 2.0, or Flax have been found. 
Models won't be available and only tokenizers, configuration and file/data utilities can be used.\r\nprivate-gpt-1 | 23:20:40.419 [INFO ] private_gpt.components.llm.llm_component - Initializing the LLM in mode=ollama\r\nprivate-gpt-1 | Traceback (most recent call last):\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 798, in get\r\nprivate-gpt-1 | return self._context[key]\r\nprivate-gpt-1 | ~~~~~~~~~~~~~^^^^^\r\nprivate-gpt-1 | KeyError: \r\nprivate-gpt-1 |\r\nprivate-gpt-1 | During handling of the above exception, another exception occurred:\r\nprivate-gpt-1 |\r\nprivate-gpt-1 | Traceback (most recent call last):\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 798, in get\r\nprivate-gpt-1 | return self._context[key]\r\nprivate-gpt-1 | ~~~~~~~~~~~~~^^^^^\r\nprivate-gpt-1 | KeyError: \r\nprivate-gpt-1 |\r\nprivate-gpt-1 | During handling of the above exception, another exception occurred:\r\nprivate-gpt-1 |\r\nprivate-gpt-1 | Traceback (most recent call last):\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 798, in get\r\nprivate-gpt-1 | return self._context[key]\r\nprivate-gpt-1 | ~~~~~~~~~~~~~^^^^^\r\nprivate-gpt-1 | KeyError: \r\nprivate-gpt-1 |\r\nprivate-gpt-1 | During handling of the above exception, another exception occurred:\r\nprivate-gpt-1 |\r\nprivate-gpt-1 | Traceback (most recent call last):\r\nprivate-gpt-1 | File \"\", line 198, in _run_module_as_main\r\nprivate-gpt-1 | File \"\", line 88, in _run_code\r\nprivate-gpt-1 | File \"/home/worker/app/private_gpt/__main__.py\", line 5, in \r\nprivate-gpt-1 | from private_gpt.main import app\r\nprivate-gpt-1 | File \"/home/worker/app/private_gpt/main.py\", line 6, in \r\nprivate-gpt-1 | app = create_app(global_injector)\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/private_gpt/launcher.py\", line 63, in create_app\r\nprivate-gpt-1 | ui = root_injector.get(PrivateGptUi)\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 91, in wrapper\r\nprivate-gpt-1 | return function(*args, **kwargs)\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 974, in get\r\nprivate-gpt-1 | provider_instance = scope_instance.get(interface, binding.provider)\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 91, in wrapper\r\nprivate-gpt-1 | return function(*args, **kwargs)\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 800, in get\r\nprivate-gpt-1 | instance = self._get_instance(key, provider, self.injector)\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 811, in _get_instance\r\nprivate-gpt-1 | return provider.get(injector)\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 264, in get\r\nprivate-gpt-1 | return injector.create_object(self._cls)\r\nprivate-gpt-1 | 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 998, in create_object\r\nprivate-gpt-1 | self.call_with_injection(init, self_=instance, kwargs=additional_kwargs)\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 1031, in call_with_injection\r\nprivate-gpt-1 | dependencies = self.args_to_inject(\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 91, in wrapper\r\nprivate-gpt-1 | return function(*args, **kwargs)\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 1079, in args_to_inject\r\nprivate-gpt-1 | instance: Any = self.get(interface)\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 91, in wrapper\r\nprivate-gpt-1 | return function(*args, **kwargs)\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 974, in get\r\nprivate-gpt-1 | provider_instance = scope_instance.get(interface, binding.provider)\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 91, in wrapper\r\nprivate-gpt-1 | return function(*args, **kwargs)\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 800, in get\r\nprivate-gpt-1 | instance = self._get_instance(key, provider, self.injector)\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 811, in _get_instance\r\nprivate-gpt-1 | return provider.get(injector)\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 264, in get\r\nprivate-gpt-1 | return injector.create_object(self._cls)\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 998, in create_object\r\nprivate-gpt-1 | self.call_with_injection(init, self_=instance, kwargs=additional_kwargs)\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 1031, in call_with_injection\r\nprivate-gpt-1 | dependencies = self.args_to_inject(\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 91, in wrapper\r\nprivate-gpt-1 | return function(*args, **kwargs)\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 1079, in args_to_inject\r\nprivate-gpt-1 | instance: Any = self.get(interface)\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 91, in wrapper\r\nprivate-gpt-1 | return function(*args, **kwargs)\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File 
\"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 974, in get\r\nprivate-gpt-1 | provider_instance = scope_instance.get(interface, binding.provider)\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 91, in wrapper\r\nprivate-gpt-1 | return function(*args, **kwargs)\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 800, in get\r\nprivate-gpt-1 | instance = self._get_instance(key, provider, self.injector)\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 811, in _get_instance\r\nprivate-gpt-1 | return provider.get(injector)\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 264, in get\r\nprivate-gpt-1 | return injector.create_object(self._cls)\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 998, in create_object\r\nprivate-gpt-1 | self.call_with_injection(init, self_=instance, kwargs=additional_kwargs)\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 1040, in call_with_injection\r\nprivate-gpt-1 | return callable(*full_args, **dependencies)\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/private_gpt/components/vector_store/vector_store_component.py\", line 114, in __init__\r\nprivate-gpt-1 | client = QdrantClient(\r\nprivate-gpt-1 | ^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/qdrant_client/qdrant_client.py\", line 117, in __init__\r\nprivate-gpt-1 | self._client = QdrantLocal(\r\nprivate-gpt-1 | ^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/qdrant_client/local/qdrant_local.py\", line 66, in __init__\r\nprivate-gpt-1 | self._load()\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/qdrant_client/local/qdrant_local.py\", line 97, in _load\r\nprivate-gpt-1 | os.makedirs(self.location, exist_ok=True)\r\nprivate-gpt-1 | File \"\", line 215, in makedirs\r\nprivate-gpt-1 | File \"\", line 225, in makedirs\r\nprivate-gpt-1 | PermissionError: [Errno 13] Permission denied: 'local_data/private_gpt'\r\n^CGracefully stopping... (press Ctrl+C again to force)\r\n[+] Stopping 2/2\r\n \u2714 Container private-gpt-private-gpt-1 Stopped 0.3s\r\n \u2714 Container private-gpt-ollama-1 Stopped \r\n```\n\n### Steps to Reproduce\n\n1. Clone the repo\r\n2. docker-compose build\r\n3. 
docker-compose up\n\n### Expected Behavior\n\nIt should just run\n\n### Actual Behavior\n\nError, as reported above\n\n### Environment\n\nRunning on an Asustor router, docker 25.0.5\n\n### Additional Information\n\n_No response_\n\n### Version\n\nlatest\n\n### Setup Checklist\n\n- [X] Confirm that you have followed the installation instructions in the project\u2019s documentation.\n- [X] Check that you are using the latest version of the project.\n- [X] Verify disk space availability for model storage and data processing.\n- [X] Ensure that you have the necessary permissions to run the project.\n\n### NVIDIA GPU Setup Checklist\n\n- [ ] Check that all the CUDA dependencies are installed and are compatible with your GPU (refer to [CUDA's documentation](https://docs.nvidia.com/deploy/cuda-compatibility/#frequently-asked-questions))\n- [ ] Ensure an NVIDIA GPU is installed and recognized by the system (run `nvidia-smi` to verify).\n- [ ] Ensure proper permissions are set for accessing GPU resources.\n- [ ] Docker users - Verify that the NVIDIA Container Toolkit is configured correctly (e.g. run `sudo docker run --rm --gpus all nvidia/cuda:11.0.3-base-ubuntu20.04 nvidia-smi`)", "pr_html_url": "https://github.com/zylon-ai/private-gpt/pull/2059", "file_loc": {"base_commit": "89477ea9d3a83181b0222b732a81c71db9edf142", "files": [{"path": "Dockerfile.llamacpp-cpu", "status": "modified", "Loc": {"(None, None, None)": {"mod": [3, 23, 30]}}}, {"path": "Dockerfile.ollama", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1, 13, 20]}}}, {"path": "docker-compose.yaml", "status": "modified", "Loc": {"(None, None, None)": {"add": [10, 29, 34], "mod": [15, 47, 60]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": [], "doc": ["docker-compose.yaml"], "test": [], "config": ["Dockerfile.ollama", "Dockerfile.llamacpp-cpu"], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "e04b8e70e60df88751af5cd667cafb66dc32b397", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/26590", "iss_label": "Bug", "title": "KNNImputer add_indicator fails to persist where missing data had been present in training", "body": "### Describe the bug\r\n\r\nHello, I've encountered an issue where the KNNImputer records which fields had missing data at the time `.fit` is called, but those fields are not recognised when `.transform` is later called on a dense matrix with no missing values. I would have expected it to return a 2x3 matrix rather than 2x2, with `missingindicator_A = False` for all cases.\r\n\r\nReproduction steps below. 
Any help much appreciated :)\r\n\r\n### Steps/Code to Reproduce\r\n\r\n```python\r\n>>> import pandas as pd\r\n>>> from sklearn.impute import KNNImputer\r\n>>> knn = KNNImputer(add_indicator=True)\r\n>>> df = pd.DataFrame({'A': [0, None], 'B': [1, 2]})\r\n>>> df\r\n A B\r\n0 0.0 1\r\n1 NaN 2\r\n>>> knn.fit(df)\r\nKNNImputer(add_indicator=True)\r\n>>> pd.DataFrame(knn.transform(df), columns=knn.get_feature_names_out())\r\n A B missingindicator_A\r\n0 0.0 1.0 0.0\r\n1 0.0 2.0 1.0\r\n>>> df['A'] = 0\r\n>>> pd.DataFrame(knn.transform(df), columns=knn.get_feature_names_out())\r\n```\r\n\r\n### Expected Results\r\n\r\n```\r\n A B missingindicator_A\r\n0 0.0 1.0 0.0\r\n1 0.0 2.0 0.0\r\n```\r\n\r\n### Actual Results\r\n\r\n```pytb\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\nCell In[30], line 1\r\n----> 1 pd.DataFrame(knn.transform(df), columns=knn.get_feature_names_out())\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/pandas/core/frame.py:694, in DataFrame.__init__(self, data, index, columns, dtype, copy)\r\n 684 mgr = dict_to_mgr(\r\n 685 # error: Item \"ndarray\" of \"Union[ndarray, Series, Index]\" has no\r\n 686 # attribute \"name\"\r\n (...)\r\n 691 typ=manager,\r\n 692 )\r\n 693 else:\r\n--> 694 mgr = ndarray_to_mgr(\r\n 695 data,\r\n 696 index,\r\n 697 columns,\r\n 698 dtype=dtype,\r\n 699 copy=copy,\r\n 700 typ=manager,\r\n 701 )\r\n 703 # For data is list-like, or Iterable (will consume into list)\r\n 704 elif is_list_like(data):\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/pandas/core/internals/construction.py:351, in ndarray_to_mgr(values, index, columns, dtype, copy, typ)\r\n 346 # _prep_ndarray ensures that values.ndim == 2 at this point\r\n 347 index, columns = _get_axes(\r\n 348 values.shape[0], values.shape[1], index=index, columns=columns\r\n 349 )\r\n--> 351 _check_values_indices_shape_match(values, index, columns)\r\n 353 if typ == \"array\":\r\n 355 if issubclass(values.dtype.type, str):\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/pandas/core/internals/construction.py:422, in _check_values_indices_shape_match(values, index, columns)\r\n 420 passed = values.shape\r\n 421 implied = (len(index), len(columns))\r\n--> 422 raise ValueError(f\"Shape of passed values is {passed}, indices imply {implied}\")\r\n\r\nValueError: Shape of passed values is (2, 2), indices imply (2, 3)\r\n```\r\n\r\n### Versions\r\n\r\n```shell\r\npython3, sklearn = 1.2.1\r\n```\r\n", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/26600", "file_loc": {"base_commit": "e04b8e70e60df88751af5cd667cafb66dc32b397", "files": [{"path": "doc/whats_new/v1.3.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [14]}}}, {"path": "sklearn/impute/_knn.py", "status": "modified", "Loc": {"('KNNImputer', 'transform', 242)": {"mod": [285]}}}, {"path": "sklearn/impute/tests/test_common.py", "status": "modified", "Loc": {"(None, 'test_keep_empty_features', 171)": {"add": [183]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["sklearn/impute/_knn.py"], "doc": ["doc/whats_new/v1.3.rst"], "test": ["sklearn/impute/tests/test_common.py"], "config": [], "asset": []}}, {"organization": "nvbn", "repo_name": "thefuck", "base_commit": "9660ec7813a0e77ec3411682b0084d07b540084e", "iss_has_pr": 1, "iss_html_url": 
"https://github.com/nvbn/thefuck/issues/543", "iss_label": "", "title": "Adding sudo works for `aura -Sy` but not `aura -Ay`", "body": "`fuck` is unable to add `sudo` to an `aura -Ay` command:\n\n```\n$ aura -Ay foobar-beta-git # from AUR\naura >>= You have to use `sudo` for that.\n$ fuck\nNo fucks given\n```\n\nBut works as expected for `aura -Sy`:\n\n```\n$ aura -Sy foobar # pacman alias\nerror: you cannot perform this operation unless you are root.\naura >>= Please check your input.\n$ fuck\nsudo aura -Sy foobar [enter/\u2191/\u2193/ctrl+c]\n```\n\nIt's slightly annoying anyway that the `aura` outut is different in these cases, but is it possible for `thefuck` to work-around? Or is the only way for `aura` to give a stderr message containing \"root\"?\n", "pr_html_url": "https://github.com/nvbn/thefuck/pull/557", "file_loc": {"base_commit": "9660ec7813a0e77ec3411682b0084d07b540084e", "files": [{"path": "thefuck/rules/sudo.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [22]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["thefuck/rules/sudo.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "2707099b23a0a8580731553629566c1182d26f48", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/29294", "iss_label": "Moderate\nhelp wanted", "title": "ConvergenceWarnings cannot be turned off", "body": "Hi, I'm unable to turn off convergence warnings from `GraphicalLassoCV`.\r\n\r\nI've tried most of the solutions from, and none of them worked (see below for actual implementations):\r\nhttps://stackoverflow.com/questions/879173/how-to-ignore-deprecation-warnings-in-python\r\nhttps://stackoverflow.com/questions/32612180/eliminating-warnings-from-scikit-learn/33812427#33812427\r\nhttps://stackoverflow.com/questions/53968004/how-to-silence-all-sklearn-warning\r\nhttps://stackoverflow.com/questions/14463277/how-to-disable-python-warnings\r\n\r\nContrary to what the designers of the sklearn's exceptions must have thought when it was implemented, some of us actually use stdout to log important information of the host program for diagnostics purposes. Flooding it with garbage that cannot be turned off, as is in the case with cross-validation, is not ok. \r\n\r\nTo briefly speak to the severity of the issue, the above sklearn-specific questions relating to suppressing warnings have been viewed ~500K times with combined ~400 upvotes, and dates back 7 years. 
\r\n\r\nI've tried the following (`n_jobs` parameter does not appear to affect the result):\r\n\r\n```py\r\nfrom sklearn.covariance import GraphicalLassoCV\r\nfrom sklearn.exceptions import ConvergenceWarning\r\nimport warnings\r\nwarnings.filterwarnings(\"ignore\", category=ConvergenceWarning)\r\n\r\nmodel = GraphicalLassoCV(n_jobs=4)\r\nmodel = model.fit(data)\r\n```\r\n\r\n```py\r\nfrom sklearn.covariance import GraphicalLassoCV\r\nimport warnings\r\nwarnings.filterwarnings(action='ignore')\r\n\r\nmodel = GraphicalLassoCV(n_jobs=4)\r\nmodel = model.fit(data)\r\n```\r\n\r\n```py\r\nfrom sklearn.covariance import GraphicalLassoCV\r\nfrom sklearn.exceptions import ConvergenceWarning\r\nimport warnings\r\nwith warnings.catch_warnings():\r\n    warnings.simplefilter(\"ignore\", ConvergenceWarning)\r\n\r\n    model = GraphicalLassoCV(n_jobs=4)\r\n    model = model.fit(data)\r\n```\r\n\r\n```py\r\nfrom sklearn.covariance import GraphicalLassoCV\r\ndef warn(*args, **kwargs):\r\n    pass\r\nimport warnings\r\nwarnings.warn = warn\r\n\r\nmodel = GraphicalLassoCV(n_jobs=4)\r\nmodel = model.fit(data)\r\n```\r\n\r\n```py\r\nfrom sklearn.covariance import GraphicalLassoCV\r\nimport contextlib\r\nimport os, sys\r\n\r\n@contextlib.contextmanager\r\ndef suppress_stdout():\r\n    with open(os.devnull, 'w') as fnull:\r\n        old_stdout = sys.stdout\r\n        sys.stdout = fnull\r\n        try:\r\n            yield\r\n        finally:\r\n            sys.stdout = old_stdout\r\n\r\nwith suppress_stdout():\r\n    model = GraphicalLassoCV(n_jobs=4)\r\n    model = model.fit(data)\r\n```\r\n\r\n```py\r\nfrom sklearn.covariance import GraphicalLassoCV\r\nimport logging\r\nlogging.captureWarnings(True)\r\n\r\nlogging.getLogger(\"py.warnings\").setLevel(logging.ERROR)\r\n\r\nmodel = GraphicalLassoCV(n_jobs=4)\r\nmodel = model.fit(data)\r\n```", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/30380", "file_loc": {"base_commit": "2707099b23a0a8580731553629566c1182d26f48", "files": [{"path": "sklearn/utils/parallel.py", "status": "modified", "Loc": {"('_FuncWrapper', 'with_config', 121)": {"add": [122]}, "(None, '_with_config', 24)": {"mod": [24, 26, 27]}, "('Parallel', '__call__', 54)": {"mod": [73, 74, 77]}, "('_FuncWrapper', None, 114)": {"mod": [121]}, "('_FuncWrapper', '__call__', 125)": {"mod": [126, 127, 137, 138]}}}, {"path": "sklearn/utils/tests/test_parallel.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [1, 11]}, "(None, 'test_dispatch_config_parallel', 56)": {"add": [100]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["sklearn/utils/parallel.py"], "doc": [], "test": ["sklearn/utils/tests/test_parallel.py"], "config": [], "asset": []}}, {"organization": "All-Hands-AI", "repo_name": "OpenHands", "base_commit": "7b2b1eff57e41364b4b427e36e766607e7eed3a0", "iss_has_pr": 1, "iss_html_url": "https://github.com/All-Hands-AI/OpenHands/issues/20", "iss_label": "", "title": "Control Loop: long term planning and execution", "body": "The biggest, most complicated aspect of Devin is long-term planning and execution. I'd like to start a discussion about how this might work in OpenDevin.\r\n\r\nThere's some [recent prior work from Microsoft](https://arxiv.org/pdf/2403.08299.pdf) with some impressive results. I'll summarize here, with some commentary.\r\n\r\n## Overall Flow\r\n* User specifies objective and associated settings\r\n* Conversation Manager kicks in\r\n* Sends convo to Agent Scheduler\r\n* Agents execute commands\r\n* Output is placed back into the conversation\r\n* Rinse and repeat\r\n\r\n## Configuration \r\n* A YAML file defines a set of actions/commands the bot can take (e.g. 
`npm test`)\r\n * comment: why not just leave it open-ended?\r\n* You can have different agents with different capabilities, e.g. a \"dev agent\" and a \"reviewer agent\", who work collaboratively\r\n * comment: this sounds like MetaGPT\r\n \r\n## Components\r\n### Conversation Manager\r\n* maintains message history and command outputs\r\n* decides when to interrupt the conversation\r\n * comment: for what? more info from the user?\r\n* decides when the conversation is over, i.e. task has been completed\r\n * agent can send a \"stop\" command, max tokens can be reached, problems w/ execution environment\r\n### Parser\r\n* interprets agent output and turns it into commands, file edits, etc\r\n* in case of parsing failure, a message is sent back to the agent to rewrite its command\r\n### Output Organizer\r\n* Takes command output and selectively places it into the conversation history\r\n * sometimes summarizes the content first\r\n * comment: why not just drop everything back into the conversation history (maybe truncating really long CLI output) \r\n### Agent Scheduler\r\n* orchestrates different agents\r\n* uses different algos for deciding who gets to go next\r\n * round-robin: everyone takes turns in order\r\n * token-based: agent gets to keep going until it says it's done\r\n * priority-based: agents go based on (user defined?) priority\r\n### Tools Library\r\n * file editing (can edit entire file, or specify start line and end line)\r\n * retrieval (file contents, `ls`, `grep`). Seems to use vector search as well\r\n * build and execution: abstracts away the implementation in favor of simple commands like `build foo`\r\n * testing and validation: includes linters and bug-finding utils\r\n * git: can commit, push, merge\r\n * communication: can ask human for input/feedback, can talk to other agents\r\n### Evaluation Environment\r\n* runs in Docker\r\n\r\n", "pr_html_url": "https://github.com/All-Hands-AI/OpenHands/pull/3771", "file_loc": {"base_commit": "7b2b1eff57e41364b4b427e36e766607e7eed3a0", "files": [{"path": ".gitignore", "status": "modified", "Loc": {"(None, None, None)": {"add": [230]}}}, {"path": "containers/runtime/README.md", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1, 3, 5, 9]}}}, {"path": "frontend/src/components/AgentStatusBar.tsx", "status": "modified", "Loc": {"(None, None, None)": {"add": [20, 92], "mod": [94, 95, 96, 97, 98, 99, 100]}}}, {"path": "frontend/src/i18n/translation.json", "status": "modified", "Loc": {"(None, None, None)": {"add": [465, 482], "mod": [75, 81, 87, 339, 344, 389, 392, 393, 397, 402, 407, 412, 417, 422, 427, 432, 437, 442, 447, 452, 457, 462, 467, 472, 478, 490, 496, 499, 502, 505, 508, 511, 514, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 536, 541, 546, 551, 556, 561, 566, 571, 576, 581, 586, 605, 610, 615, 620, 638, 643, 648, 653, 658, 690, 736, 741, 746, 751, 757, 763, 769, 775, 781, 786, 791, 794, 799, 805, 811, 816, 817, 822, 823]}}}, {"path": "frontend/src/services/actions.ts", "status": "modified", "Loc": {"(None, None, None)": {"add": [8, 140], "mod": [12]}, "(None, 'handleAssistantMessage', 141)": {"add": [153], "mod": [152]}}}, {"path": "frontend/src/services/session.ts", "status": "modified", "Loc": {"(None, None, None)": {"add": [10]}, "('Session', None, 11)": {"add": [15, 147, 148]}, "('Session', '_setupSocket', 76)": {"add": [85, 117], "mod": [97]}, "('Session', 'send', 148)": {"mod": [150]}}}, {"path": "frontend/src/store.ts", "status": "modified", "Loc": {"(None, None, None)": {"add": [10, 
21]}}}, {"path": "frontend/src/types/Message.tsx", "status": "modified", "Loc": {"(None, None, None)": {"add": [33]}}}, {"path": "frontend/src/types/ResponseType.tsx", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1, 3]}}}, {"path": "openhands/core/main.py", "status": "modified", "Loc": {"(None, 'create_runtime', 50)": {"mod": [58]}}}, {"path": "openhands/runtime/client/client.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [18, 20, 564]}}}, {"path": "openhands/runtime/client/runtime.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [4]}, "('EventStreamRuntime', '__init__', 115)": {"add": [121, 132, 133, 159, 181], "mod": [149, 172, 174]}, "('EventStreamRuntime', '_init_container', 197)": {"add": [283], "mod": [204, 205, 206, 244, 248, 254]}, "('EventStreamRuntime', '_find_available_port', 534)": {"add": [541]}}}, {"path": "openhands/runtime/e2b/runtime.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0]}, "('E2BRuntime', '__init__', 21)": {"add": [27], "mod": [29]}}}, {"path": "openhands/runtime/remote/runtime.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [4]}, "('RemoteRuntime', '__init__', 51)": {"add": [57], "mod": [171]}}}, {"path": "openhands/runtime/runtime.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [5]}, "('Runtime', '__init__', 54)": {"add": [60, 65]}}}, {"path": "openhands/server/session/agent.py", "status": "renamed", "Loc": {"(None, None, None)": {"add": [0]}, "('AgentSession', 'start', 40)": {"add": [48], "mod": [67]}, "('AgentSession', '_create_security_analyzer', 92)": {"add": [100], "mod": [99]}, "('AgentSession', '_create_runtime', 105)": {"add": [123], "mod": [115, 119]}, "('AgentSession', None, 13)": {"add": [125], "mod": [105]}, "('AgentSession', '_create_controller', 126)": {"mod": [181]}}}, {"path": "openhands/server/session/manager.py", "status": "modified", "Loc": {"('SessionManager', 'send', 36)": {"mod": [38, 40]}}}, {"path": "openhands/server/session/session.py", "status": "modified", "Loc": {"('Session', None, 30)": {"add": [35]}, "('Session', '__init__', 37)": {"add": [47]}, "('Session', '_initialize_agent', 71)": {"add": [115]}, "('Session', 'send', 167)": {"add": [174]}, "('Session', 'load_from_data', 192)": {"add": [197]}, "(None, None, None)": {"mod": [24]}, "('Session', 'on_event', 127)": {"mod": [128, 138]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["openhands/runtime/e2b/runtime.py", "frontend/src/types/Message.tsx", "frontend/src/types/ResponseType.tsx", "frontend/src/store.ts", "openhands/runtime/remote/runtime.py", "openhands/runtime/runtime.py", "frontend/src/services/session.ts", "openhands/server/session/agent.py", "openhands/core/main.py", "frontend/src/i18n/translation.json", "openhands/server/session/session.py", "openhands/runtime/client/client.py", "frontend/src/components/AgentStatusBar.tsx", "openhands/runtime/client/runtime.py", "frontend/src/services/actions.ts", "openhands/server/session/manager.py"], "doc": ["containers/runtime/README.md"], "test": [], "config": [".gitignore"], "asset": []}}, {"organization": "All-Hands-AI", "repo_name": "OpenHands", "base_commit": "2242702cf94eab7275f2cb148859135018d9b280", "iss_has_pr": 1, "iss_html_url": "https://github.com/All-Hands-AI/OpenHands/issues/1251", "iss_label": "enhancement", "title": "Sandbox Capabilities Framework", 
"body": "**Summary**\r\nWe have an existing use case for a Jupyter-aware agent, which always runs in a sandbox where Jupyter is available. There are some other scenarios I can think of where an agent might want some guarantees about what it can do with the sandbox:\r\n* We might want a \"postgres migration writer\", which needs access to a postgres instance\r\n* We might have a \"cypress test creator\" agent, which would need access to cypress\r\n* Further down the road, we might want to have an [Open Interpreter](https://github.com/OpenInterpreter/open-interpreter) agent, which needs access to osascript\r\n* etc etc\r\n\r\nThis proposal would allow agents to guarantee that certain programs are available in the sandbox, or that certain services are running in a predictable way.\r\n\r\n\r\nWhat if we did something like this:\r\n\r\n\r\n**Motivation**\r\nWe want agents to be able to have certain guarantees about the sandbox environment. But we also want our sandbox interface to be generic--something like \"you have a bash terminal\".\r\n\r\nThe latter is especially important, because we want users to be able to bring their own sandbox images. E.g. you might use an off-the-shelf haskell image if your project uses haskell--otherwise you'd need to go through the install process every time you start OpenDevin, or maintain a fork of the sandbox.\r\n\r\n**Technical Design**\r\n* For every requirement we support (e.g. jupyter, postgres, cypress), we have a bash script that\r\n * checks if it's installed\r\n * if not, installs it\r\n * maybe starts something in the background\r\n* Let agents specify a list of requirements\r\n * e.g. CodeActAgent could say requirements: ['jupyter']\r\n* When we start the Agent+Sandbox pair, we run the necessary bash scripts\r\n * should be pretty quick if the requirement is already built into the image\r\n* Then the agent has some guarantees about the requirement being met, and how it's running\r\n * e.g. we can put in the prompt \"there's a postgres server running on port 5432, user foo, password bar\"\r\n* If there are specific ways of interacting with that env (e.g. for jupyter, it seems we have to write to a websocket that's open in the sandbox?) the agent can implement custom Actions, like run_in_jupyter\r\n\r\n**Alternatives to Consider**\r\n* Building a bunch of stuff into one big sandbox\r\n* Building special sandboxes that are required by certain agents (e.g. 
a JupyterSandbox)\r\n\r\n**Additional context**\r\nhttps://opendevin.slack.com/archives/C06QKSD9UBA/p1713552591042089\r\n", "pr_html_url": "https://github.com/All-Hands-AI/OpenHands/pull/1255", "file_loc": {"base_commit": "2242702cf94eab7275f2cb148859135018d9b280", "files": [{"path": "Makefile", "status": "modified", "Loc": {"(None, None, None)": {"add": [220]}}}, {"path": "agenthub/codeact_agent/codeact_agent.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [17]}, "('CodeActAgent', None, 66)": {"add": [71]}}}, {"path": "opendevin/agent.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [8]}, "('Agent', None, 11)": {"add": [19]}}}, {"path": "opendevin/config.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [4, 20]}, "(None, 'get', 140)": {"add": [147]}}}, {"path": "opendevin/controller/action_manager.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [16]}, "('ActionManager', None, 19)": {"add": [43]}}}, {"path": "opendevin/controller/agent_controller.py", "status": "modified", "Loc": {"('AgentController', '__init__', 41)": {"add": [55]}}}, {"path": "opendevin/sandbox/docker/exec_box.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [6]}, "('DockerExecBox', None, 36)": {"add": [124]}}}, {"path": "opendevin/sandbox/docker/local_box.py", "status": "modified", "Loc": {"('LocalBox', None, 25)": {"add": [41]}}}, {"path": "opendevin/sandbox/docker/ssh_box.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [6, 17, 357], "mod": [359]}, "('DockerSSHBox', 'setup_user', 95)": {"add": [139]}, "('DockerSSHBox', None, 46)": {"add": [210]}, "('DockerSSHBox', 'restart_docker_container', 271)": {"add": [309]}}}, {"path": "opendevin/sandbox/e2b/sandbox.py", "status": "modified", "Loc": {"('E2BBox', None, 14)": {"add": [63]}}}, {"path": "opendevin/sandbox/sandbox.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [5]}, "('Sandbox', 'close', 28)": {"add": [29]}, "('Sandbox', None, 8)": {"mod": [8]}}}, {"path": "opendevin/schema/config.py", "status": "modified", "Loc": {"('ConfigType', None, 4)": {"add": [10]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["opendevin/sandbox/docker/ssh_box.py", "opendevin/schema/config.py", "agenthub/codeact_agent/codeact_agent.py", "opendevin/controller/action_manager.py", "opendevin/sandbox/docker/local_box.py", "opendevin/sandbox/e2b/sandbox.py", "opendevin/sandbox/sandbox.py", "opendevin/sandbox/docker/exec_box.py", "opendevin/agent.py", "opendevin/config.py", "opendevin/controller/agent_controller.py"], "doc": [], "test": [], "config": ["Makefile"], "asset": []}}, {"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "0ea743029db0d47f09d33ef90f50ad84c20b085f", "iss_has_pr": 1, "iss_html_url": "https://github.com/deepfakes/faceswap/issues/263", "iss_label": "", "title": "Very slow extraction with scripts vs fakeapp 1.1", "body": "1080ti + OC'd 2600k using winpython 3.6.2 cuda 9.0 and tensorflow 1.6\r\n\r\n**Training** utilizes ~50% of the GPU now (which is better than the ~25% utilized with FA 1.1) but extraction doesn't seem to utilize the GPU at all (getting around 1.33it/s) whereas with FA 1.1 I get around 17it/s - tried CNN and it dropped down to taking nearly a minute per file. 
Although I say it doesn't utilize the GPU it still seems to use all 11GB of RAM on the GPU, just none of the compute cores or processor are in use. CPU is using about 17%.\r\n\r\nTried using extracted data from FA 1.1 with .py -convert but it just says 'no alignment found for file: x\" for every file even though --alignments points to the path with alignments.json \r\n\r\nI would've thought the alignments.json from FA 1.1 was compatible so I'm not sure if the above is a separate issue or not.", "pr_html_url": "https://github.com/deepfakes/faceswap/pull/259", "file_loc": {"base_commit": "0ea743029db0d47f09d33ef90f50ad84c20b085f", "files": [{"path": "lib/FaceLandmarksExtractor/FaceLandmarksExtractor.py", "status": "modified", "Loc": {"(None, 'initialize', 108)": {"add": [126], "mod": [108, 117, 123, 124, 125]}, "(None, 'extract', 137)": {"mod": [137, 138, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 183]}}}, {"path": "lib/cli.py", "status": "modified", "Loc": {"('DirectoryProcessor', 'get_faces_alignments', 140)": {"mod": [149]}, "('DirectoryProcessor', 'get_faces', 159)": {"mod": [161, 165]}}}, {"path": "lib/faces_detect.py", "status": "modified", "Loc": {"(None, 'detect_faces', 3)": {"mod": [3, 4]}}}, {"path": "scripts/extract.py", "status": "modified", "Loc": {"('ExtractTrainingData', 'add_optional_arguments', 22)": {"mod": [25]}, "('ExtractTrainingData', 'process', 79)": {"mod": [95]}, "('ExtractTrainingData', 'processFiles', 100)": {"mod": [105]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["lib/faces_detect.py", "lib/cli.py", "lib/FaceLandmarksExtractor/FaceLandmarksExtractor.py", "scripts/extract.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "fastapi", "repo_name": "fastapi", "base_commit": "ef176c663195489b44030bfe1fb94a317762c8d5", "iss_has_pr": 1, "iss_html_url": "https://github.com/fastapi/fastapi/issues/3323", "iss_label": "feature\nreviewed", "title": "Support PEP 593 `Annotated` for specifying dependencies and parameters", "body": "### First check\r\n\r\n* [x] I added a very descriptive title to this issue.\r\n* [x] I used the GitHub search to find a similar issue and didn't find it.\r\n* [x] I searched the FastAPI documentation, with the integrated search.\r\n* [x] I already searched in Google \"How to X in FastAPI\" and didn't find any information.\r\n* [x] I already read and followed all the tutorial in the docs and didn't find an answer.\r\n* [x] I already checked if it is not related to FastAPI but to [Pydantic](https://github.com/samuelcolvin/pydantic).\r\n* [x] I already checked if it is not related to FastAPI but to [Swagger UI](https://github.com/swagger-api/swagger-ui).\r\n* [x] I already checked if it is not related to FastAPI but to [ReDoc](https://github.com/Redocly/redoc).\r\n* [x] After submitting this, I commit to:\r\n * Read open issues with questions until I find 2 issues where I can help someone and add a comment to help there.\r\n * Or, I already hit the \"watch\" button in this repository to receive notifications and I commit to help at least 2 people that ask questions in the future.\r\n * Implement a Pull Request for a confirmed bug.\r\n\r\n### Example\r\n\r\nI propose to allow transforming:\r\n\r\n\r\n```Python\r\nfrom typing import Optional\r\n\r\nfrom fastapi import Depends, FastAPI\r\n\r\napp = FastAPI()\r\n\r\n\r\nasync def common_parameters(q: 
Optional[str] = None, skip: int = 0, limit: int = 100):\r\n return {\"q\": q, \"skip\": skip, \"limit\": limit}\r\n\r\n\r\n@app.get(\"/items/\")\r\nasync def read_items(commons: dict = Depends(common_parameters)):\r\n return commons\r\n```\r\nto \r\n```Python\r\nfrom typing import Annotated, Optional\r\n\r\nfrom fastapi import Depends, FastAPI\r\n\r\napp = FastAPI()\r\n\r\n\r\nasync def common_parameters(q: Optional[str] = None, skip: int = 0, limit: int = 100):\r\n return {\"q\": q, \"skip\": skip, \"limit\": limit}\r\n\r\n\r\n@app.get(\"/items/\")\r\nasync def read_items(commons: Annotated[dict, Depends(common_parameters)]):\r\n return commons\r\n```\r\n\r\n### Discussion\r\n[PEP 593](https://www.python.org/dev/peps/pep-0593/) Added `Annotated` for adding additional annotations beyond type annotations. I think FastAPI's `Depends`, `Query`, `Body` and the likes fit well with the kind of additional annotations this supports.\r\n\r\nThis would also make default values less awkward:\r\n```python\r\n@app.get(\"/items/\")\r\nasync def read_items(q: Optional[str] = Query(None, max_length=50)):\r\n pass\r\n```\r\nCould become\r\n```python\r\n@app.get(\"/items/\")\r\nasync def read_items(q: Annotated[Optional[str], Query(max_length=50)] = None):\r\n pass\r\n```\r\n\r\nThis will also solve the issue mentioned [in the docs](https://fastapi.tiangolo.com/tutorial/path-params-numeric-validations/#order-the-parameters-as-you-need) of parameter ordering.\r\n\r\nFinally, it is sometimes convenient to use the same function as both a FastAPI dependency and a regular function. In these cases, because `= Depends(...)` is a default parameter value, if you forget to pass a parameter the error is not caught by your IDE. Worse, it is not caught at runtime because Python will just pass along the `Depends` object. 
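For illustration (an editorial sketch with made-up names, not from the issue text): calling such a function directly hands you the `Depends` marker itself instead of the resolved dependency, and nothing complains at call time:

```py
from fastapi import Depends

def get_settings():
    return {"debug": True}

# Usable as a FastAPI dependency, but also callable as a plain function.
def build_report(settings: dict = Depends(get_settings)):
    return settings

# Outside FastAPI's dependency injection, the default is the raw marker:
print(build_report())  # prints Depends(get_settings), with no error raised
```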
This will probably cause an error down the road, but may silently succeed in some cases.\r\n\r\nI'm willing to implement this if you think it's a good idea.", "pr_html_url": "https://github.com/fastapi/fastapi/pull/4871", "file_loc": {"base_commit": "ef176c663195489b44030bfe1fb94a317762c8d5", "files": [{"path": "fastapi/dependencies/utils.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [58], "mod": [51]}, "(None, 'get_dependant', 282)": {"add": [336], "mod": [301, 303, 307, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335]}, "(None, 'get_param_sub_dependant', 114)": {"mod": [115, 117, 118, 119, 120, 121, 124, 126]}, "(None, 'add_non_field_param_to_dependency', 340)": {"mod": [341, 343, 344, 346, 347, 349, 350, 352, 353, 355, 356, 358, 359]}, "(None, 'get_param_field', 364)": {"mod": [364, 366, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 416]}}}, {"path": "fastapi/param_functions.py", "status": "modified", "Loc": {"(None, 'Path', 7)": {"mod": [8]}}}, {"path": "fastapi/params.py", "status": "modified", "Loc": {"('Path', '__init__', 63)": {"add": [82], "mod": [65, 85]}, "('Form', '__init__', 280)": {"mod": [282]}, "('File', '__init__', 320)": {"mod": [322]}}}, {"path": "fastapi/utils.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1]}, "(None, 'create_response_field', 60)": {"mod": [76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 88]}}}, {"path": "tests/main.py", "status": "modified", "Loc": {"(None, 'get_path_param_id', 52)": {"mod": [52, 53, 56, 57]}}}, {"path": "tests/test_application.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257]}}}, {"path": "tests/test_params_repr.py", "status": "modified", "Loc": {"(None, 'test_path_repr', 22)": {"mod": [22, 23]}}}, {"path": "tests/test_path.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [196]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["fastapi/dependencies/utils.py", "fastapi/utils.py", "fastapi/param_functions.py", "tests/main.py", "fastapi/params.py"], "doc": [], "test": ["tests/test_params_repr.py", "tests/test_application.py", "tests/test_path.py"], "config": [], "asset": []}}, {"organization": "python", "repo_name": "cpython", "base_commit": "e01eeb7b4b8d00b9f5c6acb48957f46ac4e252c0", "iss_has_pr": 1, "iss_html_url": "https://github.com/python/cpython/issues/92417", "iss_label": "docs", "title": "Many references to unsupported Python versions in the stdlib docs", "body": "**Documentation**\r\n\r\nThere are currently many places in the stdlib docs where there are needless comments about how to maintain compatibility with Python versions that are now end-of-life. 
Many of these can now be removed, to improve brevity and clarity in the documentation.\r\n\r\nI plan to submit a number of PRs to fix these.\r\n\r\nPRs:\r\n \r\n- #92418 \r\n- #92419 \r\n- #92420 \r\n- #92421 \r\n- #92422 \r\n- #92423 \r\n- #92424 \r\n- #92425\r\n- https://github.com/python/cpython/pull/92502\r\n- #92538\r\n- #92539\r\n- #92543\r\n- #92544\r\n- [More to come]\r\n\r\nBackports:\r\n- https://github.com/python/cpython/pull/92459\r\n- https://github.com/python/cpython/pull/92460\r\n- https://github.com/python/cpython/pull/92461\r\n- https://github.com/python/cpython/pull/92462\r\n- https://github.com/python/cpython/pull/92463\r\n- https://github.com/python/cpython/pull/92491\r\n- https://github.com/python/cpython/pull/92467\r\n- https://github.com/python/cpython/pull/92468\r\n- https://github.com/python/cpython/pull/92492\r\n- https://github.com/python/cpython/pull/92464\r\n- https://github.com/python/cpython/pull/92465\r\n- https://github.com/python/cpython/pull/92466\r\n- https://github.com/python/cpython/pull/92472\r\n- https://github.com/python/cpython/pull/92473\r\n- https://github.com/python/cpython/pull/92474\r\n- https://github.com/python/cpython/pull/92485\r\n- https://github.com/python/cpython/pull/92486\r\n- https://github.com/python/cpython/pull/92487\r\n- https://github.com/python/cpython/pull/92606\r\n- https://github.com/python/cpython/pull/92607", "pr_html_url": "https://github.com/python/cpython/pull/92539", "file_loc": {"base_commit": "e01eeb7b4b8d00b9f5c6acb48957f46ac4e252c0", "files": [{"path": "Doc/library/unittest.mock-examples.rst", "status": "modified", "Loc": {"(None, None, None)": {"mod": [663]}}}, {"path": "Doc/library/unittest.mock.rst", "status": "modified", "Loc": {"(None, None, None)": {"mod": [2384]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": " ", "info_type": ""}, "loctype": {"code": [], "doc": ["Doc/library/unittest.mock-examples.rst", "Doc/library/unittest.mock.rst"], "test": [], "config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "23d8761615d0417eef5f52cc796518e44d41ca2a", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/19248", "iss_label": "Documentation\nmodule:cluster", "title": "Birch should be called BIRCH", "body": "C.f. the original paper.\r\nZhang, T.; Ramakrishnan, R.; Livny, M. (1996). \"BIRCH: an efficient data clustering method for very large databases\". Proceedings of the 1996 ACM SIGMOD international conference on Management of data - SIGMOD '96. pp. 103\u2013114. 
doi:10.1145/233269.233324", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/19368", "file_loc": {"base_commit": "23d8761615d0417eef5f52cc796518e44d41ca2a", "files": [{"path": "doc/modules/clustering.rst", "status": "modified", "Loc": {"(None, None, None)": {"mod": [106, 946, 965, 999, 1001, 1005]}}}, {"path": "examples/cluster/plot_birch_vs_minibatchkmeans.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [6, 39, 48, 58, 78]}}}, {"path": "examples/cluster/plot_cluster_comparison.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [146]}}}, {"path": "sklearn/cluster/_birch.py", "status": "modified", "Loc": {"('Birch', None, 335)": {"mod": [336]}, "('Birch', '_global_clustering', 648)": {"mod": [677]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["examples/cluster/plot_birch_vs_minibatchkmeans.py", "sklearn/cluster/_birch.py", "examples/cluster/plot_cluster_comparison.py"], "doc": ["doc/modules/clustering.rst"], "test": [], "config": [], "asset": []}}, {"organization": "localstack", "repo_name": "localstack", "base_commit": "65b807e4e95fe6da3e30f13e4271dc9dcfaa334e", "iss_has_pr": 1, "iss_html_url": "https://github.com/localstack/localstack/issues/402", "iss_label": "type: bug", "title": "Dynamodbstreams Use Kinesis Shard Identifiers", "body": "\r\n\r\nDynamodbstreams seem to be making use of Kinesis shard identifiers which are considered invalid by botocore request validators.\r\n\r\nError response from boto3 when attempting to `get_shard_iterator` from shard ids returned from `describe_stream`:\r\n\r\n```\r\n[test-integration:L51:27s] exception = ParamValidationError(u'Parameter validation failed:\\nInvalid length for parameter ShardId, value: 20, valid range: 28-inf',)\r\n[test-integration:L52:27s]\r\n[test-integration:L53:27s] def _reraise_exception(self, exception):\r\n[test-integration:L54:27s] if hasattr(exception, 'response'):\r\n[test-integration:L55:27s] code = exception.response['Error']['Code']\r\n[test-integration:L56:27s]\r\n[test-integration:L57:27s] if code == 'TrimmedDataAccessException':\r\n[test-integration:L58:27s] raise TrimmedRecordsException()\r\n[test-integration:L59:27s] elif code == 'ResourceNotFoundException':\r\n[test-integration:L60:27s] raise ResourceDNEException()\r\n[test-integration:L61:27s]\r\n[test-integration:L62:27s] > raise exception\r\n[test-integration:L63:27s] E ParamValidationError: Parameter validation failed:\r\n[test-integration:L64:27s] E Invalid length for parameter ShardId, value: 20, valid range: 28-inf\r\n[test-integration:L65:27s]\r\n[test-integration:L66:27s] .tox/py27/lib/python2.7/site-packages/pyrokinesis/dynamodbstreams_ingress_backend.py:111: ParamValidationError\r\n```\r\n\r\nThe following is the response object I am getting back when I `describe_stream` on the stream's ARN:\r\n\r\n```\r\n[test-integration:L68:27s] {'ResponseMetadata': {'HTTPStatusCode': 200, 'RetryAttempts': 0, 'HTTPHeaders': {'content-length': '692', 'access-control-allow-origin': '*', 'date': 'Fri, 13 Oct 2017 12:47:00 GMT', 'server': 'Werkzeug/0.12.2 Python/2.7.13', 'content-type': 'application/json'}}, u'StreamDescription': {u'StreamLabel': u'TODO', u'StreamArn': u'arn:aws:dynamodb:us-east-1:000000000000:table/DynamoTest/stream/2017-10-13T12:47:00', u'Shards': [{u'ShardId': u'shardId-000000000000', u'SequenceNumberRange': {u'StartingSequenceNumber': 
u'49577893583130519883135457518096755974321873497073123330'}}], u'KeySchema': [{u'KeyType': u'HASH', u'AttributeName': u'ID'}], u'TableName': u'DynamoTest', u'StreamStatus': u'ENABLED'}}\r\n```\r\n\r\nMy localstack setup:\r\n\r\n```\r\nlocalstack 0.7.3\r\n\r\n[localstack:L2:1s] 2017-10-13 15:10:35,915 INFO spawned: 'dashboard' with pid 13\r\n[localstack:L3:1s] 2017-10-13 15:10:35,917 INFO spawned: 'infra' with pid 14\r\n[localstack:L4:1s] (. .venv/bin/activate; bin/localstack web --port=8080)\r\n[localstack:L5:1s] (. .venv/bin/activate; exec bin/localstack start)\r\n[localstack:L6:1s] Starting local dev environment. CTRL-C to quit.\r\n[localstack:L7:1s] * Running on http://0.0.0.0:8080/ (Press CTRL+C to quit)\r\n[localstack:L8:1s] * Restarting with stat\r\n[localstack:L9:1s] Starting mock Kinesis (http port 4568)...\r\n[localstack:L10:1s] Starting mock S3 (http port 4572)...\r\n[localstack:L11:1s] Starting mock DynamoDB (http port 4569)...\r\n[localstack:L12:1s] * Debugger is active!\r\n[localstack:L13:2s] * Debugger PIN: 281-540-735\r\n[localstack:L14:2s] 2017-10-13 15:10:37,123 INFO success: dashboard entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)\r\n[localstack:L15:2s] 2017-10-13 15:10:37,123 INFO success: infra entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)\r\n[localstack:L16:2s] Starting mock DynamoDB Streams service (http port 4570)...\r\n[localstack:L17:2s] Listening at http://:::4565\r\n[localstack:L18:2s] Initializing DynamoDB Local with the following configuration:\r\n[localstack:L19:2s] Port:\t4564\r\n[localstack:L20:2s] InMemory:\tfalse\r\n[localstack:L21:2s] DbPath:\t/tmp/localstack/dynamodb\r\n[localstack:L22:2s] SharedDb:\ttrue\r\n[localstack:L23:2s] shouldDelayTransientStatuses:\tfalse\r\n[localstack:L24:2s] CorsParams:\t*\r\n[localstack:L25:2s]\r\n[localstack:L26:2s] * Running on http://0.0.0.0:4563/ (Press CTRL+C to quit)\r\n```\r\n\r\n", "pr_html_url": "https://github.com/localstack/localstack/pull/403", "file_loc": {"base_commit": "65b807e4e95fe6da3e30f13e4271dc9dcfaa334e", "files": [{"path": "localstack/services/dynamodbstreams/dynamodbstreams_api.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [1, 119]}, "(None, 'post_request', 47)": {"add": [76], "mod": [70, 78]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["localstack/services/dynamodbstreams/dynamodbstreams_api.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "pallets", "repo_name": "flask", "base_commit": "ee76129812419d473eb62434051e81d5855255b6", "iss_has_pr": 1, "iss_html_url": "https://github.com/pallets/flask/issues/602", "iss_label": "", "title": "Misspelling in docs @ flask.Flask.handle_exception", "body": "`Default exception handling that kicks in when an exception occours that is not caught. In debug mode the exception will be re-raised immediately, otherwise it is logged and the handler for a 500 internal server error is used. If no such handler exists, a default 500 internal server error message is displayed.`\n\nOccours should be occurs.\n\nI looked around in the project code to see if I could update this, but it looks like the docs subdir is no longer used? 
I could be wrong; if you let me know where this is, I'll update it and send a PR :)\n", "pr_html_url": "https://github.com/pallets/flask/pull/603", "file_loc": {"base_commit": "ee76129812419d473eb62434051e81d5855255b6", "files": [{"path": "flask/app.py", "status": "modified", "Loc": {"('Flask', 'handle_exception', 1266)": {"mod": [1268]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "3", "iss_reason": "1", "loc_way": "pr", "loc_scope": "Not sure of the issue category, since this is a developer asking about a typo error", "info_type": ""}, "loctype": {"code": ["flask/app.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ansible", "repo_name": "ansible", "base_commit": "79d00adc52a091d0ddd1d8a96b06adf2f67f161b", "iss_has_pr": 1, "iss_html_url": "https://github.com/ansible/ansible/issues/36378", "iss_label": "cloud\naws\nmodule\naffects_2.4\nsupport:certified\ndocs", "title": "Documentation Error for ec2_vpc_nacl rules", "body": "##### ISSUE TYPE\r\n - Documentation Report\r\n\r\n##### COMPONENT NAME\r\nec2_vpc_nacl\r\n\r\n##### ANSIBLE VERSION\r\n```\r\nansible 2.4.3.0\r\n config file = None\r\n configured module search path = [u'/home/ubuntu/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']\r\n ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible\r\n executable location = /usr/local/bin/ansible\r\n python version = 2.7.12 (default, Dec 4 2017, 14:50:18) [GCC 5.4.0 20160609]\r\n```\r\n\r\n##### CONFIGURATION\r\n\r\n##### OS / ENVIRONMENT\r\nN/A\r\n\r\n##### SUMMARY\r\nThe example documentation is the wrong way round for ec2_vpc_nacl with respect to the icmp code and type.\r\n\r\n##### STEPS TO REPRODUCE\r\nhttps://github.com/ansible/ansible/blob/devel/lib/ansible/modules/cloud/amazon/ec2_vpc_nacl.py#L87 has the order of the `icmp_code` and `icmp_type` inverted compared to the code that parses it: https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/cloud/amazon/ec2_vpc_nacl.py#L298\r\n\r\n##### EXPECTED RESULTS\r\n\r\n##### ACTUAL RESULTS\r\n", "pr_html_url": "https://github.com/ansible/ansible/pull/36380", "file_loc": {"base_commit": "79d00adc52a091d0ddd1d8a96b06adf2f67f161b", "files": [{"path": "lib/ansible/modules/cloud/amazon/ec2_vpc_nacl.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [87]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["lib/ansible/modules/cloud/amazon/ec2_vpc_nacl.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "a32e238801d0a8f3c1bd97b98d038b40977a8cc6", "iss_has_pr": 1, "iss_html_url": "https://github.com/geekan/MetaGPT/issues/1174", "iss_label": "", "title": "New provider: Amazon Bedrock (AWS)", "body": "**Feature description**\r\nPlease include support for Amazon Bedrock models. These models can be from Amazon, Anthropic, AI21, Cohere, Mistral, or Meta Llama 2. \r\n\r\n**Your Feature**\r\n1. Create a new LLM provider under [metagpt/provider](https://github.com/geekan/MetaGPT/tree/db65554c4931d4a95e20331b770cf4f7e5202264/metagpt/provider) for Amazon Bedrock\r\n2. 
Include it in the list of available [LLMType](https://github.com/geekan/MetaGPT/blob/db65554c4931d4a95e20331b770cf4f7e5202264/metagpt/configs/llm_config.py#L17) values", "pr_html_url": "https://github.com/geekan/MetaGPT/pull/1231", "file_loc": {"base_commit": "a32e238801d0a8f3c1bd97b98d038b40977a8cc6", "files": [{"path": "config/puppeteer-config.json", "status": "modified", "Loc": {}}, {"path": "metagpt/configs/llm_config.py", "status": "modified", "Loc": {"('LLMType', None, 17)": {"add": [34]}, "('LLMConfig', None, 40)": {"add": [80], "mod": [77]}}}, {"path": "metagpt/provider/__init__.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [19, 32]}}}, {"path": "metagpt/utils/token_counter.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [212]}}}, {"path": "requirements.txt", "status": "modified", "Loc": {"(None, None, None)": {"add": [72]}}}, {"path": "tests/metagpt/provider/mock_llm_config.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [62]}}}, {"path": "tests/metagpt/provider/req_resp_const.py", "status": "modified", "Loc": {"(None, 'llm_general_chat_funcs_test', 174)": {"add": [185]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["metagpt/utils/token_counter.py", "metagpt/provider/__init__.py", "metagpt/configs/llm_config.py", "config/puppeteer-config.json", "tests/metagpt/provider/mock_llm_config.py", "tests/metagpt/provider/req_resp_const.py"], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "862cd05df4452592a99dd1a4fa10ce8cfb3766f7", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/37494", "iss_label": "Enhancement\nGroupby\nExtensionArray\nNA - MaskedArrays\nClosing Candidate", "title": "ENH: improve the resulting dtype for groupby operations on nullable dtypes", "body": "Follow-up on https://github.com/pandas-dev/pandas/pull/37433, and partly related to https://github.com/pandas-dev/pandas/issues/37493\r\n\r\nCurrently, after groupby operations we try to cast back to the original dtype when possible (at least in the case of extension arrays). But this is not always correct, and also not done consistently. 
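For orientation before the author's session below, here is a minimal sketch of the dtype expectation at stake; the toy frame is an assumption for illustration, not taken from the report:

```python
# Reductions on a nullable Int64 column should stay in the nullable dtype
# family: Int64 for sum, Float64 for mean/std; never a silent cast back to
# an integer dtype, and ideally no fallback to plain numpy dtypes.
import pandas as pd

df = pd.DataFrame({"A": ["x", "x", "y"], "B": pd.array([1, 2, pd.NA], dtype="Int64")})
print(df.groupby("A")["B"].sum().dtype)   # Int64
print(df.groupby("A")["B"].mean().dtype)  # should be Float64, not Int64
```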
Some examples, using the test case from the mentioned PR with a nullable Int64 column as input:\r\n\r\n```\r\nIn [1]: df = DataFrame(\r\n ...: {\r\n ...: \"A\": [\"A\", \"B\"] * 5,\r\n ...: \"B\": pd.array([1, 2, 3, 4, 5, 6, 7, 8, 9, pd.NA], dtype=\"Int64\"),\r\n ...: }\r\n ...: )\r\n\r\nIn [2]: df.groupby(\"A\")[\"B\"].sum()\r\nOut[2]: \r\nA\r\nA 25\r\nB 20\r\nName: B, dtype: Int64\r\n\r\nIn [3]: df.groupby(\"A\")[\"B\"].std()\r\nOut[3]: \r\nA\r\nA 3.162278\r\nB 2.581989\r\nName: B, dtype: float64\r\n\r\nIn [4]: df.groupby(\"A\")[\"B\"].mean()\r\nOut[4]: \r\nA\r\nA 5\r\nB 5\r\nName: B, dtype: Int64\r\n\r\nIn [5]: df.groupby(\"A\")[\"B\"].count()\r\nOut[5]: \r\nA\r\nA 5\r\nB 4\r\nName: B, dtype: int64\r\n```\r\n\r\nSo some observations:\r\n\r\n* For `sum()`, we correctly have Int64 for the result\r\n* For `std()`, we could use the nullable Float64 instead of float64 dtype\r\n* For `mean()`, we incorrectly cast back to Int64 dtype, as the result of mean should always be floating (in this case the casting just happened to work because the means were rounded numbers)\r\n* For `count()`, we did not create a nullable Int64 dtype for the result, while this could be done if the input is nullable", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/38291", "file_loc": {"base_commit": "862cd05df4452592a99dd1a4fa10ce8cfb3766f7", "files": [{"path": "pandas/core/dtypes/cast.py", "status": "modified", "Loc": {"(None, 'maybe_cast_result_dtype', 342)": {"mod": [360, 362, 363, 364, 365]}}}, {"path": "pandas/core/groupby/ops.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [47]}, "('BaseGrouper', '_ea_wrap_cython_operation', 493)": {"mod": [524]}}}, {"path": "pandas/tests/arrays/integer/test_arithmetic.py", "status": "modified", "Loc": {"(None, 'test_reduce_to_float', 261)": {"mod": [280]}}}, {"path": "pandas/tests/groupby/aggregate/test_cython.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [7]}, "(None, 'test_cython_agg_nullable_int', 297)": {"add": [314]}}}, {"path": "pandas/tests/groupby/test_function.py", "status": "modified", "Loc": {"(None, 'test_apply_to_nullable_integer_returns_float', 1091)": {"mod": [1096]}}}, {"path": "pandas/tests/resample/test_datetime_index.py", "status": "modified", "Loc": {"(None, 'test_resample_integerarray', 112)": {"mod": [127]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["pandas/core/dtypes/cast.py", "pandas/core/groupby/ops.py"], "doc": [], "test": ["pandas/tests/groupby/aggregate/test_cython.py", "pandas/tests/arrays/integer/test_arithmetic.py", "pandas/tests/resample/test_datetime_index.py", "pandas/tests/groupby/test_function.py"], "config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "eaf0a044fdc084ebeeb9bbfbcf42e6df2b1491bb", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/16730", "iss_label": "Bug\nBlocker\nmodule:decomposition", "title": "BUG: MLE for PCA mis-estimates rank", "body": "After #16224 it looks like this code no longer produces the correct result:\r\n```\r\nimport numpy as np\r\nfrom sklearn.decomposition import PCA\r\nn_samples, n_dim = 1000, 10\r\nX = np.random.RandomState(0).randn(n_samples, n_dim)\r\nX[:, -1] = np.mean(X[:, :-1], axis=-1) # true X dim is ndim - 1\r\npca_skl = PCA('mle', svd_solver='full')\r\npca_skl.fit(X)\r\nassert 
pca_skl.n_components_ == n_dim - 1\r\n```\r\nBefore #16224 this passed (`n_components_ == 9`) but after #16224 it gives 8. Not sure why this would happen given the singular value spectrum looks good:\r\n```\r\nimport matplotlib.pyplot as plt\r\ns = np.linalg.svdvals(X)\r\nplt.stem(s)\r\n```\r\n![Figure_1](https://user-images.githubusercontent.com/2365790/77180767-c4f62a00-6aa0-11ea-8dc8-99c6dc137a71.png)\r\n\r\nMaybe an off-by-one error somewhere?\r\n\r\ncc'ing @lschwetlick since it was your PR", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/16841", "file_loc": {"base_commit": "eaf0a044fdc084ebeeb9bbfbcf42e6df2b1491bb", "files": [{"path": "doc/whats_new/v0.23.rst", "status": "modified", "Loc": {"(None, None, None)": {"mod": [142, 143, 144, 145]}}}, {"path": "sklearn/decomposition/_pca.py", "status": "modified", "Loc": {"(None, '_assess_dimension', 31)": {"mod": [31, 32, 39, 42, 45, 46, 58, 59, 60, 62, 65, 66, 67, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 84, 90, 91, 92, 93, 94, 95, 96]}, "(None, '_infer_dimension', 106)": {"mod": [106, 107, 109, 111, 112, 113, 114]}, "('PCA', '_fit_full', 436)": {"mod": [475]}}}, {"path": "sklearn/decomposition/tests/test_pca.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [592]}, "(None, 'test_fit_mle_too_few_samples', 615)": {"add": [625], "mod": [617]}, "(None, 'test_n_components_mle', 291)": {"mod": [298]}, "(None, 'test_infer_dim_1', 326)": {"mod": [336]}, "(None, 'test_infer_dim_2', 340)": {"mod": [351]}, "(None, 'test_infer_dim_3', 354)": {"mod": [364]}, "(None, 'test_infer_dim_bad_spec', 573)": {"mod": [573, 574, 577, 578, 579]}, "(None, 'test_assess_dimension_error_rank_greater_than_features', 582)": {"mod": [582, 583, 584, 586, 587, 588, 589, 590, 591]}, "(None, 'test_assess_dimension_small_eigenvalues', 594)": {"mod": [594, 595, 596, 597, 598, 599, 600, 601, 602]}, "(None, 'test_infer_dim_mle', 605)": {"mod": [605, 606, 607, 608, 612]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["sklearn/decomposition/_pca.py"], "doc": ["doc/whats_new/v0.23.rst"], "test": ["sklearn/decomposition/tests/test_pca.py"], "config": [], "asset": []}}, {"organization": "pallets", "repo_name": "flask", "base_commit": "07c7d5730a2685ef2281cc635e289685e5c3d478", "iss_has_pr": 1, "iss_html_url": "https://github.com/pallets/flask/issues/2813", "iss_label": "", "title": "Allow flexible routing with SERVER_NAME config", "body": "### Expected Behavior\r\n\r\nDeployed a flask application which is reachable over multiple domains and ports:\r\n- external via load balancer: `client - Host: example.org -> LB -> flask app`\r\n- internal via DNS service discovery without load balancer: `client - Host: instance-1231.example.org -> flask app` \r\n\r\nIf the client connects directly (`Host: instance-1231.example.org`) the app should be able to return absolute and stable URLs like `http://example.org/path/to/my/view` as the URL (`http://instance-1231.example.org/path/to/my/view`) with the internal DNS name is ephemeral.\r\nTherefore I configured the `SERVER_NAME` config key and `url_for` generates the intended absolute URL by using `_external=True` within and without request context. 
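A minimal sketch of the configuration just described, assuming a hypothetical example.org host and view name rather than the reporter's actual app:

```python
# With SERVER_NAME set, url_for(_external=True) yields a stable absolute URL
# even outside a request context. All names here are illustrative.
from flask import Flask, url_for

app = Flask(__name__)
app.config["SERVER_NAME"] = "example.org"

@app.route("/path/to/my/view")
def my_view():
    return "ok"

with app.app_context():
    # Works without a request context because SERVER_NAME is configured.
    print(url_for("my_view", _external=True))  # http://example.org/path/to/my/view
```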
But the app should still be able to route requests coming in with `Host: instance-1231.example.org`.\r\n\r\n### Actual Behavior\r\n\r\nFlask creates the `werkzeug.routing.MapAdapter` with `server_name=app.config['SERVER_NAME']`, and therefore no view method will match incoming requests with `Host: instance-1231.example.org`.\r\n\r\n### Environment\r\n\r\n* Python version: 2.7.13 (I'm sorry)\r\n* Flask version: 1.0.2\r\n* Werkzeug version: 0.14.1\r\n\r\n### Applied workaround:\r\n\r\nOverride `Flask.create_url_adapter` and create the `MapAdapter` for the request context without the `server_name` parameter. Routing and URL generation then work fine.\r\n", "pr_html_url": "https://github.com/pallets/flask/pull/5634", "file_loc": {"base_commit": "07c7d5730a2685ef2281cc635e289685e5c3d478", "files": [{"path": "CHANGES.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [25]}}}, {"path": "docs/config.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [270], "mod": [263, 264, 266, 267]}}}, {"path": "src/flask/app.py", "status": "modified", "Loc": {"('Flask', 'create_url_adapter', 423)": {"add": [436], "mod": [428, 430, 431, 432, 439, 440, 441, 442, 443, 444, 445, 448, 449, 450, 452, 453]}}}, {"path": "tests/test_basic.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [6, 1485]}}}, {"path": "tests/test_blueprints.py", "status": "modified", "Loc": {"(None, 'test_nesting_subdomains', 953)": {"add": [970], "mod": [954, 963, 965, 967, 968, 969]}, "(None, 'test_child_and_parent_subdomain', 974)": {"add": [994], "mod": [975, 976, 978, 985, 987, 989, 990, 991, 992, 993, 997]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code\nDoc"}, "loctype": {"code": ["src/flask/app.py"], "doc": ["docs/config.rst", "CHANGES.rst"], "test": ["tests/test_blueprints.py", "tests/test_basic.py"], "config": [], "asset": []}}, {"organization": "ansible", "repo_name": "ansible", "base_commit": "0ffacedb3e41ec49df3606c0df1a1f0688868c32", "iss_has_pr": 1, "iss_html_url": "https://github.com/ansible/ansible/issues/20199", "iss_label": "affects_2.2\nmodule\nbug", "title": "Failure while using htpasswd module", "body": "_From @apolatynski on December 4, 2016 15:42_\n\n\r\n\r\n##### ISSUE TYPE\r\n - Bug Report\r\n\r\n##### COMPONENT NAME\r\nhtpasswd\r\n\r\n##### ANSIBLE VERSION\r\n```\r\nansible 2.2.0.0\r\n config file = /etc/ansible/ansible.cfg\r\n configured module search path = Default w/o overrides\r\n```\r\n\r\n##### CONFIGURATION\r\nDefault\r\n\r\n##### OS / ENVIRONMENT\r\nArchLinux\r\n\r\n##### SUMMARY\r\nhtpasswd module fails with message: `invalid version number '1.7.0.post20161124160753`\r\nLooks like it's related to the `python2-passlib` package (installed from the Arch Linux repository).\r\n\r\n##### STEPS TO REPRODUCE\r\nUsing a role with a task like the one below\r\n\r\n```\r\nhtpasswd:\r\n path=/etc/app/auth/htpasswd\r\n name=someuser\r\n crypt_scheme=bcrypt\r\n password={{ password }}\r\n owner=root\r\n mode=0640\r\n```\r\n\r\n##### EXPECTED RESULTS\r\nUser entry added to htpasswd file.\r\n\r\n##### ACTUAL RESULTS\r\nTask failure.\r\n\r\n\r\n```\r\nfatal: [host]: FAILED! 
=> {\r\n \"changed\": false,\r\n \"failed\": true,\r\n \"invocation\": {\r\n \"module_args\": {\r\n \"backup\": null,\r\n \"content\": null,\r\n \"create\": true,\r\n \"crypt_scheme\": \"bcrypt\",\r\n \"delimiter\": null,\r\n \"directory_mode\": null,\r\n \"follow\": false,\r\n \"force\": null,\r\n \"group\": null,\r\n \"mode\": \"0640\",\r\n \"name\": \"someuser\",\r\n \"owner\": \"root\",\r\n \"password\": \"VALUE_SPECIFIED_IN_NO_LOG_PARAMETER\",\r\n \"path\": \"/etc/app/auth/htpasswd\",\r\n \"regexp\": null,\r\n \"remote_src\": null,\r\n \"selevel\": null,\r\n \"serole\": null,\r\n \"setype\": null,\r\n \"seuser\": null,\r\n \"src\": null,\r\n \"state\": \"present\",\r\n \"unsafe_writes\": null\r\n },\r\n \"module_name\": \"htpasswd\"\r\n },\r\n \"msg\": \"invalid version number '1.7.0.post20161124160753'\"\r\n}\r\n```\r\n\n\n_Copied from original issue: ansible/ansible-modules-core#5816_", "pr_html_url": "https://github.com/ansible/ansible/pull/20202", "file_loc": {"base_commit": "0ffacedb3e41ec49df3606c0df1a1f0688868c32", "files": [{"path": "lib/ansible/modules/web_infrastructure/htpasswd.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [106]}, "(None, 'present', 126)": {"mod": [140, 151]}, "(None, 'absent', 174)": {"mod": [178]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["lib/ansible/modules/web_infrastructure/htpasswd.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "135dfa2c7ebc9284db940713c0dc6cbc19ca5fa4", "iss_has_pr": 1, "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/2237", "iss_label": "site-enhancement", "title": "[YouTube] Add the Channel Banner link to the info.json when downloading a channel's videos", "body": "### Checklist\r\n\r\n- [X] I'm reporting a site feature request\r\n- [X] I've verified that I'm running yt-dlp version **2021.12.27**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))\r\n- [X] I've checked that all provided URLs are alive and playable in a browser\r\n- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates\r\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\r\n- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required\r\n\r\n### Region\r\n\r\n_No response_\r\n\r\n### Example URLs\r\n\r\nhttps://www.youtube.com/c/jschlattLIVE\r\n\r\nhttps://yt3.ggpht.com/DEcH0YOk5KknRHoC-QerpZVFUsldfTTM0ZarVr55rarrTbywYBBCKru61973B3l2t2g0hqV9jg=w2120-fcrop64=1,00000000ffffffff-k-c0xffffffff-no-nd-rj\r\n\r\n### Description\r\n\r\nWhen using a YouTube channel as the link and `--write-info-json` is used, it should fetch the link for the channel banner.\r\n\r\nThe manual method to downloading a channel's banner is to right click View Page Source on the banner, search \"tvbanner\", and find the link for the banner. 
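A hedged sketch of automating that manual step; the regex and the embedded-JSON layout are assumptions about YouTube's markup, not yt-dlp's implementation, and may break without notice:

```python
# Fetch a channel page and pull the first banner thumbnail URL out of the
# embedded JSON, mirroring the manual "view source and search" method above.
import re
import urllib.request

def channel_banner_url(channel_url):
    req = urllib.request.Request(channel_url, headers={"User-Agent": "Mozilla/5.0"})
    html = urllib.request.urlopen(req).read().decode("utf-8", "replace")
    # Assumption: banner thumbnails sit under a "banner" key in the page JSON.
    match = re.search(r'"banner"\s*:\s*\{"thumbnails"\s*:\s*\[\{"url"\s*:\s*"([^"]+)"', html)
    return match.group(1) if match else None

print(channel_banner_url("https://www.youtube.com/c/jschlattLIVE"))
```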
If yt-dlp automated this process (in the same way it does the profile picture), that would be a great feature!\r\n\r\n### Verbose log\r\n\r\n```shell\r\nC:\\Users\\Ben\\Videos\\test>youtube-dl --flat-playlist --write-info-json --verbose https://www.youtube.com/channel/UCWZp4y1jqBuvLtiyxSs_ZBw\r\n[debug] Command-line config: ['--flat-playlist', '--write-info-json', '--verbose', 'https://www.youtube.com/channel/UCWZp4y1jqBuvLtiyxSs_ZBw']\r\n[debug] Encodings: locale cp1252, fs utf-8, out utf-8, err utf-8, pref cp1252\r\n[debug] yt-dlp version 2021.12.27 [6223f67] (win_exe)\r\n[debug] Python version 3.8.10 (CPython 64bit) - Windows-10-10.0.17763-SP0\r\n[debug] exe versions: ffmpeg git-2020-03-15-c467328, ffprobe git-2020-03-15-c467328\r\n[debug] Optional libraries: Cryptodome, mutagen, sqlite, websockets\r\n[debug] Proxy map: {}\r\n[debug] [youtube:tab] Extracting URL: https://www.youtube.com/channel/UCWZp4y1jqBuvLtiyxSs_ZBw\r\n[youtube:tab] UCWZp4y1jqBuvLtiyxSs_ZBw: Downloading webpage\r\nWARNING: [youtube:tab] A channel/user page was given. All the channel's videos will be downloaded. To download only the videos in the home page, add a \"/featured\" to the URL\r\n[debug] [youtube:tab] Final URL: https://www.youtube.com/channel/UCWZp4y1jqBuvLtiyxSs_ZBw/videos\r\n[download] Downloading playlist: Big guy - Videos\r\n[youtube:tab] UCWZp4y1jqBuvLtiyxSs_ZBw page 1: Downloading API JSON\r\n[youtube:tab] UCWZp4y1jqBuvLtiyxSs_ZBw page 2: Downloading API JSON\r\n[youtube:tab] UCWZp4y1jqBuvLtiyxSs_ZBw page 3: Downloading API JSON\r\n[youtube:tab] UCWZp4y1jqBuvLtiyxSs_ZBw page 4: Downloading API JSON\r\n[youtube:tab] UCWZp4y1jqBuvLtiyxSs_ZBw page 5: Downloading API JSON\r\n[info] Writing playlist metadata as JSON to: Big guy - Videos [UCWZp4y1jqBuvLtiyxSs_ZBw].info.json\r\n[youtube:tab] playlist Big guy - Videos: Downloading 154 videos\r\n[download] Downloading video 1 of 154\r\n[download] Downloading video 2 of 154\r\n[download] Downloading video 3 of 154\r\n[download] Downloading video 4 of 154\r\n[download] Downloading video 5 of 154\r\n[download] Downloading video 6 of 154\r\n[download] Downloading video 7 of 154\r\n[download] Downloading video 8 of 154\r\n[download] Downloading video 9 of 154\r\n[download] Downloading video 10 of 154\r\n[download] Downloading video 11 of 154\r\n[download] Downloading video 12 of 154\r\n[download] Downloading video 13 of 154\r\n[download] Downloading video 14 of 154\r\n[download] Downloading video 15 of 154\r\n[download] Downloading video 16 of 154\r\n[download] Downloading video 17 of 154\r\n[download] Downloading video 18 of 154\r\n[download] Downloading video 19 of 154\r\n[download] Downloading video 20 of 154\r\n[download] Downloading video 21 of 154\r\n[download] Downloading video 22 of 154\r\n[download] Downloading video 23 of 154\r\n[download] Downloading video 24 of 154\r\n[download] Downloading video 25 of 154\r\n[download] Downloading video 26 of 154\r\n[download] Downloading video 27 of 154\r\n[download] Downloading video 28 of 154\r\n[download] Downloading video 29 of 154\r\n[download] Downloading video 30 of 154\r\n[download] Downloading video 31 of 154\r\n[download] Downloading video 32 of 154\r\n[download] Downloading video 33 of 154\r\n[download] Downloading video 34 of 154\r\n[download] Downloading video 35 of 154\r\n[download] Downloading video 36 of 154\r\n[download] Downloading video 37 of 154\r\n[download] Downloading video 38 of 154\r\n[download] Downloading video 39 of 154\r\n[download] Downloading video 40 of 154\r\n[download] Downloading 
video 41 of 154\r\n[download] Downloading video 42 of 154\r\n[download] Downloading video 43 of 154\r\n[download] Downloading video 44 of 154\r\n[download] Downloading video 45 of 154\r\n[download] Downloading video 46 of 154\r\n[download] Downloading video 47 of 154\r\n[download] Downloading video 48 of 154\r\n[download] Downloading video 49 of 154\r\n[download] Downloading video 50 of 154\r\n[download] Downloading video 51 of 154\r\n[download] Downloading video 52 of 154\r\n[download] Downloading video 53 of 154\r\n[download] Downloading video 54 of 154\r\n[download] Downloading video 55 of 154\r\n[download] Downloading video 56 of 154\r\n[download] Downloading video 57 of 154\r\n[download] Downloading video 58 of 154\r\n[download] Downloading video 59 of 154\r\n[download] Downloading video 60 of 154\r\n[download] Downloading video 61 of 154\r\n[download] Downloading video 62 of 154\r\n[download] Downloading video 63 of 154\r\n[download] Downloading video 64 of 154\r\n[download] Downloading video 65 of 154\r\n[download] Downloading video 66 of 154\r\n[download] Downloading video 67 of 154\r\n[download] Downloading video 68 of 154\r\n[download] Downloading video 69 of 154\r\n[download] Downloading video 70 of 154\r\n[download] Downloading video 71 of 154\r\n[download] Downloading video 72 of 154\r\n[download] Downloading video 73 of 154\r\n[download] Downloading video 74 of 154\r\n[download] Downloading video 75 of 154\r\n[download] Downloading video 76 of 154\r\n[download] Downloading video 77 of 154\r\n[download] Downloading video 78 of 154\r\n[download] Downloading video 79 of 154\r\n[download] Downloading video 80 of 154\r\n[download] Downloading video 81 of 154\r\n[download] Downloading video 82 of 154\r\n[download] Downloading video 83 of 154\r\n[download] Downloading video 84 of 154\r\n[download] Downloading video 85 of 154\r\n[download] Downloading video 86 of 154\r\n[download] Downloading video 87 of 154\r\n[download] Downloading video 88 of 154\r\n[download] Downloading video 89 of 154\r\n[download] Downloading video 90 of 154\r\n[download] Downloading video 91 of 154\r\n[download] Downloading video 92 of 154\r\n[download] Downloading video 93 of 154\r\n[download] Downloading video 94 of 154\r\n[download] Downloading video 95 of 154\r\n[download] Downloading video 96 of 154\r\n[download] Downloading video 97 of 154\r\n[download] Downloading video 98 of 154\r\n[download] Downloading video 99 of 154\r\n[download] Downloading video 100 of 154\r\n[download] Downloading video 101 of 154\r\n[download] Downloading video 102 of 154\r\n[download] Downloading video 103 of 154\r\n[download] Downloading video 104 of 154\r\n[download] Downloading video 105 of 154\r\n[download] Downloading video 106 of 154\r\n[download] Downloading video 107 of 154\r\n[download] Downloading video 108 of 154\r\n[download] Downloading video 109 of 154\r\n[download] Downloading video 110 of 154\r\n[download] Downloading video 111 of 154\r\n[download] Downloading video 112 of 154\r\n[download] Downloading video 113 of 154\r\n[download] Downloading video 114 of 154\r\n[download] Downloading video 115 of 154\r\n[download] Downloading video 116 of 154\r\n[download] Downloading video 117 of 154\r\n[download] Downloading video 118 of 154\r\n[download] Downloading video 119 of 154\r\n[download] Downloading video 120 of 154\r\n[download] Downloading video 121 of 154\r\n[download] Downloading video 122 of 154\r\n[download] Downloading video 123 of 154\r\n[download] Downloading video 124 of 154\r\n[download] Downloading 
video 125 of 154\r\n[download] Downloading video 126 of 154\r\n[download] Downloading video 127 of 154\r\n[download] Downloading video 128 of 154\r\n[download] Downloading video 129 of 154\r\n[download] Downloading video 130 of 154\r\n[download] Downloading video 131 of 154\r\n[download] Downloading video 132 of 154\r\n[download] Downloading video 133 of 154\r\n[download] Downloading video 134 of 154\r\n[download] Downloading video 135 of 154\r\n[download] Downloading video 136 of 154\r\n[download] Downloading video 137 of 154\r\n[download] Downloading video 138 of 154\r\n[download] Downloading video 139 of 154\r\n[download] Downloading video 140 of 154\r\n[download] Downloading video 141 of 154\r\n[download] Downloading video 142 of 154\r\n[download] Downloading video 143 of 154\r\n[download] Downloading video 144 of 154\r\n[download] Downloading video 145 of 154\r\n[download] Downloading video 146 of 154\r\n[download] Downloading video 147 of 154\r\n[download] Downloading video 148 of 154\r\n[download] Downloading video 149 of 154\r\n[download] Downloading video 150 of 154\r\n[download] Downloading video 151 of 154\r\n[download] Downloading video 152 of 154\r\n[download] Downloading video 153 of 154\r\n[download] Downloading video 154 of 154\r\n[info] Writing updated playlist metadata as JSON to: Big guy - Videos [UCWZp4y1jqBuvLtiyxSs_ZBw].info.json\r\n[download] Finished downloading playlist: Big guy - Videos\r\n```\r\n", "pr_html_url": "https://github.com/yt-dlp/yt-dlp/pull/2400", "file_loc": {"base_commit": "135dfa2c7ebc9284db940713c0dc6cbc19ca5fa4", "files": [{"path": "yt_dlp/extractor/youtube.py", "status": "modified", "Loc": {"('YoutubeTabBaseInfoExtractor', '_extract_from_tabs', 3894)": {"mod": [3916, 3917, 3918, 3919, 3938]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["yt_dlp/extractor/youtube.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "a8968bfa696d51f73769c54f2630a9530488236a", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/46804", "iss_label": "Docs", "title": "DOC: building page for nested methods doesn't work", "body": "The following\r\n```\r\npython make.py --single pandas.Series.str.rsplit\r\n```\r\nfails to produce the docs:\r\n```\r\n(pandas-dev) marcogorelli@OVMG025 doc % python make.py clean && python make.py --single pandas.Series.str.rsplit\r\nRunning Sphinx v4.4.0\r\nloading translations [en]... done\r\nmaking output directory... done\r\n[autosummary] generating autosummary for: index.rst\r\n[autosummary] generating autosummary for: /Users/marcogorelli/pandas-dev/doc/source/reference/api/pandas.Series.str.rsplit.rst\r\nbuilding [mo]: targets for 0 po files that are out of date\r\nbuilding [html]: targets for 1 source files that are out of date\r\nupdating environment: [new config] 2 added, 0 changed, 0 removed\r\nreading sources... [100%] reference/api/pandas.Series.str.rsplit \r\nWARNING: autodoc: failed to import method 'str.rsplit' from module 'Series'; the following exception was raised:\r\nNo module named 'Series'\r\nlooking for now-outdated files... none found\r\npickling environment... done\r\nchecking consistency... done\r\npreparing documents... 
done\r\n/Users/marcogorelli/pandas-dev/doc/source/index.rst:44: WARNING: 'any' reference target not found: getting_started\r\n/Users/marcogorelli/pandas-dev/doc/source/index.rst:60: WARNING: 'any' reference target not found: user_guide\r\n/Users/marcogorelli/pandas-dev/doc/source/index.rst:77: WARNING: 'any' reference target not found: api\r\n/Users/marcogorelli/pandas-dev/doc/source/index.rst:94: WARNING: 'any' reference target not found: development\r\nwriting output... [100%] reference/api/pandas.Series.str.rsplit \r\nwaiting for workers...\r\ngenerating indices... genindex py-modindex done\r\nwriting additional pages... search done\r\ncopying images... [100%] _static/index_contribute.svg \r\ncopying static files... done\r\ncopying extra files... done\r\ndumping search index in English (code: en)... done\r\ndumping object inventory... done\r\nbuild succeeded, 5 warnings.\r\n```\r\n\r\nHowever, it works just fine to do\r\n```\r\npython make.py --single pandas.Series.value_counts\r\n```\r\n\r\nI haven't figured out how to address this, so opening an issue for now", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/46806", "file_loc": {"base_commit": "a8968bfa696d51f73769c54f2630a9530488236a", "files": [{"path": ".github/workflows/code-checks.yml", "status": "modified", "Loc": {"(None, None, None)": {"add": [82]}}}, {"path": ".github/workflows/docbuild-and-upload.yml", "status": "modified", "Loc": {}}, {"path": "ci/code_checks.sh", "status": "modified", "Loc": {"(None, None, None)": {"add": [14, 104], "mod": [16]}}}, {"path": "doc/source/index.rst.template", "status": "modified", "Loc": {"(None, None, None)": {"add": [28, 99, 105, 108]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": [], "doc": [".github/workflows/docbuild-and-upload.yml", "doc/source/index.rst.template"], "test": [], "config": [".github/workflows/code-checks.yml"], "asset": ["ci/code_checks.sh"]}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "e88c39225ef545123860c679822f1b567fe65c27", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/33428", "iss_label": "Docs\ngood first issue", "title": "DOC: Data links in Pandas API Reference are broken 404", "body": "#### Location of the documentation\r\n\r\nhttps://pandas.pydata.org/docs/reference/api/pandas.plotting.parallel_coordinates.html\r\n...probably many examples in other sections\r\n\r\n#### Documentation problem\r\n\r\nResults in 404 not found error\r\ndf = pd.read_csv('https://raw.github.com/pandas-dev/pandas/master'\r\n '/pandas/tests/data/csv/iris.csv')\r\n\r\n#### Suggested fix for documentation\r\n\r\nThe GitHub site should be \"raw.githubusercontent.com\" \r\n", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/33099", "file_loc": {"base_commit": "e88c39225ef545123860c679822f1b567fe65c27", "files": [{"path": "pandas/plotting/_misc.py", "status": "modified", "Loc": {"(None, 'parallel_coordinates', 311)": {"mod": [362]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["pandas/plotting/_misc.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "c1bed601e9b9a3f5fa8fb529cfa40df7a3a0b903", "iss_has_pr": 1, 
"iss_html_url": "https://github.com/ultralytics/yolov5/issues/4970", "iss_label": "question", "title": "Cannot load the model", "body": "I get an error when I run this code torch.hub.load('ultralytics/yolov5', 'custom', path='yolov5/path/last.pt', force_reload=True)\r\nIt was working until yesterday and now I receive an error \"raise ValueError(\"{!r} does not start with {!r}\"\r\nValueError: 'C:\\\\Users\\\\aaa\\\\.cache\\\\torch\\\\hub\\\\ultralytics_yolov5_master' does not start with 'C:\\\\Users\\\\aaa\\\\PycharmProjects\\\\project\\\\proejct1'\". I have removed the files inside the cache folder but it doesn't fix the error...\r\nAny suggestions will be appreciated. Thank you", "pr_html_url": "https://github.com/ultralytics/yolov5/pull/4974", "file_loc": {"base_commit": "c1bed601e9b9a3f5fa8fb529cfa40df7a3a0b903", "files": [{"path": "models/tf.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [23]}}}, {"path": "models/yolo.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [18]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["models/tf.py", "models/yolo.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "674fb96b33c07c680844f674fcdf0767b6e3c2f9", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/17200", "iss_label": "IO Data\nIO JSON", "title": "read_json(lines=True) broken for s3 urls in Python 3 (v0.20.3)", "body": "#### Code Sample, a copy-pastable example if possible\r\n\r\nUsing Python\r\n```python\r\nimport pandas as pd\r\ninputdf = pd.read_json(path_or_buf=\"s3://path/to/python-lines/file.json\", lines=True)\r\n```\r\n\r\nThe file is similar to:\r\n```\r\n{\"url\": \"blah\", \"other\": \"blah\"}\r\n{\"url\": \"blah\", \"other\": \"blah\"}\r\n{\"url\": \"blah\", \"other\": \"blah\"}\r\n```\r\n\r\n#### Problem description\r\n\r\nWhen attempting to read a python lines file into a DataFrame using the s3 protocol, the above code will error with:\r\n\r\n```\r\n2017-08-08 11:06:14,225 - image_rank_csv - ERROR - initial_value must be str or None, not bytes\r\nTraceback (most recent call last):\r\n File \"image_rank_csv.py\", line 62, in run\r\n inputdf = pd.read_json(path_or_buf=\"s3://path/to/python-lines/file.json\", lines=True)\r\n File \"...env/lib/python3.6/site-packages/pandas/io/json/json.py\", line 347, in read_json\r\n lines = list(StringIO(json.strip()))\r\nTypeError: initial_value must be str or None, not bytes\r\n```\r\n\r\nThis works fine if the file is local, e.g.:\r\n```python\r\nimport pandas as pd\r\ninputdf = pd.read_json(path_or_buf=\"/local/path/to/python-lines/file.json\", lines=True)\r\n```\r\n\r\n#### Expected Output\r\n\r\nExpect to successfully read the file and error above not to occur.\r\n\r\nMy current thinking is that when we get the file handle: https://github.com/pandas-dev/pandas/blob/v0.20.3/pandas/io/json/json.py#L333 , you delegate to `s3fs`, which documents that [it only operates in Binary mode](http://s3fs.readthedocs.io/en/latest/#limitations). Therefore when you `read()`: https://github.com/pandas-dev/pandas/blob/v0.20.3/pandas/io/json/json.py#L335, Therefore passing to `StringIO` will fail here: https://github.com/pandas-dev/pandas/blob/v0.20.3/pandas/io/json/json.py#L347 . 
Maybe it needs a different handler for `BytesIO`?\r\n\r\n\r\n#### Output of ``pd.show_versions()``\r\n\r\n
\r\n```\r\nINSTALLED VERSIONS\r\n------------------\r\ncommit: None\r\npython: 3.6.1.final.0\r\npython-bits: 64\r\nOS: Darwin\r\nOS-release: 16.6.0\r\nmachine: x86_64\r\nprocessor: i386\r\nbyteorder: little\r\nLC_ALL: None\r\nLANG: en_US.UTF-8\r\nLOCALE: en_US.UTF-8\r\n\r\npandas: 0.20.3\r\npytest: None\r\npip: 9.0.1\r\nsetuptools: 36.2.7\r\nCython: None\r\nnumpy: 1.12.0\r\nscipy: 0.19.1\r\nxarray: None\r\nIPython: None\r\nsphinx: None\r\npatsy: None\r\ndateutil: 2.6.0\r\npytz: 2017.2\r\nblosc: None\r\nbottleneck: None\r\ntables: None\r\nnumexpr: None\r\nfeather: None\r\nmatplotlib: None\r\nopenpyxl: None\r\nxlrd: None\r\nxlwt: None\r\nxlsxwriter: None\r\nlxml: None\r\nbs4: None\r\nhtml5lib: None\r\nsqlalchemy: None\r\npymysql: None\r\npsycopg2: 2.6.2 (dt dec pq3 ext lo64)\r\njinja2: None\r\ns3fs: 0.1.2\r\npandas_gbq: None\r\npandas_datareader: None\r\n```\r\n
", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/17201", "file_loc": {"base_commit": "674fb96b33c07c680844f674fcdf0767b6e3c2f9", "files": [{"path": "doc/source/whatsnew/v0.21.1.txt", "status": "modified", "Loc": {"(None, None, None)": {"add": [91]}}}, {"path": "pandas/io/json/json.py", "status": "modified", "Loc": {"('JsonReader', 'read', 456)": {"add": [460], "mod": [462]}, "(None, None, None)": {"mod": [8]}, "('Parser', '_try_convert_data', 595)": {"mod": [615, 631, 642, 654, 664]}, "('Parser', '_try_convert_to_date', 669)": {"mod": [683, 700]}}}, {"path": "pandas/tests/io/json/test_pandas.py", "status": "modified", "Loc": {"('TestPandasContainer', None, 38)": {"add": [1034]}}}, {"path": "pandas/tests/io/parser/test_network.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [7, 10, 18, 19, 20, 23, 24, 25, 26, 29, 30, 31, 32, 34, 35, 36, 37, 38, 40, 41, 42, 43, 44, 45, 47, 48, 49, 51, 52, 53, 55, 56, 58, 60]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["pandas/io/json/json.py"], "doc": ["doc/source/whatsnew/v0.21.1.txt"], "test": ["pandas/tests/io/parser/test_network.py", "pandas/tests/io/json/test_pandas.py"], "config": [], "asset": []}}, {"organization": "All-Hands-AI", "repo_name": "OpenHands", "base_commit": "1ddf398a81d23772fc9ac231a4e774af932f8360", "iss_has_pr": 1, "iss_html_url": "https://github.com/All-Hands-AI/OpenHands/issues/3031", "iss_label": "bug\nenhancement\nseverity:medium\ntracked", "title": "[Runtime] Mega-issue to track all issues related to bash Interactive terminal", "body": "This is a mega-issue tracker for the **Interactive terminal** issue peoples run into.\r\n\r\n- [ ] https://github.com/OpenDevin/OpenDevin/issues/2754\r\n- [ ] https://github.com/OpenDevin/OpenDevin/issues/3008\r\n- [ ] https://github.com/OpenDevin/OpenDevin/issues/2799\r\n- [ ] https://github.com/OpenDevin/OpenDevin/issues/892\r\n- [ ] https://github.com/OpenDevin/OpenDevin/issues/3030\r\n- [ ] https://github.com/OpenDevin/OpenDevin/issues/3176\r\n\r\nFeel free to expand this list if i missed any relevant issue!\r\n\r\n---\r\n# Cause\r\n\r\nThese are typically caused by the same reason: OpenDevin uses [`pexcept`](https://pexpect.readthedocs.io/en/stable/overview.html) to interact with Bash shells, however, the current parsing logic only looks for the *next* `PS1` prompt (e.g., something like `root@hostname:/folderABC $`).\r\n\r\nThis will keep looking for such a pattern until it timeout, causing the following things to break, as listed in the PR above:\r\n- Open a new interactive program (e.g., `python3`), where the new prompt changes to `>>`\r\n- Open a new text editor (e.g., `nano`, `vim`), where the display could be completely broken? (I'm not familiar with the protocol here, though)\r\n- Enter a new conda virtual environment: conda will prepend the env name (e.g., `(base)`) before the `PS1` prompt, causing the current `pexpect` parsing to break\r\n- When the agent is asked for password (e.g., with patterns like `Password:`)\r\n- Prompt like `(yes/no/[fingerprint])` requesting user confirmation.\r\n\r\n# Fixes\r\n\r\nWe plan to resolve them as much as I can once arch refactor https://github.com/OpenDevin/OpenDevin/issues/2404 is completed. But these are a non-exhaustive list of patterns we are trying to `pexcept` and we cannot list everything here:\r\n1. 
[ ] Try to cover common use cases of these prompts (e.g., the `[yes/no]` pattern, the conda environment pattern)\r\n2. [ ] Figure out a more general way (rather than writing rules) for agents to interact with these (e.g., we don't write every rule explicitly, but, for example, if we've been waiting for more than 5s and there's no new output from the terminal, it probably means it is waiting for user input and we should hand it over to the agent - subsequently, we may need to allow the agent to issue special keyboard actions like `ctrl+D`, `ctrl+C`, etc).\r\n3. [ ] Add something in the prompt that forbids the agent from going into interactive programs (e.g., interactive Python, vim, nano, etc)\r\n4. [ ] We need a way to detect if the agent accidentally goes into such an interactive program, and we need a way to force it out (we currently send `ctrl+C`, which might not work for a large variety of programs like `vim`).\r\n\r\n# If you want to help!\r\n\r\nTry to take a look at our existing bash parsing logic for the new architecture (under development!):\r\nhttps://github.com/OpenDevin/OpenDevin/blob/8bfa61f3e4beceb690562b4d105aa01dc50d58d7/opendevin/runtime/client/client.py#L62-L111\r\n\r\nYou can help to:\r\n1. Write test cases into `https://github.com/OpenDevin/OpenDevin/blob/main/tests/unit/test_runtime.py` to expose these interactive bash issues\r\n2. Try to fix them inside `client/client.py` (and/or the `ssh_box.py` - but we plan to deprecate them soon, so only supporting these on `EventStreamRuntime` should be sufficient!)", "pr_html_url": "https://github.com/All-Hands-AI/OpenHands/pull/4881", "file_loc": {"base_commit": "1ddf398a81d23772fc9ac231a4e774af932f8360", "files": [{"path": ".github/workflows/dummy-agent-test.yml", "status": "modified", "Loc": {"(None, None, None)": {"add": [38]}}}, {"path": ".github/workflows/eval-runner.yml", "status": "modified", "Loc": {"(None, None, None)": {"add": [31]}}}, {"path": ".github/workflows/py-unit-tests-mac.yml", "status": "modified", "Loc": {"(None, None, None)": {"add": [33]}}}, {"path": ".github/workflows/py-unit-tests.yml", "status": "modified", "Loc": {"(None, None, None)": {"add": [32]}}}, {"path": "docs/static/img/backend_architecture.puml", "status": "modified", "Loc": {"(None, None, None)": {"mod": [126]}}}, {"path": "evaluation/benchmarks/agent_bench/run_infer.py", "status": "modified", "Loc": {"(None, 'complete_runtime', 111)": {"mod": [140, 167, 168]}}}, {"path": "evaluation/benchmarks/aider_bench/run_infer.py", "status": "modified", "Loc": {"(None, 'complete_runtime', 123)": {"mod": [148, 149, 150, 151]}}}, {"path": "evaluation/benchmarks/biocoder/run_infer.py", "status": "modified", "Loc": {"(None, 'complete_runtime', 168)": {"mod": [202, 226, 227, 228]}}}, {"path": "evaluation/benchmarks/bird/README.md", "status": "modified", "Loc": {"(None, None, None)": {"mod": [130]}}}, {"path": "evaluation/benchmarks/bird/run_infer.py", "status": "modified", "Loc": {"(None, 'initialize_runtime', 249)": {"mod": [271, 272, 273, 274]}, "(None, 'complete_runtime', 283)": {"mod": [303, 304, 305, 306]}}}, {"path": "evaluation/benchmarks/humanevalfix/README.md", "status": "modified", "Loc": {"(None, None, None)": {"mod": [74, 101, 128]}}}, {"path": "evaluation/benchmarks/humanevalfix/run_infer.py", "status": "modified", "Loc": {"(None, 'complete_runtime', 151)": {"mod": [174, 175, 176]}}}, {"path": "evaluation/benchmarks/ml_bench/run_infer.py", "status": "modified", "Loc": {"(None, 'complete_runtime', 145)": {"mod": [166]}}}, {"path": 
"evaluation/benchmarks/scienceagentbench/run_infer.py", "status": "modified", "Loc": {"(None, 'initialize_runtime', 91)": {"mod": [124, 125, 126, 127]}, "(None, 'complete_runtime', 136)": {"mod": [157, 158, 159, 160]}}}, {"path": "evaluation/benchmarks/swe_bench/eval_infer.py", "status": "modified", "Loc": {"(None, 'process_instance', 96)": {"add": [100, 148], "mod": [180, 203, 204, 205, 227, 245]}}}, {"path": "evaluation/benchmarks/swe_bench/run_infer.py", "status": "modified", "Loc": {"(None, 'initialize_runtime', 156)": {"add": [284]}, "(None, None, None)": {"add": [537]}, "(None, 'complete_runtime', 290)": {"mod": [340, 341]}, "(None, 'process_instance', 369)": {"mod": [388]}}}, {"path": "evaluation/benchmarks/swe_bench/scripts/eval/compare_outputs.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [109], "mod": [107]}}}, {"path": "evaluation/benchmarks/swe_bench/scripts/eval/convert_oh_output_to_md.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [22, 86], "mod": [88]}, "(None, 'write_row_to_md_file', 53)": {"mod": [53, 61, 62]}}}, {"path": "evaluation/benchmarks/swe_bench/scripts/eval/update_output_with_eval.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [113]}}}, {"path": "evaluation/integration_tests/tests/t01_fix_simple_typo.py", "status": "modified", "Loc": {"('Test', 'verify_result', 25)": {"mod": [27]}}}, {"path": "evaluation/integration_tests/tests/t02_add_bash_hello.py", "status": "modified", "Loc": {"('Test', 'initialize_runtime', 12)": {"mod": [13]}, "('Test', 'verify_result', 18)": {"mod": [20, 29]}}}, {"path": "evaluation/integration_tests/tests/t03_jupyter_write_file.py", "status": "modified", "Loc": {"('Test', 'initialize_runtime', 12)": {"mod": [13]}, "('Test', 'verify_result', 18)": {"mod": [20, 29]}}}, {"path": "evaluation/integration_tests/tests/t04_git_staging.py", "status": "modified", "Loc": {"('Test', 'initialize_runtime', 12)": {"mod": [13, 18, 23, 24, 25, 30]}, "('Test', 'verify_result', 35)": {"mod": [37, 46]}}}, {"path": "evaluation/integration_tests/tests/t05_simple_browsing.py", "status": "modified", "Loc": {"('Test', 'initialize_runtime', 85)": {"mod": [86, 90, 104, 105]}}}, {"path": "frontend/src/services/actions.ts", "status": "modified", "Loc": {"(None, None, None)": {"add": [18]}, "(None, 'handleActionMessage', 60)": {"add": [64]}}}, {"path": "frontend/src/services/observations.ts", "status": "modified", "Loc": {"(None, 'handleObservationMessage', 14)": {"mod": [83, 84]}}}, {"path": "frontend/src/state/chat-slice.ts", "status": "modified", "Loc": {"(None, 'addAssistantAction', 88)": {"mod": [96]}, "(None, 'addAssistantObservation', 127)": {"mod": [147, 161]}}}, {"path": "frontend/src/types/core/observations.ts", "status": "modified", "Loc": {"(None, None, None)": {"add": [18], "mod": [16, 17]}}}, {"path": "frontend/src/types/message.tsx", "status": "modified", "Loc": {"(None, None, None)": {"mod": [30, 31]}}}, {"path": "openhands/agenthub/codeact_agent/codeact_agent.py", "status": "modified", "Loc": {"('CodeActAgent', 'get_observation_message', 238)": {"mod": [280]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["openhands/agenthub/codeact_agent/codeact_agent.py", "evaluation/benchmarks/agent_bench/run_infer.py", "evaluation/benchmarks/bird/run_infer.py", "frontend/src/services/observations.ts", 
"evaluation/benchmarks/humanevalfix/run_infer.py", "evaluation/benchmarks/scienceagentbench/run_infer.py", "evaluation/benchmarks/swe_bench/scripts/eval/update_output_with_eval.py", "evaluation/integration_tests/tests/t03_jupyter_write_file.py", "evaluation/benchmarks/swe_bench/run_infer.py", "evaluation/integration_tests/tests/t01_fix_simple_typo.py", "frontend/src/state/chat-slice.ts", "frontend/src/types/message.tsx", "evaluation/benchmarks/swe_bench/scripts/eval/compare_outputs.py", "evaluation/integration_tests/tests/t05_simple_browsing.py", "evaluation/benchmarks/ml_bench/run_infer.py", "evaluation/integration_tests/tests/t02_add_bash_hello.py", "evaluation/benchmarks/biocoder/run_infer.py", "frontend/src/types/core/observations.ts", "evaluation/integration_tests/tests/t04_git_staging.py", "evaluation/benchmarks/swe_bench/eval_infer.py", "evaluation/benchmarks/aider_bench/run_infer.py", "frontend/src/services/actions.ts", "evaluation/benchmarks/swe_bench/scripts/eval/convert_oh_output_to_md.py"], "doc": ["docs/static/img/backend_architecture.puml", "evaluation/benchmarks/humanevalfix/README.md", "evaluation/benchmarks/bird/README.md"], "test": [], "config": [".github/workflows/dummy-agent-test.yml", ".github/workflows/py-unit-tests-mac.yml", ".github/workflows/py-unit-tests.yml", ".github/workflows/eval-runner.yml"], "asset": []}}, {"organization": "All-Hands-AI", "repo_name": "OpenHands", "base_commit": "23a7057be29ed7de44b5705d5bb4c4d0bbdea089", "iss_has_pr": 1, "iss_html_url": "https://github.com/All-Hands-AI/OpenHands/issues/813", "iss_label": "bug", "title": "error seed': 42", "body": "Hi! I'm OpenDevin, an AI Software Engineer. What would you like to build with me today?\r\nuser avatar\r\nbana mali dan\u0131\u015fmanl\u0131k firmas\u0131 i\u00e7in web sitesi tasarla ve \u00e7al\u0131\u015ft\u0131r. Detayl\u0131 ve kapsaml\u0131 bir \u00e7al\u0131\u015fma olsun.\r\nassistant avatar\r\nStarting new task...\r\nassistant avatar\r\nOops. Something went wrong: gemini does not support parameters: {'seed': 42}. To drop these, set `litellm.drop_params=True` or for proxy: `litellm_settings: drop_params: true`\r\nassistant avatar\r\nOops. 
Something went wrong: Expecting CmdRunAction or AgentEchoAction for Action", "pr_html_url": "https://github.com/All-Hands-AI/OpenHands/pull/830", "file_loc": {"base_commit": "23a7057be29ed7de44b5705d5bb4c4d0bbdea089", "files": [{"path": "agenthub/codeact_agent/codeact_agent.py", "status": "modified", "Loc": {"('CodeActAgent', 'step', 83)": {"mod": [126, 127]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["agenthub/codeact_agent/codeact_agent.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "keras-team", "repo_name": "keras", "base_commit": "818c9fadd9cb1748f2b5545e8ef5f141526ec14e", "iss_has_pr": 1, "iss_html_url": "https://github.com/keras-team/keras/issues/19281", "iss_label": "type:feature", "title": "Scatter update variable in TF optimizer", "body": "In TensorFlow there is a cool (fast) variable update operation - scatter_update (like \"assign\" for dense variables).\r\nIt would be cool if you overrode the assign operation for such cases (I think it should look like https://github.com/keras-team/keras/blob/master/keras/backend/tensorflow/optimizer.py#L45 )\r\n\r\nP.S.\r\nFound such a case during migration of a Keras v2 custom optimizer.", "pr_html_url": "https://github.com/keras-team/keras/pull/19313", "file_loc": {"base_commit": "818c9fadd9cb1748f2b5545e8ef5f141526ec14e", "files": [{"path": "keras/backend/tensorflow/optimizer.py", "status": "modified", "Loc": {"('TFOptimizer', None, 8)": {"add": [44]}}}, {"path": "keras/optimizers/optimizer_sparse_test.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [10, 99]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["keras/backend/tensorflow/optimizer.py", "keras/optimizers/optimizer_sparse_test.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "d558bce8e9d5d4adfb0ab587be20b8a231dd1eea", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/39636", "iss_label": "Regression\nApply", "title": "BUG: ValueError on \".transform\" method applied to an empty DataFrame", "body": "- [X] I have checked that this issue has not already been reported.\r\n\r\n- [X] I have confirmed this bug exists on the latest version of pandas.\r\n\r\n- [ ] (optional) I have confirmed this bug exists on the master branch of pandas.\r\n\r\n---\r\n\r\n#### Code Sample, a copy-pastable example\r\n\r\nOutput on version 1.1.5:\r\n```python\r\nIn [5]: import pandas as pd\r\n ...: df = pd.DataFrame([], columns=[\"id\", \"field\"])\r\n ...: df[\"id\"].transform(lambda x: x + 10)\r\nOut[5]: Series([], Name: id, dtype: object)\r\n```\r\n\r\nOutput on version 1.2.x:\r\n```python\r\nIn [4]: import pandas as pd\r\n ...: df = pd.DataFrame([], columns=[\"id\", \"field\"])\r\n ...: df[\"id\"].transform(lambda x: x + 10)\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n in \r\n----> 1 df[\"id\"].transform(lambda x: x + 10)\r\n\r\n~/.pyenv/versions/3.9.1/envs/odds-data-3.9.1/lib/python3.9/site-packages/pandas/core/series.py in transform(self, func, axis, *args, **kwargs)\r\n 3975 self, func: AggFuncType, axis: Axis = 0, *args, **kwargs\r\n 3976 ) -> 
FrameOrSeriesUnion:\r\n-> 3977 return transform(self, func, axis, *args, **kwargs)\r\n 3978 \r\n 3979 def apply(self, func, convert_dtype=True, args=(), **kwds):\r\n\r\n~/.pyenv/versions/3.9.1/envs/odds-data-3.9.1/lib/python3.9/site-packages/pandas/core/aggregation.py in transform(obj, func, axis, *args, **kwargs)\r\n 458 # when the dtype is not appropriate\r\n 459 if isinstance(result, (ABCSeries, ABCDataFrame)) and result.empty:\r\n--> 460 raise ValueError(\"Transform function failed\")\r\n 461 if not isinstance(result, (ABCSeries, ABCDataFrame)) or not result.index.equals(\r\n 462 obj.index\r\n\r\nValueError: Transform function failed\r\n```\r\n\r\n#### Problem description\r\n\r\nApplying `.transform` on an empty DataFrame raises a `ValueError` on version 1.2.x. This is a change from the behavior of version 1.1.5, which returns the same empty DataFrame (as `.apply` still does).\r\n\r\nThe change that added this error is apparently related to this commit https://github.com/pandas-dev/pandas/pull/35964/commits/7b6ab94720024d6696b19867f5f8f59f79587ff0 \r\n\r\n#### Expected Output\r\n\r\n```python\r\nIn [5]: import pandas as pd\r\n ...: df = pd.DataFrame([], columns=[\"id\", \"field\"])\r\n ...: df[\"id\"].transform(lambda x: x + 10)\r\nOut[5]: Series([], Name: id, dtype: object)\r\n```\r\n#### Output of ``pd.show_versions()``\r\n\r\n
\r\n\r\nINSTALLED VERSIONS\r\n------------------\r\ncommit : 9d598a5e1eee26df95b3910e3f2934890d062caa\r\npython : 3.9.1.final.0\r\npython-bits : 64\r\nOS : Linux\r\nOS-release : 5.4.0-65-generic\r\nVersion : #73-Ubuntu SMP Mon Jan 18 17:25:17 UTC 2021\r\nmachine : x86_64\r\nprocessor : x86_64\r\nbyteorder : little\r\nLC_ALL : None\r\nLANG : en_US.UTF-8\r\nLOCALE : en_US.UTF-8\r\n\r\npandas : 1.2.1\r\nnumpy : 1.20.0\r\npytz : 2021.1\r\ndateutil : 2.8.1\r\npip : 20.2.3\r\nsetuptools : 49.2.1\r\nCython : None\r\npytest : 6.2.2\r\nhypothesis : None\r\nsphinx : None\r\nblosc : None\r\nfeather : None\r\nxlsxwriter : None\r\nlxml.etree : 4.6.2\r\nhtml5lib : None\r\npymysql : None\r\npsycopg2 : None\r\njinja2 : None\r\nIPython : 7.20.0\r\npandas_datareader: None\r\nbs4 : None\r\nbottleneck : None\r\nfsspec : None\r\nfastparquet : None\r\ngcsfs : None\r\nmatplotlib : None\r\nnumexpr : None\r\nodfpy : None\r\nopenpyxl : None\r\npandas_gbq : None\r\npyarrow : None\r\npyxlsb : None\r\ns3fs : None\r\nscipy : 1.6.0\r\nsqlalchemy : 1.3.23\r\ntables : None\r\ntabulate : None\r\nxarray : None\r\nxlrd : None\r\nxlwt : None\r\nnumba : None\r\n\r\n
\r\n", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/39639", "file_loc": {"base_commit": "d558bce8e9d5d4adfb0ab587be20b8a231dd1eea", "files": [{"path": "doc/source/whatsnew/v1.2.2.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [23]}}}, {"path": "pandas/core/aggregation.py", "status": "modified", "Loc": {"(None, 'transform', 404)": {"mod": [460]}}}, {"path": "pandas/tests/apply/test_frame_transform.py", "status": "modified", "Loc": {"(None, 'test_transform_mixed_column_name_dtypes', 271)": {"add": [276]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code\nDoc"}, "loctype": {"code": ["pandas/core/aggregation.py"], "doc": ["doc/source/whatsnew/v1.2.2.rst"], "test": ["pandas/tests/apply/test_frame_transform.py"], "config": [], "asset": []}}, {"organization": "fastapi", "repo_name": "fastapi", "base_commit": "92c825be6a7362099400c9c3fe8b01ea13add3dc", "iss_has_pr": 1, "iss_html_url": "https://github.com/fastapi/fastapi/issues/19", "iss_label": "question\nanswered\nreviewed\nquestion-migrate", "title": "accessing the request object", "body": "In starlette you can access request object in function decorated with the route decorator. \n\nit seems very handy to be able to access middlewares etc, \nis there a way in fastapi to do that using the provided get/post/options.... decorators? \nsame question for the ApiRouter. \n\n```\n@app.route(\"/notes\", methods=[\"GET\"])\nasync def list_notes(request):\n query = notes.select()\n results = await request.database.fetchall(query)\n```\n\n ", "pr_html_url": "https://github.com/fastapi/fastapi/pull/25", "file_loc": {"base_commit": "92c825be6a7362099400c9c3fe8b01ea13add3dc", "files": [{"path": "docs/tutorial/extra-starlette.md", "status": "removed", "Loc": {}}, {"path": "mkdocs.yml", "status": "modified", "Loc": {"(None, None, None)": {"add": [56], "mod": [61]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": [], "doc": ["mkdocs.yml", "docs/tutorial/extra-starlette.md"], "test": [], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "9572a2e00ddadb9fc7e2125c3e723b8a3b54be05", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/33238", "iss_label": "", "title": "CI/COMPAT: Linux py37_np_dev pipeline timeouts", "body": "#### Problem description\r\n\r\nLinux py37_np_dev pipeline appears to timeout for everyone after 60 minutes.\r\nThere are a couple hundred thousand errors like this:\r\n```\r\nException ignored in: 'pandas.io.sas._sas.Parser.process_byte_array_with_data'\r\nDeprecationWarning: tostring() is deprecated. Use tobytes() instead.\r\nDeprecationWarning: tostring() is deprecated. 
Use tobytes() instead.\r\n```\r\nHere is a [link](https://dev.azure.com/pandas-dev/pandas/_build/results?buildId=32212&view=logs&j=3a03f79d-0b41-5610-1aa4-b4a014d0bc70&t=4d05ed0e-1ed3-5bff-dd63-1e957f2766a9&l=792078) to it failing for me.", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/33241", "file_loc": {"base_commit": "9572a2e00ddadb9fc7e2125c3e723b8a3b54be05", "files": [{"path": "pandas/_libs/writers.pyx", "status": "modified", "Loc": {"(None, None, None)": {"mod": [115]}}}, {"path": "pandas/io/sas/sas.pyx", "status": "modified", "Loc": {"(None, None, None)": {"mod": [434]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["pandas/io/sas/sas.pyx", "pandas/_libs/writers.pyx"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "2086ff4065a43fa40d909f81e62623e265df5759", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/2390", "iss_label": "bug", "title": "Sitemap spider not robust against wrong sitemap URLs in robots.txt", "body": "[The \"specs\"](http://www.sitemaps.org/protocol.html#submit_robots) do say that the URL should be a \"full URL\":\r\n\r\n> You can specify the location of the Sitemap using a robots.txt file. To do this, simply add the following line including the full URL to the sitemap:\r\n> `Sitemap: http://www.example.com/sitemap.xml`\r\n\r\nBut some robots.txt use relative ones.\r\n\r\nExample: http://www.asos.com/robots.txt\r\n\r\n```\r\nUser-agent: *\r\nSitemap: /sitemap.ashx\r\nSitemap: http://www.asos.com/sitemap.xml\r\nDisallow: /basket/\r\n(...)\r\n```\r\n\r\nSpider:\r\n```\r\nfrom scrapy.spiders import SitemapSpider\r\n\r\n\r\nclass TestSpider(SitemapSpider):\r\n name = \"test\"\r\n sitemap_urls = [\r\n 'http://www.asos.com/robots.txt',\r\n ]\r\n\r\n def parse(self, response):\r\n self.logger.info('parsing %r' % response.url)\r\n```\r\nLogs:\r\n\r\n```\r\n$ scrapy runspider spider.py\r\nLinux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.90 Safari/537.36'\r\n2016-11-09 17:46:19 [scrapy] INFO: Scrapy 1.2.1 started (bot: scrapybot)\r\n(...)\r\n2016-11-09 17:46:19 [scrapy] DEBUG: Crawled (200) (referer: None)\r\n2016-11-09 17:46:19 [scrapy] ERROR: Spider error processing (referer: None)\r\nTraceback (most recent call last):\r\n File \"/home/paul/.virtualenvs/scrapy12/local/lib/python2.7/site-packages/scrapy/utils/defer.py\", line 102, in iter_errback\r\n yield next(it)\r\n File \"/home/paul/.virtualenvs/scrapy12/local/lib/python2.7/site-packages/scrapy/spidermiddlewares/offsite.py\", line 29, in process_spider_output\r\n for x in result:\r\n File \"/home/paul/.virtualenvs/scrapy12/local/lib/python2.7/site-packages/scrapy/spidermiddlewares/referer.py\", line 22, in \r\n return (_set_referer(r) for r in result or ())\r\n File \"/home/paul/.virtualenvs/scrapy12/local/lib/python2.7/site-packages/scrapy/spidermiddlewares/urllength.py\", line 37, in \r\n return (r for r in result or () if _filter(r))\r\n File \"/home/paul/.virtualenvs/scrapy12/local/lib/python2.7/site-packages/scrapy/spidermiddlewares/depth.py\", line 58, in \r\n return (r for r in result or () if _filter(r))\r\n File \"/home/paul/.virtualenvs/scrapy12/local/lib/python2.7/site-packages/scrapy/spiders/sitemap.py\", line 36, in _parse_sitemap\r\n yield Request(url, callback=self._parse_sitemap)\r\n File 
\"/home/paul/.virtualenvs/scrapy12/local/lib/python2.7/site-packages/scrapy/http/request/__init__.py\", line 25, in __init__\r\n self._set_url(url)\r\n File \"/home/paul/.virtualenvs/scrapy12/local/lib/python2.7/site-packages/scrapy/http/request/__init__.py\", line 57, in _set_url\r\n raise ValueError('Missing scheme in request url: %s' % self._url)\r\nValueError: Missing scheme in request url: /sitemap.ashx\r\n2016-11-09 17:46:19 [scrapy] INFO: Closing spider (finished)\r\n2016-11-09 17:46:19 [scrapy] INFO: Dumping Scrapy stats:\r\n{'downloader/request_bytes': 291,\r\n 'downloader/request_count': 1,\r\n 'downloader/request_method_count/GET': 1,\r\n 'downloader/response_bytes': 1857,\r\n 'downloader/response_count': 1,\r\n 'downloader/response_status_count/200': 1,\r\n 'finish_reason': 'finished',\r\n 'finish_time': datetime.datetime(2016, 11, 9, 16, 46, 19, 332383),\r\n 'log_count/DEBUG': 2,\r\n 'log_count/ERROR': 1,\r\n 'log_count/INFO': 7,\r\n 'response_received_count': 1,\r\n 'scheduler/dequeued': 1,\r\n 'scheduler/dequeued/memory': 1,\r\n 'scheduler/enqueued': 1,\r\n 'scheduler/enqueued/memory': 1,\r\n 'spider_exceptions/ValueError': 1,\r\n 'start_time': datetime.datetime(2016, 11, 9, 16, 46, 19, 71714)}\r\n2016-11-09 17:46:19 [scrapy] INFO: Spider closed (finished)\r\n```", "pr_html_url": "https://github.com/scrapy/scrapy/pull/2395", "file_loc": {"base_commit": "2086ff4065a43fa40d909f81e62623e265df5759", "files": [{"path": "scrapy/spiders/sitemap.py", "status": "modified", "Loc": {"('SitemapSpider', '_parse_sitemap', 33)": {"mod": [35]}}}, {"path": "scrapy/utils/sitemap.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [7]}, "(None, 'sitemap_urls_from_robots', 37)": {"mod": [37, 43]}}}, {"path": "tests/test_spider.py", "status": "modified", "Loc": {"('SitemapSpiderTest', 'test_get_sitemap_urls_from_robotstxt', 331)": {"add": [334], "mod": [341]}}}, {"path": "tests/test_utils_sitemap.py", "status": "modified", "Loc": {"('SitemapTest', 'test_sitemap_urls_from_robots', 110)": {"add": [121], "mod": [127, 128]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["scrapy/spiders/sitemap.py", "scrapy/utils/sitemap.py"], "doc": [], "test": ["tests/test_spider.py", "tests/test_utils_sitemap.py"], "config": [], "asset": []}}, {"organization": "ageitgey", "repo_name": "face_recognition", "base_commit": "f21631401119e4af2e919dd662c3817b2c480c75", "iss_has_pr": 1, "iss_html_url": "https://github.com/ageitgey/face_recognition/issues/149", "iss_label": "", "title": "Tolerance factor not working from cli", "body": "* face_recognition version:\r\n* Python version: 3.5\r\n* Operating System: Ubuntu 16\r\n\r\n### Description\r\n\r\nHi! I tried to set the tolerance factor in the cli but it doesn't work....It says: \"Error: no such option: --tolerance\". I am using the preconfigured VM available on Medium Website. 
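The report above asks for a `--tolerance` flag on the `face_recognition` command, which is built on Click. A minimal sketch of wiring such an option through to `compare_faces` follows; the single-image argument handling is illustrative, not the package's exact CLI:

```python
# Hedged sketch: adding a --tolerance option to a Click-based CLI and
# passing it through to face_recognition.compare_faces. Argument
# handling is simplified to single image files for illustration.
import click
import face_recognition


@click.command()
@click.argument('known_image_path')
@click.argument('unknown_image_path')
@click.option('--tolerance', default=0.6, show_default=True,
              help='Lower values make face comparison stricter.')
def main(known_image_path, unknown_image_path, tolerance):
    known = face_recognition.load_image_file(known_image_path)
    unknown = face_recognition.load_image_file(unknown_image_path)
    known_encoding = face_recognition.face_encodings(known)[0]
    # compare_faces already accepts a tolerance kwarg; the CLI value
    # just needs to be threaded through to it.
    for encoding in face_recognition.face_encodings(unknown):
        match = face_recognition.compare_faces(
            [known_encoding], encoding, tolerance=tolerance)
        click.echo(f'match: {match[0]}')


if __name__ == '__main__':
    main()
```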
\r\n\r\n### What I Did\r\n\r\n```\r\n\r\nface_recognition --tolerance 0.5 ./knwown ./unkwnown\r\n```\r\nThanks!", "pr_html_url": "https://github.com/ageitgey/face_recognition/pull/137", "file_loc": {"base_commit": "f21631401119e4af2e919dd662c3817b2c480c75", "files": [{"path": "README.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [132]}}}, {"path": "face_recognition/cli.py", "status": "modified", "Loc": {"(None, 'test_image', 35)": {"mod": [35, 48]}, "(None, 'process_images_in_process_pool', 60)": {"mod": [60, 72]}, "(None, 'main', 81)": {"mod": [81, 91, 93]}}}, {"path": "setup.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [28]}}}, {"path": "tests/test_face_recognition.py", "status": "modified", "Loc": {"('Test_face_recognition', 'test_command_line_interface_options', 185)": {"mod": [186]}, "('Test_face_recognition', 'test_command_line_interface', 192)": {"mod": [198, 200, 201]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["face_recognition/cli.py", "setup.py"], "doc": ["README.md"], "test": ["tests/test_face_recognition.py"], "config": [], "asset": []}}, {"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "9ab90d8b608170fe57d893c2150eda3bc11a8b06", "iss_has_pr": 1, "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/2435", "iss_label": "bug", "title": "Failed to load embedding model: all-mpnet-base-v2 While Running Textgen in Colab Notebook", "body": "### Describe the bug\r\n\r\nI have used this command instead of using old Cuda in my ipynb \r\n\r\n`!git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa`\r\n\r\nNow, I ran the server using following code - \r\n\r\n`!python server.py --extensions openai --model guanaco-7B-GPTQ --model_type LLaMa --api --public-api --share --wbits 4 --groupsize 128`\r\n\r\nI am getting below error - \r\n\r\n```\r\nWARNING:The gradio \"share link\" feature uses a proprietary executable to create a reverse tunnel. Use it with care.\r\n2023-05-30 11:21:05.243240: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT\r\nbin /usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cuda118.so\r\nINFO:Loading guanaco-7B-GPTQ...\r\nINFO:Found the following quantized model: models/guanaco-7B-GPTQ/Guanaco-7B-GPTQ-4bit-128g.no-act-order.safetensors\r\nINFO:Loaded the model in 14.96 seconds.\r\n\r\nINFO:Loading the extension \"openai\"...\r\n\r\nFailed to load embedding model: all-mpnet-base-v2\r\n```\r\n\r\n### Is there an existing issue for this?\r\n\r\n- [X] I have searched the existing issues\r\n\r\n### Reproduction\r\n\r\nRun Colab.\r\n\r\nUse this notebook. [Colab](https://colab.research.google.com/drive/1wURKtZgM_SWhjy-NlHNVjHl-SKT5AwtF?usp=sharing)\r\n\r\nOpenai Extension not working as intended\r\n\r\n### Screenshot\r\n\r\n_No response_\r\n\r\n### Logs\r\n\r\n```shell\r\nWARNING:The gradio \"share link\" feature uses a proprietary executable to create a reverse tunnel. 
Use it with care.\r\n2023-05-30 11:21:05.243240: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT\r\nbin /usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cuda118.so\r\nINFO:Loading guanaco-7B-GPTQ...\r\nINFO:Found the following quantized model: models/guanaco-7B-GPTQ/Guanaco-7B-GPTQ-4bit-128g.no-act-order.safetensors\r\nINFO:Loaded the model in 14.96 seconds.\r\n\r\nINFO:Loading the extension \"openai\"...\r\n\r\nFailed to load embedding model: all-mpnet-base-v2\r\n```\r\n\r\n\r\n### System Info\r\n\r\n```shell\r\nGoogle COlab Notebook with T4 GPU\r\n```\r\n", "pr_html_url": "https://github.com/oobabooga/text-generation-webui/pull/2443", "file_loc": {"base_commit": "9ab90d8b608170fe57d893c2150eda3bc11a8b06", "files": [{"path": "extensions/openai/script.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [20]}, "('Handler', 'do_POST', 159)": {"mod": [197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 234, 235, 236, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563, 564, 565, 566, 567, 568, 569, 570, 571, 572, 573, 574, 575, 576, 577, 578, 579, 580, 581, 582, 583]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["extensions/openai/script.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "Textualize", "repo_name": "rich", "base_commit": "ef1b9b91ccff680b7f931d75fd92c3caa6fcd622", "iss_has_pr": 1, "iss_html_url": "https://github.com/Textualize/rich/issues/2083", "iss_label": "Needs triage", "title": "[BUG] typing: Progress in Group isn't happy", "body": "**Describe the bug**\r\n\r\nRunning mypy on the following code:\r\n\r\n```python\r\nfrom rich.console import Group\r\nfrom rich.progress import Progress\r\n\r\nouter_progress = Progress()\r\ninner_progress = Progress()\r\nlive_group = Group(outer_progress, inner_progress)\r\n```\r\n\r\nProduces:\r\n\r\n\r\n```console\r\n$ mypy --strict tmp.py\r\ntmp.py:6: error: Argument 1 to \"Group\" has incompatible type \"Progress\"; expected \"Union[ConsoleRenderable, RichCast, str]\"\r\ntmp.py:6: note: Following member(s) of \"Progress\" have conflicts:\r\ntmp.py:6: note: Expected:\r\ntmp.py:6: note: def __rich__(self) -> Union[ConsoleRenderable, str]\r\ntmp.py:6: note: Got:\r\ntmp.py:6: note: def __rich__(self) -> Union[ConsoleRenderable, RichCast, str]\r\ntmp.py:6: error: Argument 2 to \"Group\" has incompatible type \"Progress\"; expected \"Union[ConsoleRenderable, RichCast, str]\"\r\ntmp.py:6: note: Expected:\r\ntmp.py:6: note: def __rich__(self) -> Union[ConsoleRenderable, str]\r\ntmp.py:6: note: Got:\r\ntmp.py:6: note: def __rich__(self) -> Union[ConsoleRenderable, RichCast, str]\r\nFound 2 errors in 1 file (checked 1 source file)\r\n```\r\n\r\nI think `RichCast` should also be in the Protocol, that is, `__rich__` is allowed to return an object with `__rich__`, ~~or it should not be in `__rich__`, that is, `__rich__(self) -> Union[ConsoleRenderable, str]` should be used for all `__rich__` methods. Which is correct depends on runtime; can a `__rich__` return a `__rich__` which can return a `__rich__`, etc?~~. Ahah, I see `CHANGELOG.md:167:- Allowed `__rich__` to work recursively`, so it's the former.\r\n\r\nI'm preparing a PR.\r\n\r\n**Platform**\r\n
\r\nClick to expand\r\n\r\nWhat platform (Win/Linux/Mac) are you running on? What terminal software are you using?\r\n\r\nI may ask you to copy and paste the output of the following commands. It may save some time if you do it now.\r\n\r\nIf you're using Rich in a terminal:\r\n\r\n```\r\nConsole report (condensed from the boxed output): color_system='truecolor', encoding='utf-8', size=298x68, is_terminal=True, is_interactive=True, is_jupyter=False, legacy_windows=False\r\nWindowsConsoleFeatures(vt=False, truecolor=False)\r\nEnvironment variables: TERM='xterm-256color', COLORTERM='truecolor', TERM_PROGRAM='iTerm.app'\r\nplatform=\"Darwin\"\r\nrich==11.2.0\r\n```\r\n\r\n(Same issue after upgrading to Rich 12)\r\n\r\n
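The change the reporter describes is small: the `RichCast` protocol's `__rich__` should be allowed to return another `RichCast`, matching the recursive behavior noted in the changelog. A self-contained sketch of the widened annotation, with `ConsoleRenderable` stubbed out so the snippet type-checks on its own:

```python
# Hedged sketch of the widened RichCast protocol. ConsoleRenderable is
# a minimal stub here; in rich itself these types live in rich/console.py.
from typing import Protocol, Union, runtime_checkable


class ConsoleRenderable(Protocol):
    def __rich_console__(self, console: object, options: object) -> object:
        ...


@runtime_checkable
class RichCast(Protocol):
    """An object that may be cast to a console renderable."""

    def __rich__(self) -> Union[ConsoleRenderable, "RichCast", str]:
        ...


class Progress:
    # With the recursive annotation, returning an object that itself
    # defines __rich__ (another RichCast) now satisfies the protocol,
    # so mypy accepts passing Progress instances to Group.
    def __rich__(self) -> Union[ConsoleRenderable, "RichCast", str]:
        return "progress table"
```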
\r\n", "pr_html_url": "https://github.com/Textualize/rich/pull/2089", "file_loc": {"base_commit": "ef1b9b91ccff680b7f931d75fd92c3caa6fcd622", "files": [{"path": "rich/console.py", "status": "modified", "Loc": {"('RichCast', None, 265)": {"mod": [268]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["rich/console.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "nvbn", "repo_name": "thefuck", "base_commit": "3da26192cba7dbaa3109fc0454e658ec417aaf5f", "iss_has_pr": 1, "iss_html_url": "https://github.com/nvbn/thefuck/issues/89", "iss_label": "", "title": "feature request: replace history with corrected command.", "body": "It would be a nice feature to correct the command and the history.\nI would also like an option to not add {fuck,thefuck} to the history.\n", "pr_html_url": "https://github.com/nvbn/thefuck/pull/384", "file_loc": {"base_commit": "3da26192cba7dbaa3109fc0454e658ec417aaf5f", "files": [{"path": "thefuck/shells.py", "status": "modified", "Loc": {"('Fish', 'app_alias', 128)": {"mod": [129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": ""}, "loctype": {"code": ["thefuck/shells.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "61e722aa126207efcdbc1ddcd4453854ad44ea09", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/10251", "iss_label": "", "title": "Extending Criterion", "body": "Unless I'm missing something, it's not completely trivial how one can use a custom `sklearn.tree._criterion.Criterion` for a decision tree. See my use case [here](https://stats.stackexchange.com/q/316954/98500).\r\n\r\nThings I have tried include:\r\n\r\n- Import the `ClassificationCriterion` in Python and subclass it. It seems that `node_impurity` and `children_impurity` do not get called, the impurity is always 0 (perhaps because they are `cdef` and not `cpdef`?). I'm also unsure what the parameters to `__new__` / `__cinit__` should be (e.g. `1` and `np.array([2], dtype='intp')` for a binary classification problem?), or how to pass them properly: I have to create the `Criterion` object from outside the tree to circumvent [the check on the `criterion` argument](https://github.com/scikit-learn/scikit-learn/blob/a24c8b464d094d2c468a16ea9f8bf8d42d949f84/sklearn/tree/tree.py#L324).\r\n\r\n- Extend `ClassificationCriterion` in a Cython file. This seems to work, but (a) it requires exporting `ClassificationCriterion` from `_criterion.pxd` and (b) it would be nice if it would be documented more extensively what should be done in `node_impurity` and `children_impurity`. I will post my code below once it seems to work correctly.\r\n\r\nMay I propose one of the following to make this easier?\r\n\r\n- Document what should be done to extend the class in Cython or Python - if Python should be allowed: I am aware of the performance issue with that, but in some cases it may be OK to do this in Python - I don't know.\r\n- Make it possible to pass a function or other object not extending `Criterion` to the tree, similar to how it is very easy to implement a custom scorer for validation functions. 
That would require changing the checks [here](https://github.com/scikit-learn/scikit-learn/blob/a24c8b464d094d2c468a16ea9f8bf8d42d949f84/sklearn/tree/tree.py#L324).", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/10325", "file_loc": {"base_commit": "61e722aa126207efcdbc1ddcd4453854ad44ea09", "files": [{"path": "sklearn/tree/_criterion.pxd", "status": "modified", "Loc": {"(None, None, None)": {"add": [67]}}}, {"path": "sklearn/tree/_criterion.pyx", "status": "modified", "Loc": {"(None, None, None)": {"mod": [215, 216, 707]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["sklearn/tree/_criterion.pxd", "sklearn/tree/_criterion.pyx"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "3d19272be75fe32edd4cf01cb2eeac2281305e42", "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/27682", "iss_label": "good first issue\ncython", "title": "MAINT Directly `cimport` interfaces from `std::algorithm`", "body": "Some Cython implementations use interfaces from the standard library of C++, namely `std::algorithm::move` and `std::algorithm::fill` from [`std::algorithm`](https://en.cppreference.com/w/cpp/algorithm/).\r\n\r\nBefore Cython 3, those interfaces had to be imported directly using the verbose syntax from Cython:\r\n - https://github.com/scikit-learn/scikit-learn/blob/5fc67aeb092d636895b599921283221a68c7a2ad/sklearn/metrics/_pairwise_distances_reduction/_radius_neighbors.pyx.tp#L22-L26\r\n - https://github.com/scikit-learn/scikit-learn/blob/5fc67aeb092d636895b599921283221a68c7a2ad/sklearn/metrics/_pairwise_distances_reduction/_middle_term_computer.pyx.tp#L28-L33\r\n\r\nCython 3 introduced the following line natively, for those interfaces. Those interfaces should now be `cimported` directly. That is one can replace the line shown above respectively with:\r\n\r\n```cython\r\nfrom libcpp.algorithm cimport move\r\nfrom libcpp.algorithm cimport fill\r\n```\r\n\r\nI believe this is a good first Cython issue.\r\n\r\nAny reader should feel free to pick it up. It might be possible that there is some context missing.\r\n\r\nPlease let me know if you need help. 
:slightly_smiling_face: ", "code": null, "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/28489", "commit_html_url": null, "file_loc": {"base_commit": "3d19272be75fe32edd4cf01cb2eeac2281305e42", "files": [{"path": "sklearn/metrics/_pairwise_distances_reduction/_middle_term_computer.pyx.tp", "status": "modified", "Loc": {"(None, None, 16)": {"add": [16]}, "(None, None, 28)": {"mod": [28, 29, 30, 31, 32, 33]}}}, {"path": "sklearn/metrics/_pairwise_distances_reduction/_radius_neighbors.pyx.tp", "status": "modified", "Loc": {"(None, None, 6)": {"add": [6]}, "(None, None, 22)": {"mod": [22, 23, 24, 25, 26]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["sklearn/metrics/_pairwise_distances_reduction/_radius_neighbors.pyx.tp", "sklearn/metrics/_pairwise_distances_reduction/_middle_term_computer.pyx.tp"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "fastapi", "repo_name": "fastapi", "base_commit": "033bc2a6c9aec3a245eb1f1b4fadb2fbb7a514b8", "iss_has_pr": 1, "iss_html_url": "https://github.com/fastapi/fastapi/issues/429", "iss_label": "bug\nreviewed", "title": "OpenAPI: HTTP_422 response does not use custom media_type", "body": "**Describe the bug**\r\nFastAPI automatically adds an HTTP_422 response to all paths in the OpenAPI specification that have parameters or request body. This response does not use the media_type of response_class if any custom defined. Furthermore, it overwrites any error object format with the default one.\r\n\r\n**To Reproduce**\r\nCreate a path with parameters and add custom response_class to decorator. Add custom exception handlers that reformat the default error responses as per your liking. Then observe generated openapi.json\r\n\r\n```python\r\nfrom fastapi import FastAPI, HTTPException\r\nfrom fastapi.exceptions import RequestValidationError\r\nfrom starlette import status\r\nfrom starlette.responses import JSONResponse\r\nfrom . 
import schemas\r\n\r\napp = FastAPI()\r\n\r\nclass JsonApiResponse(JSONResponse):\r\n media_type = 'application/vnd+json.api'\r\n\r\n@app.exception_handler(HTTPException)\r\nasync def http_exception_handler(request, exc: HTTPException) -> JsonApiResponse:\r\n headers = getattr(exc, \"headers\", None)\r\n content = schemas.ErrorResponse(errors=[dict(title=\"Bad request\", detail=exc.detail, status=exc.status_code)]).dict()\r\n status_code = exc.status_code\r\n if headers:\r\n return JsonApiResponse(content=content, status_code=status_code, headers=headers)\r\n else:\r\n return JsonApiResponse(content=content, status_code=status_code)\r\n\r\n@app.exception_handler(RequestValidationError)\r\nasync def request_validation_exception_handler(request, exc: RequestValidationError) -> JsonApiResponse:\r\n http422 = status.HTTP_422_UNPROCESSABLE_ENTITY\r\n return JsonApiResponse(\r\n content=schemas.ErrorResponse(errors=[\r\n dict(title=err['type'], detail=err['msg'], source='/'.join(err['loc']), status=http422)\r\n for err in exc.errors()\r\n ]).dict(),\r\n status_code=http422,\r\n )\r\n\r\n@app.post('/customers',\r\n status_code=status.HTTP_201_CREATED,\r\n response_model=schemas.CustomerDetailsResponse,\r\n response_class=JsonApiResponse,\r\n )\r\ndef customer_create(data: schemas.Customer = Body(..., media_type='application/vnd+json.api', embed=True)):\r\n created_customer = {**data.dict(), **{'id': '1'}}\r\n return {'data': created_customer}\r\n``` \r\n\r\nThe openapi.json will include the unwanted 422 response with the FastAPI default error object definitions:\r\n\r\n```yaml\r\n # ...\r\n '422':\r\n description: Validation Error\r\n content:\r\n application/json:\r\n schema:\r\n \"$ref\": \"#/components/schemas/HTTPValidationError\"\r\n```\r\n\r\n**Expected behavior**\r\nAt least, the media_type of the response_class should be respected. But the best would be if the 422 would not be added to the specification unless requested via the path decorator. 
Or if the 422 definitions of mine were respected.\r\n\r\n```python\r\n@app.post('/customers',\r\n status_code=status.HTTP_201_CREATED,\r\n response_model=schemas.CustomerDetailsResponse,\r\n response_class=JsonApiResponse,\r\n responses={\r\n 422: {\r\n 'model': schemas.ErrorResponse\r\n },\r\n })\r\ndata: schemas.Customer = Body(..., media_type='application/vnd+json.api', embed=True)):\r\n pass\r\n```\r\n\r\n**Environment:**\r\n - OS: masOS 10.14.6\r\n - Python: 3.6.5\r\n - FastAPI: 0.35.0", "pr_html_url": "https://github.com/fastapi/fastapi/pull/437", "file_loc": {"base_commit": "033bc2a6c9aec3a245eb1f1b4fadb2fbb7a514b8", "files": [{"path": "fastapi/openapi/utils.py", "status": "modified", "Loc": {"(None, 'get_openapi_path', 142)": {"add": [227], "mod": [162, 163, 164, 165, 175, 176, 177, 178, 179, 191, 219, 220]}, "(None, 'get_openapi_operation_parameters', 72)": {"mod": [74, 75, 80, 81, 82, 94]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["fastapi/openapi/utils.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "All-Hands-AI", "repo_name": "OpenHands", "base_commit": "d692a72bf3809df35d802041211fcd81d56b1dc6", "iss_has_pr": 1, "iss_html_url": "https://github.com/All-Hands-AI/OpenHands/issues/710", "iss_label": "enhancement\nseverity:low", "title": "Tune rate-limit backoff", "body": "**What problem or use case are you trying to solve?**\r\nDue to the AnthropicException error, which indicates that the request limit has been reached, it is necessary to increase the interval between requests. This will prevent system overload and provide a stable service.\r\n\r\n**Describe the UX of the solution you'd like**\r\nFrom a user experience (UX) perspective, the most important aspect is to send requests at an appropriate interval. Sending requests too frequently will cause errors, while sending requests at too long an interval will result in longer response times. Therefore, finding the right balance is crucial. Additionally, informing users about the current status and estimated wait time would also contribute to a good UX.\r\n\r\n**Do you have thoughts on the technical implementation?**\r\nFrom a technical implementation standpoint, a mechanism to monitor and manage request limits is required. For example, tracking the number of requests and the time they were made, and stopping requests for a certain period of time once the limit is reached. Additionally, implementing an algorithm to dynamically adjust the request interval could be more efficient.\r\n\r\n**Additional context**\r\nAn additional consideration is the error handling mechanism. When a request limit error occurs, appropriate exception handling and retry logic should be implemented. 
Additionally, through logging and monitoring systems, the system's status should be continuously monitored, and issues should be promptly detected.", "pr_html_url": "https://github.com/All-Hands-AI/OpenHands/pull/1120", "file_loc": {"base_commit": "d692a72bf3809df35d802041211fcd81d56b1dc6", "files": [{"path": "README.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [179]}}}, {"path": "agenthub/monologue_agent/utils/memory.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0, 7, 12]}}}, {"path": "agenthub/monologue_agent/utils/monologue.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [6], "mod": [1]}, "('Monologue', 'get_total_length', 44)": {"mod": [56]}, "('Monologue', 'condense', 59)": {"mod": [67, 77, 78]}}}, {"path": "opendevin/config.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [7, 42, 49], "mod": [23, 24]}}}, {"path": "opendevin/controller/agent_controller.py", "status": "modified", "Loc": {"('AgentController', 'step', 154)": {"add": [175], "mod": [173, 181, 182, 185, 186, 188, 189, 191]}, "(None, None, None)": {"mod": [2, 6, 7]}}}, {"path": "opendevin/llm/llm.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [15], "mod": [3, 4, 8, 13, 14]}, "('LLM', None, 18)": {"add": [18]}, "('LLM', '__init__', 19)": {"add": [25], "mod": [23, 24, 27, 38, 39, 40, 41, 42, 46]}}}, {"path": "opendevin/schema/config.py", "status": "modified", "Loc": {"('ConfigType', None, 4)": {"mod": [17]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["agenthub/monologue_agent/utils/memory.py", "opendevin/schema/config.py", "opendevin/llm/llm.py", "agenthub/monologue_agent/utils/monologue.py", "opendevin/config.py", "opendevin/controller/agent_controller.py"], "doc": ["README.md"], "test": [], "config": [], "asset": []}}, {"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "d16396138e8a61f9bc2c3c36ae8c4d7420d23782", "iss_has_pr": 1, "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/663", "iss_label": "enhancement\nsweep", "title": "Sweep: Bump the release version in pyproject.toml", "body": "\n\n\n
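One standard shape for the backoff described above is an exponential wait with jitter around the completion call, retrying only on rate-limit errors; `tenacity` provides the pieces. A sketch under the assumption of a provider-raised `RateLimitError` (both the exception class and `call_llm` are stand-ins, not OpenHands' actual API):

```python
# Hedged sketch: exponential backoff with jitter around an LLM call.
# RateLimitError and call_llm are illustrative stand-ins.
import logging

from tenacity import (
    retry,
    retry_if_exception_type,
    stop_after_attempt,
    wait_random_exponential,
)

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


class RateLimitError(Exception):
    """Stand-in for the provider's rate-limit exception."""


@retry(
    reraise=True,
    retry=retry_if_exception_type(RateLimitError),
    # Randomized exponential wait, capped at 8s here so the demo is quick;
    # a real deployment would use a larger cap and more attempts.
    wait=wait_random_exponential(multiplier=1, max=8),
    stop=stop_after_attempt(3),
)
def call_llm(prompt: str) -> str:
    logger.info('sending request...')
    raise RateLimitError('request limit reached')  # simulate throttling


if __name__ == '__main__':
    try:
        call_llm('hello')
    except RateLimitError:
        logger.info('gave up after backing off')
```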
\nChecklist\n\n- [X] `pyproject.toml`\n> \u2022 Locate the line where the version number is specified. It should be under the [project] section and the line should start with \"version = \".\n> \u2022 Determine the new version number according to the semantic versioning rules. If only minor changes or bug fixes have been made, increment the patch version. If new features have been added in a backwards-compatible manner, increment the minor version. If changes have been made that are not backwards-compatible, increment the major version.\n> \u2022 Update the version number in the pyproject.toml file. Replace the old version number with the new version number.\n> \u2022 Check if there are any dependencies or other parts of the project that rely on the version number. If there are, update these parts of the project as well.\n> \u2022 Commit the changes and push to the repository.\n\n
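Since the checklist above reduces to editing one `version = "X.Y.Z"` line, the bump can be scripted with a regex over `pyproject.toml`. A sketch applying the semantic-versioning rules the checklist spells out (the file layout is assumed; no TOML-writing library is required):

```python
# Hedged sketch: bump the major/minor/patch component of the
# `version = "X.Y.Z"` line in pyproject.toml, per the checklist above.
import re
from pathlib import Path


def bump_version(pyproject: Path, part: str = 'patch') -> str:
    text = pyproject.read_text()
    pattern = re.compile(
        r'^(version\s*=\s*")(\d+)\.(\d+)\.(\d+)(")', re.MULTILINE)
    match = pattern.search(text)
    if match is None:
        raise ValueError('no semver version line found in pyproject.toml')
    major, minor, patch = int(match[2]), int(match[3]), int(match[4])
    if part == 'major':      # backwards-incompatible change
        major, minor, patch = major + 1, 0, 0
    elif part == 'minor':    # backwards-compatible feature
        minor, patch = minor + 1, 0
    else:                    # bug fix or other small change
        patch += 1
    new_version = f'{major}.{minor}.{patch}'
    pyproject.write_text(
        pattern.sub(rf'\g<1>{new_version}\g<5>', text, count=1))
    return new_version


if __name__ == '__main__':
    print(bump_version(Path('pyproject.toml')))
```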
\n", "pr_html_url": "https://github.com/AntonOsika/gpt-engineer/pull/666", "file_loc": {"base_commit": "d16396138e8a61f9bc2c3c36ae8c4d7420d23782", "files": [{"path": "pyproject.toml", "status": "modified", "Loc": {"(None, None, None)": {"mod": [6]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": [], "doc": [], "test": [], "config": ["pyproject.toml"], "asset": []}}, {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "e748ca50ca3e83ac703e02538a27236fedd53a7d", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/728", "iss_label": "bug", "title": "get_func_args maximum recursion", "body": "https://github.com/scrapy/scrapy/blob/master/scrapy/utils/python.py#L149\n\nToday I was working on a project were I have to skip the first item of a list, and then join the rest. Instead of writing the typical slice I tried something much more good looking `Compose(itemgetter(slice(1, None)), Join())` but I found out this maximum recursion. I did some research and ask @dangra about it, but nothing came up.\nI think the main problem is that `inspect` isn't able recognize `itemgetter` as `something`.\n\n``` python\n>>> inspect.getmembers(itemgetter(2))\n[('__call__',\n ),\n ('__class__', ),\n ('__delattr__',\n ),\n ('__doc__',\n 'itemgetter(item, ...) --> itemgetter object\\n\\nReturn a callable object that fetches the given item(s) from its operand.\\nAfter, f=itemgetter(2), the call f(r) returns r[2].\\nAfter, g=itemgetter(2,5,3), the call g(r) returns (r[2], r[5], r[3])'),\n ('__format__',\n ),\n ('__getattribute__',\n ),\n ('__hash__',\n ),\n ('__init__',\n ),\n ('__new__', ),\n ('__reduce__',\n ),\n ('__reduce_ex__',\n ),\n ('__repr__',\n ),\n ('__setattr__',\n ),\n ('__sizeof__',\n ),\n ('__str__',\n ),\n ('__subclasshook__',\n )]\n>>> inspect.getargspec(itemgetter(2).__call__)\nTraceback (most recent call last):\n File \"\", line 1, in \n File \"/usr/lib/python2.7/inspect.py\", line 815, in getargspec\n raise TypeError('{!r} is not a Python function'.format(func))\nTypeError: is not a Python function\n>>> inspect.getargspec(itemgetter(slice(None, 2)).__init__)\nTraceback (most recent call last):\n File \"\", line 1, in \n File \"/usr/lib/python2.7/inspect.py\", line 815, in getargspec\n raise TypeError('{!r} is not a Python function'.format(func))\nTypeError: is not a Python function\n```\n\nEDIT: Looks like the reason was C functions weren't covered by inspect module until Python 3.4 (http://bugs.python.org/issue17481)\n", "pr_html_url": "https://github.com/scrapy/scrapy/pull/809", "file_loc": {"base_commit": "e748ca50ca3e83ac703e02538a27236fedd53a7d", "files": [{"path": "scrapy/tests/test_utils_python.py", "status": "modified", "Loc": {"('UtilsPythonTestCase', 'test_get_func_args', 158)": {"add": [195]}}}, {"path": "scrapy/utils/python.py", "status": "modified", "Loc": {"(None, 'get_func_args', 134)": {"add": [149]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": ""}, "loctype": {"code": ["scrapy/utils/python.py"], "doc": [], "test": ["scrapy/tests/test_utils_python.py"], "config": [], "asset": []}}, {"organization": "huggingface", "repo_name": "transformers", "base_commit": "626a0a01471accc32ded29ccca3ed93c4995fcd6", "iss_html_url": 
"https://github.com/huggingface/transformers/issues/9954", "iss_label": "TensorFlow\nTests\nGood First Issue", "title": "[Good first issue] LXMERT TensorFlow Integration tests", "body": "The TensorFlow implementation of the LXMERT model currently has no integration tests. This is problematic as the behavior can diverge without being noticed.\r\n\r\nThe [test_modeling_tf_lxmert.py](https://github.com/huggingface/transformers/blob/master/tests/test_modeling_tf_lxmert.py) file should be updated to include integration testing.\r\n\r\nAn example of a good modeling integration test is visible in the [test_modeling_tf_bert.py#L365-L387](https://github.com/huggingface/transformers/blob/1809de5165804666ba6c6a02a9d177f6683869cc/tests/test_modeling_tf_bert.py#L365-L387) file:\r\n\r\nhttps://github.com/huggingface/transformers/blob/1809de5165804666ba6c6a02a9d177f6683869cc/tests/test_modeling_tf_bert.py#L365-L387\r\n\r\nSome additional tips:\r\n- The test must be marked as slow using the `@slow` decorator, so as to be run *daily*, and not on every commit of every branch/pull request of this repository.\r\n- The test must be decorated with the `@require_tf` decorator so as to only be run in environments using PyTorch.\r\n- A single test is necessary. If you feel like implementing multiple of these, then sharing the same checkpoint would be ideal so as to reduce download time.", "code": null, "pr_html_url": "https://github.com/huggingface/transformers/pull/12497", "commit_html_url": null, "file_loc": {"base_commit": "626a0a01471accc32ded29ccca3ed93c4995fcd6", "files": [{"path": "tests/test_modeling_tf_lxmert.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [19]}, "('TFLxmertModelTest', 'test_saved_model_creation_extended', 710)": {"add": [770]}, "('TFLxmertModelTest', 'test_pt_tf_model_equivalence', 487)": {"mod": [558]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": [], "doc": [], "test": ["tests/test_modeling_tf_lxmert.py"], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "710df2140555030e4d86e669d6df2deb852bcaf5", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/24115", "iss_label": "Bug\nDatetime\nAlgos", "title": "DTA/TDA/PA inplace methods should actually be inplace", "body": "At the moment we are using the implementations designed for Index subclasses, which return new objects.", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/30505", "file_loc": {"base_commit": "710df2140555030e4d86e669d6df2deb852bcaf5", "files": [{"path": "doc/source/whatsnew/v1.0.0.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [719]}}}, {"path": "pandas/core/arrays/datetimelike.py", "status": "modified", "Loc": {"('DatetimeLikeArrayMixin', None, 316)": {"mod": [1314]}, "('DatetimeLikeArrayMixin', '__iadd__', 1315)": {"mod": [1316, 1317]}, "('DatetimeLikeArrayMixin', '__isub__', 1319)": {"mod": [1320, 1321]}}}, {"path": "pandas/tests/arrays/test_datetimelike.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [227]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code\nDoc"}, "loctype": {"code": ["pandas/core/arrays/datetimelike.py"], "doc": ["doc/source/whatsnew/v1.0.0.rst"], "test": 
["pandas/tests/arrays/test_datetimelike.py"], "config": [], "asset": []}}, {"organization": "3b1b", "repo_name": "manim", "base_commit": "0092ac9a2a20873c7c077cefc4d68397a6df2ada", "iss_has_pr": 1, "iss_html_url": "https://github.com/3b1b/manim/issues/30", "iss_label": "", "title": "TypeError while running a triangle.py scene", "body": "I got an error when I try to run some of the [old_projects/triangle_of_power/triangle.py](https://github.com/3b1b/manim/blob/master/old_projects/triangle_of_power/triangle.py) scene.\r\nMy command is:\r\n```\r\npython extract_scene.py -p old_projects/triangle_of_power/triangle.py DrawInsideTriangle\r\n```\r\n\r\nBut after that I get:\r\n```\r\nTraceback (most recent call last):\r\n File \"extract_scene.py\", line 187, in main\r\n handle_scene(SceneClass(**scene_kwargs), **config)\r\n File \"/home/loic/Sources/Git/manim/scene/scene.py\", line 47, in __init__\r\n self.construct(*self.construct_args)\r\n File \"/home/loic/Sources/Git/manim/./old_projects/triangle_of_power/triangle.py\", line 527, in construct\r\n top = TOP()\r\n File \"/home/loic/Sources/Git/manim/./old_projects/triangle_of_power/triangle.py\", line 91, in __init__\r\n VMobject.__init__(self, **kwargs)\r\n File \"/home/loic/Sources/Git/manim/mobject/mobject.py\", line 33, in __init__\r\n self.generate_points()\r\n File \"/home/loic/Sources/Git/manim/./old_projects/triangle_of_power/triangle.py\", line 104, in generate_points\r\n self.set_values(self.x, self.y, self.z)\r\n File \"/home/loic/Sources/Git/manim/./old_projects/triangle_of_power/triangle.py\", line 108, in set_values\r\n self.set_value(i, mob)\r\n File \"/home/loic/Sources/Git/manim/./old_projects/triangle_of_power/triangle.py\", line 111, in set_value\r\n self.values[index] = self.put_on_vertex(index, value)\r\n File \"/home/loic/Sources/Git/manim/./old_projects/triangle_of_power/triangle.py\", line 125, in put_on_vertex\r\n value.center()\r\n File \"/home/loic/Sources/Git/manim/mobject/mobject.py\", line 230, in center\r\n self.shift(-self.get_center())\r\n File \"/home/loic/Sources/Git/manim/mobject/mobject.py\", line 124, in shift\r\n mob.points += total_vector\r\nTypeError: Cannot cast ufunc add output from dtype('float64') to dtype('int64') with casting rule 'same_kind'\r\n```\r\nAnd then the fail sound.\r\n\r\nIs there something wrong in what am I doing?", "pr_html_url": "https://github.com/3b1b/manim/pull/31", "file_loc": {"base_commit": "0092ac9a2a20873c7c077cefc4d68397a6df2ada", "files": [{"path": "mobject/mobject.py", "status": "modified", "Loc": {"('Mobject', 'shift', 121)": {"mod": [123]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["mobject/mobject.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "5cae13fd0a9b6e5a6f3f39c798cf693675795d89", "iss_has_pr": 1, "iss_html_url": "https://github.com/geekan/MetaGPT/issues/733", "iss_label": "", "title": "LLM may generate comments inside [CONTENT][/CONTENT] , which causes parsing the JSON to fail.", "body": "**Bug description**\r\n```\r\nparse json from content inside [CONTENT][/CONTENT] failed at retry 1, exp: Expecting ',' delimiter: line 6 column 27 (char 135)\r\n```\r\n\r\n\r\n**Bug solved method**\r\n\r\n\r\n\r\nPerhaps we could consider adding a constraint to the prompt, indicating not to generate comments inside [CONTENT][/CONTENT], 
or alternatively, we could trim the comments from the LLM's output.\r\n\r\n**Environment information**\r\n\r\n\r\n- LLM type and model name: OPENAI gpt-4-1106-preview\r\n- System version: macos 12.5.1\r\n- Python version: python 3.9\r\n\r\n\r\n\r\n- packages version: metagpt commit 82a5eec72707dee44174eae8f8ff1490a6819ecd\r\n- installation method: pip install from source\r\n\r\n**Screenshots or logs**\r\n\r\n\r\n```\r\n[CONTENT]\r\n{\r\n \"Required Python packages\": [\r\n \"numpy==1.21.2\",\r\n \"Kivy==2.0.0\",\r\n \"pygame==2.0.1\",\r\n \"sqlite3==2.6.0\" # sqlite3 is included in Python's standard library, but versioning is for consistency\r\n ],\r\n \"Required Other language third-party packages\": [\r\n \"No third-party dependencies required\"\r\n ],\r\n \"Logic Analysis\": [\r\n [\r\n \"game.py\",\r\n \"Contains Game class with core game logic, uses numpy for array manipulation, and interacts with UI and Storage classes\"\r\n ],\r\n [\r\n \"main.py\",\r\n \"Contains main function, initializes the game by calling start_new_game() from Game class\"\r\n ],\r\n [\r\n \"ui.py\",\r\n \"Contains UI class for user interface, uses Kivy for rendering, and interacts with Game class\"\r\n ],\r\n [\r\n \"storage.py\",\r\n \"Contains Storage class for saving and loading high scores using SQLite\"\r\n ]\r\n ],\r\n \"Task list\": [\r\n \"storage.py\",\r\n \"game.py\",\r\n \"ui.py\",\r\n \"main.py\"\r\n ],\r\n \"Full API spec\": \"\",\r\n \"Shared Knowledge\": \"'game.py' contains the Game class which is central to the game logic and is used by both 'ui.py' for rendering the game state and 'storage.py' for saving the high score.\",\r\n \"Anything UNCLEAR\": \"The monetization strategy for the game is not specified. Will the game include ads, in-app purchases, or be a paid app? This will affect the design of the user interface and potentially the choice of libraries or frameworks.\"\r\n}\r\n[/CONTENT]\r\n2024-01-10 14:58:53.419 | INFO | metagpt.utils.cost_manager:update_cost:48 - Total running cost: $0.199 | Max budget: $10.000 | Current cost: $0.021, prompt_tokens: 1021, completion_tokens: 352\r\n2024-01-10 14:58:53.423 | WARNING | metagpt.utils.repair_llm_raw_output:run_and_passon:235 - parse json from content inside [CONTENT][/CONTENT] failed at retry 1, exp: Expecting ',' delimiter: line 6 column 27 (char 135)\r\n2024-01-10 14:58:53.424 | INFO | metagpt.utils.repair_llm_raw_output:repair_invalid_json:204 - repair_invalid_json, raw error: Expecting ',' delimiter: line 6 column 27 (char 135)\r\n2024-01-10 14:58:53.424 | ERROR | metagpt.utils.common:log_it:438 - Finished call to 'metagpt.actions.action_node.ActionNode._aask_v1' after 222.144(s), this was the 6th time calling it. 
exp: RetryError[]\r\n2024-01-10 14:58:53.424 | WARNING | metagpt.utils.common:wrapper:510 - There is a exception in role's execution, in order to resume, we delete the newest role communication message in the role's memory.\r\n2024-01-10 14:58:53.430 | ERROR | metagpt.utils.common:wrapper:492 - Exception occurs, start to serialize the project, exp:\r\n```\r\n", "pr_html_url": "https://github.com/geekan/MetaGPT/pull/963", "file_loc": {"base_commit": "5cae13fd0a9b6e5a6f3f39c798cf693675795d89", "files": [{"path": "config/config2.example.yaml", "status": "modified", "Loc": {"(None, None, None)": {"add": [15], "mod": [6]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": [], "test": [], "config": ["config/config2.example.yaml"], "asset": []}}, {"organization": "huggingface", "repo_name": "transformers", "base_commit": "da1d0d404f05523d37b37207a4c1ff419cc1f47f", "iss_has_pr": 1, "iss_html_url": "https://github.com/huggingface/transformers/issues/26809", "iss_label": "Feature request", "title": "Add Mistral Models to Flax", "body": "### Feature request\r\n\r\nI would like to implement the ~~Llama~~ Mistral model in flax\r\n\r\n### Motivation\r\n\r\nI've been trying to get familiar with jax and as such I started migrating the llama model, and I think I am at a point where both models are quite comparable in outcome\r\n\r\n### Your contribution\r\n\r\nYes I could submit a PR with the model implementation", "pr_html_url": "https://github.com/huggingface/transformers/pull/24587", "file_loc": {"base_commit": "da1d0d404f05523d37b37207a4c1ff419cc1f47f", "files": [{"path": "docs/source/en/index.md", "status": "modified", "Loc": {"(None, None, None)": {"mod": [97, 170, 171]}}}, {"path": "docs/source/en/model_doc/llama.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [52, 114]}}}, {"path": "src/transformers/__init__.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [4556, 8633]}}}, {"path": "src/transformers/modeling_flax_utils.py", "status": "modified", "Loc": {"(None, 'append_call_sample_docstring', 1270)": {"add": [1277], "mod": [1270]}}}, {"path": "src/transformers/models/auto/modeling_flax_auto.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [45, 148]}}}, {"path": "src/transformers/models/bloom/modeling_bloom.py", "status": "modified", "Loc": {"('BloomPreTrainedModel', '_convert_to_bloom_cache', 491)": {"mod": [492]}}}, {"path": "src/transformers/models/fuyu/image_processing_fuyu.py", "status": "modified", "Loc": {"(None, 'make_list_of_list_of_images', 56)": {"mod": [57]}}}, {"path": "src/transformers/models/llama/__init__.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [18, 57, 85]}}}, {"path": "src/transformers/models/mpt/modeling_mpt.py", "status": "modified", "Loc": {"('MptPreTrainedModel', '_convert_to_mpt_cache', 267)": {"mod": [268]}}}, {"path": "src/transformers/utils/dummy_flax_objects.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [802]}}}, {"path": "tests/models/llama/test_modeling_llama.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [36]}, "('LlamaModelTester', 'prepare_config_and_inputs', 103)": {"mod": [108]}}}, {"path": "tests/models/mistral/test_modeling_mistral.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [37]}, "('MistralModelTester', 'prepare_config_and_inputs', 105)": {"mod": [110]}}}, 
{"path": "tests/models/persimmon/test_modeling_persimmon.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [35]}, "('PersimmonModelTester', 'prepare_config_and_inputs', 102)": {"mod": [107]}}}, {"path": "tests/models/phi/test_modeling_phi.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [41]}}}, {"path": "utils/check_docstrings.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [235]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["utils/check_docstrings.py", "src/transformers/__init__.py", "src/transformers/utils/dummy_flax_objects.py", "src/transformers/modeling_flax_utils.py", "src/transformers/models/mpt/modeling_mpt.py", "src/transformers/models/bloom/modeling_bloom.py", "src/transformers/models/fuyu/image_processing_fuyu.py", "src/transformers/models/auto/modeling_flax_auto.py", "src/transformers/models/llama/__init__.py"], "doc": ["docs/source/en/model_doc/llama.md", "docs/source/en/index.md"], "test": ["tests/models/mistral/test_modeling_mistral.py", "tests/models/phi/test_modeling_phi.py", "tests/models/persimmon/test_modeling_persimmon.py", "tests/models/llama/test_modeling_llama.py"], "config": [], "asset": []}}, {"organization": "python", "repo_name": "cpython", "base_commit": "0aa58fa7a62cd0ee7ec27fa87122425aeff0467d", "iss_has_pr": 1, "iss_html_url": "https://github.com/python/cpython/issues/91043", "iss_label": "build\n3.11", "title": "./Programs/_freeze_module fails with MSAN: Uninitialized value was created by an allocation of 'stat.i'", "body": "BPO | [46887](https://bugs.python.org/issue46887)\n--- | :---\nNosy | @vstinner\nPRs |
  • python/cpython#31633
\n\n*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*\n\n
    \n\nGitHub fields:\n```python\nassignee = None\nclosed_at = None\ncreated_at = \nlabels = ['build', '3.11']\ntitle = \"./Programs/_freeze_module fails with MSAN: Uninitialized value was created by an allocation of 'stat.i'\"\nupdated_at = \nuser = 'https://github.com/vstinner'\n```\n\nbugs.python.org fields:\n```python\nactivity = \nactor = 'vstinner'\nassignee = 'none'\nclosed = False\nclosed_date = None\ncloser = None\ncomponents = ['Build']\ncreation = \ncreator = 'vstinner'\ndependencies = []\nfiles = []\nhgrepos = []\nissue_num = 46887\nkeywords = ['patch']\nmessage_count = 6.0\nmessages = ['414249', '414264', '414267', '414268', '414269', '414271']\nnosy_count = 1.0\nnosy_names = ['vstinner']\npr_nums = ['31633']\npriority = 'normal'\nresolution = None\nstage = 'patch review'\nstatus = 'open'\nsuperseder = None\ntype = None\nurl = 'https://bugs.python.org/issue46887'\nversions = ['Python 3.11']\n```\n\n

    \n", "pr_html_url": "https://github.com/python/cpython/pull/102510", "file_loc": {"base_commit": "0aa58fa7a62cd0ee7ec27fa87122425aeff0467d", "files": [{"path": "Objects/longobject.c", "status": "modified", "Loc": {"(None, None, 140)": {"add": [165]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["Objects/longobject.c"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "0b74c72e1c7fe320440fa97a3d256107ea329307", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/6403", "iss_label": "Bug\nIO Excel", "title": "ExcelFile parse of empty sheet fails with \"IndexError: list index out of range\"", "body": "Using pandas 0.13.1 on OS X Mavericks to parse a blank Excel spreadsheet causes \"IndexError: list index out of range\". Apparently the default header=0 in `_parse_excel` causes the execution of `_trim_excel_header(data[header])`. Perhaps when nrows==0 this should not be executed.\n\n``` python\nimport pandas as pd\nxl_file = pd.ExcelFile('blank.xlsx')\nxl_file.parse('Sheet1') #Sheet1 has no data\n```\n\nSTDERR:\n\n```\nTraceback (most recent call last):\n File \"/Users/myourshaw/lab/pypeline/python2/excel_example.py\", line 10, in \n xl_file.parse('Sheet1')\n File \"/usr/local/Cellar/python/2.7.6/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pandas/io/excel.py\", line 208, in parse\n **kwds)\n File \"/usr/local/Cellar/python/2.7.6/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pandas/io/excel.py\", line 291, in _parse_excel\n data[header] = _trim_excel_header(data[header])\nIndexError: list index out of range\n```\n", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/10376", "file_loc": {"base_commit": "0b74c72e1c7fe320440fa97a3d256107ea329307", "files": [{"path": "ci/requirements-3.4.txt", "status": "modified", "Loc": {"(None, None, None)": {"add": [5]}}}, {"path": "ci/requirements-3.4_SLOW.txt", "status": "modified", "Loc": {"(None, None, None)": {"add": [5]}}}, {"path": "doc/source/install.rst", "status": "modified", "Loc": {"(None, None, None)": {"mod": [252, 255]}}}, {"path": "doc/source/io.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [2184], "mod": [2133]}}}, {"path": "doc/source/whatsnew/v0.17.0.txt", "status": "modified", "Loc": {"(None, None, None)": {"add": [40, 55, 63]}}}, {"path": "pandas/core/frame.py", "status": "modified", "Loc": {"('DataFrame', 'to_excel', 1194)": {"add": [1248]}}}, {"path": "pandas/io/excel.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [11], "mod": [16]}, "('ExcelFile', '_parse_excel', 322)": {"add": [420]}, "(None, '_conv_value', 467)": {"add": [476]}, "('ExcelWriter', None, 482)": {"add": [499]}, "('_XlwtWriter', '__init__', 1159)": {"add": [1162]}, "('_XlsxWriter', 'write_cells', 1300)": {"add": [1313], "mod": [1339]}, "('ExcelWriter', '__new__', 522)": {"mod": [524, 526]}, "('ExcelWriter', '__init__', 574)": {"mod": [577]}}}, {"path": "pandas/io/tests/test_excel.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [522, 1220], "mod": [3]}, "('ExcelReaderTests', 'test_creating_and_reading_multiple_sheets', 455)": {"mod": [474]}}}, {"path": "vb_suite/packers.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [9, 208]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": 
[], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["pandas/core/frame.py", "vb_suite/packers.py", "pandas/io/excel.py"], "doc": ["doc/source/install.rst", "doc/source/io.rst", "doc/source/whatsnew/v0.17.0.txt"], "test": ["pandas/io/tests/test_excel.py"], "config": ["ci/requirements-3.4.txt", "ci/requirements-3.4_SLOW.txt"], "asset": []}}, {"organization": "langflow-ai", "repo_name": "langflow", "base_commit": "395c2d7372dffcf1d4f9577f623a2966183595d9", "iss_has_pr": 1, "iss_html_url": "https://github.com/langflow-ai/langflow/issues/2126", "iss_label": "bug", "title": "Error in the Code Export: Boolean values are in the incorrect syntax. 'false' should be changed to 'False', 'true' should be changed to 'True'.", "body": "Error in the Code Export: Boolean values are in the incorrect syntax. 'false' should be changed to 'False', 'true' should be changed to 'True'.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\nclick to export code, and turn on tweaks\r\n\r\n**Screenshots**\r\n\"Screenshot\r\n\r\n\r\n\r\n", "pr_html_url": "https://github.com/langflow-ai/langflow/pull/2130", "file_loc": {"base_commit": "395c2d7372dffcf1d4f9577f623a2966183595d9", "files": [{"path": "src/frontend/src/modals/apiModal/utils/get-python-api-code.tsx", "status": "modified", "Loc": {"(None, None, None)": {"add": [14], "mod": [37]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["src/frontend/src/modals/apiModal/utils/get-python-api-code.tsx"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "3b1b", "repo_name": "manim", "base_commit": "2cf3d4dbf9da66cbff30f54a032b9c60d6e6073c", "iss_has_pr": 1, "iss_html_url": "https://github.com/3b1b/manim/issues/401", "iss_label": "", "title": "The video doesn't concatenate, I can only get partial videos", "body": "I have only the partial videos with the next error:\r\n\r\n\"[concat @ 000001ff22102900] Impossible to open '0.mp4'\r\nmedia\\videos\\example_scenes\\480p15\\partial_movie_files\\WriteStuff\\partial_movie_file_list.txt: No such file or directory\r\nFile ready at media\\videos\\example_scenes\\480p15\\WriteStuff.mp4\"\r\n\r\nBut I don't have the video WriteStuff.mp4.\r\n\r\nPlease help me", "pr_html_url": "https://github.com/3b1b/manim/pull/402", "file_loc": {"base_commit": "2cf3d4dbf9da66cbff30f54a032b9c60d6e6073c", "files": [{"path": "manimlib/scene/scene.py", "status": "modified", "Loc": {"('Scene', 'combine_movie_files', 758)": {"add": [782, 799], "mod": [798]}}}, {"path": "manimlib/utils/output_directory_getters.py", "status": "modified", "Loc": {"(None, 'guarantee_existance', 15)": {"mod": [18]}, "(None, 'get_sorted_integer_files', 53)": {"mod": [81]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["manimlib/scene/scene.py", "manimlib/utils/output_directory_getters.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "Textualize", "repo_name": "rich", "base_commit": "2ee992b17ef5ff3c34f89545b0d57ad4690a64fc", "iss_has_pr": 1, "iss_html_url": "https://github.com/Textualize/rich/issues/2422", "iss_label": "Needs triage", "title": "[BUG] Databricks is not identified as Jupyter", "body": "You 
may find a solution to your problem in the [docs](https://rich.readthedocs.io/en/latest/introduction.html) or [issues](https://github.com/willmcgugan/rich/issues).\r\n\r\n**Describe the bug**\r\n\r\nDatabricks is not considered as \"Jupyter\", therefore `JUPYTER_LINES` and `JUPYTER_COLUMNS` has no effect on the console log\r\n\r\nProvide a minimal code example that demonstrates the issue if you can. If the issue is visual in nature, consider posting a screenshot.\r\n\r\nDatabricks has a Ipython type `InteractiveShell` which is neither `Ipython` or `ZMQInteractiveShell`\r\n\r\n![image](https://user-images.githubusercontent.com/18221871/181251880-531dbfc5-0f35-44ba-a1c2-c07e0a075cc7.png)\r\n\r\n\r\n```python\r\ndef _is_jupyter() -> bool: # pragma: no cover\r\n \"\"\"Check if we're running in a Jupyter notebook.\"\"\"\r\n try:\r\n get_ipython # type: ignore[name-defined]\r\n except NameError:\r\n return False\r\n ipython = get_ipython() # type: ignore[name-defined]\r\n shell = ipython.__class__.__name__\r\n if \"google.colab\" in str(ipython.__class__) or shell == \"ZMQInteractiveShell\":\r\n return True # Jupyter notebook or qtconsole\r\n elif shell == \"TerminalInteractiveShell\":\r\n return False # Terminal running IPython\r\n else:\r\n return False # Other type (?)\r\n```\r\n\r\nIf you're using Rich in a terminal:\r\n\r\n```\r\npython -m rich.diagnose\r\npip freeze | grep rich\r\n```\r\n\r\nIf you're using Rich in a Jupyter Notebook, run the following snippet in a cell\r\nand paste the output in your bug report.\r\n\r\n```python\r\nfrom rich.diagnose import report\r\nreport()\r\n```\r\n\r\n```\r\n\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\r\n\u2502 A high level console interface. 
\u2502\r\n\u2502 \u2502\r\n\u2502 \u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e \u2502\r\n\u2502 \u2502 \u2502 \u2502\r\n\u2502 \u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f \u2502\r\n\u2502 \u2502\r\n\u2502 color_system = None \u2502\r\n\u2502 encoding = 'utf-8' \u2502\r\n\u2502 file = \u2502\r\n\u2502 height = 25 \u2502\r\n\u2502 is_alt_screen = False \u2502\r\n\u2502 is_dumb_terminal = False \u2502\r\n\u2502 is_interactive = False \u2502\r\n\u2502 is_jupyter = False \u2502\r\n\u2502 is_terminal = False \u2502\r\n\u2502 legacy_windows = False \u2502\r\n\u2502 no_color = False \u2502\r\n\u2502 options = ConsoleOptions( \u2502\r\n\u2502 size=ConsoleDimensions(width=80, height=25), \u2502\r\n\u2502 legacy_windows=False, \u2502\r\n\u2502 min_width=1, \u2502\r\n\u2502 max_width=80, \u2502\r\n\u2502 is_terminal=False, \u2502\r\n\u2502 encoding='utf-8', \u2502\r\n\u2502 max_height=25, \u2502\r\n\u2502 justify=None, \u2502\r\n\u2502 overflow=None, \u2502\r\n\u2502 no_wrap=False, \u2502\r\n\u2502 highlight=None, \u2502\r\n\u2502 markup=None, \u2502\r\n\u2502 height=None \u2502\r\n\u2502 ) \u2502\r\n\u2502 quiet = False \u2502\r\n\u2502 record = False \u2502\r\n\u2502 safe_box = True \u2502\r\n\u2502 size = ConsoleDimensions(width=80, height=25) \u2502\r\n\u2502 soft_wrap = False \u2502\r\n\u2502 stderr = False \u2502\r\n\u2502 style = None \u2502\r\n\u2502 tab_size = 8 \u2502\r\n\u2502 width = 80 \u2502\r\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\r\n\u256d\u2500\u2500\u2500 \u2500\u2500\u2500\u2500\u256e\r\n\u2502 Windows features available. 
\u2502\r\n\u2502 \u2502\r\n\u2502 \u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e \u2502\r\n\u2502 \u2502 WindowsConsoleFeatures(vt=False, truecolor=False) \u2502 \u2502\r\n\u2502 \u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f \u2502\r\n\u2502 \u2502\r\n\u2502 truecolor = False \u2502\r\n\u2502 vt = False \u2502\r\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\r\n\u256d\u2500\u2500\u2500\u2500\u2500\u2500 Environment Variables \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\r\n\u2502 { \u2502\r\n\u2502 'TERM': 'unknown', \u2502\r\n\u2502 'COLORTERM': None, \u2502\r\n\u2502 'CLICOLOR': None, \u2502\r\n\u2502 'NO_COLOR': None, \u2502\r\n\u2502 'TERM_PROGRAM': None, \u2502\r\n\u2502 'COLUMNS': None, \u2502\r\n\u2502 'LINES': None, \u2502\r\n\u2502 'JUPYTER_COLUMNS': '200', \u2502\r\n\u2502 'JUPYTER_LINES': '50', \u2502\r\n\u2502 'JPY_PARENT_PID': None, \u2502\r\n\u2502 'VSCODE_VERBOSE_LOGGING': None \u2502\r\n\u2502 } \u2502\r\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\r\nplatform=\"Linux\"\r\n```\r\n\r\n\r\n", "pr_html_url": "https://github.com/Textualize/rich/pull/2424", "file_loc": {"base_commit": "2ee992b17ef5ff3c34f89545b0d57ad4690a64fc", "files": [{"path": "CHANGELOG.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [18]}}}, {"path": "CONTRIBUTORS.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [16]}}}, {"path": "rich/console.py", "status": "modified", "Loc": {"(None, '_is_jupyter', 511)": {"mod": [519]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["rich/console.py"], "doc": ["CONTRIBUTORS.md", "CHANGELOG.md"], "test": [], "config": [], "asset": []}}, {"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "65d7a9b9902ad85f27b17d759bd13b59c2afc474", "iss_has_pr": 1, "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/590", "iss_label": "", "title": "Please update README.md", "body": "I recently tried using it by following the steps in the README.md file and it does not work, please update the file.\r\n\r\nI keep getting this error when i try to export/set the API key\r\n\r\nopenai.error.AuthenticationError: No API key provided. You can set your API key in code using 'openai.api_key = ', or you can set the environment variable OPENAI_API_KEY=). 
If your API key is stored\r\nin a file, you can point the openai module at it with 'openai.api_key_path = '. You can generate API keys in the OpenAI web interface. See https://platform.openai.com/account/api-keys for details.", "pr_html_url": "https://github.com/AntonOsika/gpt-engineer/pull/592", "file_loc": {"base_commit": "65d7a9b9902ad85f27b17d759bd13b59c2afc474", "files": [{"path": "gpt_engineer/main.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [5]}, "(None, 'load_env_if_needed', 19)": {"add": [21]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["gpt_engineer/main.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "langflow-ai", "repo_name": "langflow", "base_commit": "2b6f70fdb4f0238b2cf6afdb6473a764e090060f", "iss_has_pr": 1, "iss_html_url": "https://github.com/langflow-ai/langflow/issues/226", "iss_label": "", "title": "Cannot import name 'BaseLanguageModel' from 'langchain.schema'", "body": "**Describe the bug**\r\nA clear and concise description of what the bug is.\r\n\r\n**Browser and Version**\r\n - N/A\r\n - macOS 13.3.1 (22E261)\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Install miniconda with Python 3.10.10\r\n2. Install langflow\r\n3. Run langflow\r\n4. See error:\r\nImportError: cannot import name 'BaseLanguageModel' from 'langchain.schema' (/Users/user/miniconda3/lib/python3.10/site-packages/langchain/schema.py)\r\n", "pr_html_url": "https://github.com/langflow-ai/langflow/pull/229", "file_loc": {"base_commit": "2b6f70fdb4f0238b2cf6afdb6473a764e090060f", "files": [{"path": "poetry.lock", "status": "modified", "Loc": {"(None, None, None)": {"mod": [706, 712, 713, 714, 715, 716, 717, 718, 719, 720, 721, 722, 723, 724, 725, 726, 727, 728, 729, 730, 731, 732, 733, 734, 735, 736, 737, 738, 739, 740, 741, 742, 743, 744, 745, 746, 747, 748, 749, 750, 751, 752, 753, 754, 755, 756, 757, 758, 759, 760, 761, 762, 1711, 1717, 1718, 3955, 3961, 3962, 3963, 3964, 3965, 3966, 3967, 3968, 3969, 3970, 3971, 3972, 3973, 3974, 3975, 3976, 3977, 3978, 3979, 3980, 3981, 3982, 3983, 3984, 3985, 3986, 3987, 3988, 3989, 3990, 3991, 3992, 3993, 3994, 3995, 3996, 3997, 3998, 3999, 4000, 4001, 4499, 4505, 4506]}}}, {"path": "pyproject.toml", "status": "modified", "Loc": {"(None, None, None)": {"mod": [3]}}}, {"path": "src/backend/langflow/interface/agents/custom.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [31]}}}, {"path": "src/backend/langflow/interface/agents/prebuilt.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [6]}}}, {"path": "src/backend/langflow/interface/tools/util.py", "status": "modified", "Loc": {"(None, 'get_func_tool_params', 8)": {"mod": [22, 24, 25, 26]}}}, {"path": "src/backend/langflow/interface/utils.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [6]}}}, {"path": "src/backend/langflow/template/nodes.py", "status": "modified", "Loc": {"('ChainFrontendNode', 'format_field', 536)": {"add": [561]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["src/backend/langflow/interface/agents/custom.py", "src/backend/langflow/interface/utils.py", "src/backend/langflow/template/nodes.py", "src/backend/langflow/interface/tools/util.py", 
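For the `BaseLanguageModel` import error above: the class moved between langchain releases, and the linked fix pins langchain via `pyproject.toml`/`poetry.lock`. A version-tolerant import shim illustrates the breakage; which release moved the symbol, and the fallback module path, are assumptions here rather than something the report confirms.

```python
# Hedged sketch: try the old location first, then a later 0.0.x layout.
# Pinning the langchain version (what the fix PR actually does) is the
# reliable approach; this shim only papers over the move.
try:
    from langchain.schema import BaseLanguageModel
except ImportError:
    from langchain.base_language import BaseLanguageModel
```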
"src/backend/langflow/interface/agents/prebuilt.py"], "doc": [], "test": [], "config": ["pyproject.toml", "poetry.lock"], "asset": []}}, {"organization": "All-Hands-AI", "repo_name": "OpenHands", "base_commit": "a0c5c8efe9cd85d19aef9e98d72345e3ae81f1b6", "iss_html_url": "https://github.com/All-Hands-AI/OpenHands/issues/834", "iss_label": "bug", "title": "Old node modules need cleared out (Cannot read properties of null (reading 'edgesOut')", "body": "\r\n#### Describe the bug\r\ntrying to run make build on the latest code and it ends up in this error:\r\n\r\nCannot read properties of null (reading 'edgesOut')\r\n\r\n#### Setup and configuration\r\n**Current version**:\r\n\r\n```\r\ncommit 229fa988c575c291cff6ffc1f9d15814d9d2a884 (HEAD -> main, origin/main, origin/HEAD)\r\nAuthor: Xingyao Wang \r\nDate: Sun Apr 7 01:04:17 2024 +0800\r\n\r\n remove seed=42 to fix #813 (#830)\r\n```\r\n\r\n\r\n**My config.toml and environment vars** (be sure to redact API keys):\r\n```\r\nLLM_API_KEY=\"ollama\"\r\nLLM_MODEL=\"ollama/dolphin-mixtral:latest\"\r\nLLM_EMBEDDING_MODEL=\"local\"\r\nLLM_BASE_URL=\"http://localhost:11434\"\r\nWORKSPACE_DIR=\"./workspace\"\r\n```\r\n\r\n**My model and agent** (you can see these settings in the UI):\r\n* Model:\r\n* Agent:\r\n\r\n**Commands I ran to install and run OpenDevin**:\r\n```\r\nmake build\r\n```\r\n\r\n**Steps to Reproduce**:\r\n1. pull latest code\r\n2. make build\r\n3.\r\n\r\n**Logs, error messages, and screenshots**:\r\n```\r\n142 http fetch GET 200 https://registry.npmjs.org/@swc%2fcore 6ms (cache hit)\r\n143 silly fetch manifest @swc/helpers@^0.5.0\r\n144 http fetch GET 200 https://registry.npmjs.org/@swc%2fhelpers 2ms (cache hit)\r\n145 silly fetch manifest postcss@^8.4.12\r\n146 http fetch GET 200 https://registry.npmjs.org/postcss 6ms (cache hit)\r\n147 silly fetch manifest typescript@>=4.1.0\r\n148 http fetch GET 200 https://registry.npmjs.org/typescript 50ms (cache hit)\r\n149 silly fetch manifest typescript@^4.9.5\r\n150 silly fetch manifest vitest@^0.29.2\r\n151 silly fetch manifest @vitest/browser@*\r\n152 silly fetch manifest vitest@1.4.0\r\n153 silly fetch manifest @types/node@^18.0.0 || >=20.0.0\r\n154 timing idealTree Completed in 4380ms\r\n155 timing command:install Completed in 4385ms\r\n156 verbose stack TypeError: Cannot read properties of null (reading 'edgesOut')\r\n156 verbose stack at #loadPeerSet (/home/atlas/.nvm/versions/node/v18.20.1/lib/node_modules/npm/node_modules/@npmcli/arborist/lib/arborist/build-ideal-tree.js:1313:38)\r\n156 verbose stack at async #buildDepStep (/home/atlas/.nvm/versions/node/v18.20.1/lib/node_modules/npm/node_modules/@npmcli/arborist/lib/arborist/build-ideal-tree.js:924:11)\r\n156 verbose stack at async Arborist.buildIdealTree (/home/atlas/.nvm/versions/node/v18.20.1/lib/node_modules/npm/node_modules/@npmcli/arborist/lib/arborist/build-ideal-tree.js:203:7)\r\n156 verbose stack at async Promise.all (index 1)\r\n156 verbose stack at async Arborist.reify (/home/atlas/.nvm/versions/node/v18.20.1/lib/node_modules/npm/node_modules/@npmcli/arborist/lib/arborist/reify.js:154:5)\r\n156 verbose stack at async Install.exec (/home/atlas/.nvm/versions/node/v18.20.1/lib/node_modules/npm/lib/commands/install.js:153:5)\r\n156 verbose stack at async module.exports (/home/atlas/.nvm/versions/node/v18.20.1/lib/node_modules/npm/lib/cli-entry.js:61:5)\r\n157 verbose cwd /home/atlas/OpenDevin/frontend\r\n158 verbose Linux 6.6.4-060604-generic\r\n159 verbose node v18.20.1\r\n160 verbose npm v10.5.0\r\n161 error Cannot read 
properties of null (reading 'edgesOut')\r\n162 verbose exit 1\r\n163 timing npm Completed in 4511ms\r\n164 verbose unfinished npm timer reify 1712423688807\r\n165 verbose unfinished npm timer reify:loadTrees 1712423688810\r\n166 verbose unfinished npm timer idealTree:buildDeps 1712423691257\r\n167 verbose unfinished npm timer idealTree:node_modules/.pnpm/@monaco-editor+react@4.6.0_monaco-editor@0.47.0_react-dom@18.2.0_react@18.2.0/node_modules/@monaco-editor/react 1712423692071\r\n168 verbose code 1\r\n169 error A complete log of this run can be found in: /home/atlas/.npm/_logs/2024-04-06T17_14_48_682Z-debug-0.log\r\n\r\n```\r\n#### Additional Context\r\n\r\n", "code": null, "pr_html_url": "https://github.com/All-Hands-AI/OpenHands/pull/867", "commit_html_url": null, "file_loc": {"base_commit": "a0c5c8efe9cd85d19aef9e98d72345e3ae81f1b6", "files": [{"path": "opendevin/logging.py", "status": "modified", "Loc": {"(None, 'get_llm_prompt_file_handler', 118)": {"mod": [123]}, "(None, 'get_llm_response_file_handler', 128)": {"mod": [133]}, "(None, None, None)": {"mod": [139, 144]}}}]}, "own_code_loc": [], "ass_file_loc": ["frontend/node_modules"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["opendevin/logging.py"], "doc": [], "test": [], "config": [], "asset": ["frontend/node_modules"]}}, {"organization": "localstack", "repo_name": "localstack", "base_commit": "2fe8440b619329891db150e45910e8aaad97b7ce", "iss_has_pr": 1, "iss_html_url": "https://github.com/localstack/localstack/issues/4987", "iss_label": "type: bug\nstatus: triage needed\naws:s3", "title": "bug: The Content-MD5 you specified did not match what we received", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Current Behavior\n\nI started getting the following exception\r\n\r\n```\r\ncom.amazonaws.services.s3.model.AmazonS3Exception: The Content-MD5 you specified did not match what we received. 
\r\n(Service: Amazon S3; Status Code: 400; Error Code: BadDigest; Request ID: null; S3 Extended Request ID: null; Proxy: null)\r\n```\r\n\r\nafter upgrade to `localstack/localstack-light:latest`, reverting back to `localstack/localstack-light:0.13.0` fixes it for me.\r\n\n\n### Expected Behavior\n\nNo exception.\n\n### How are you starting LocalStack?\n\nCustom (please describe below)\n\n### Steps To Reproduce\n\n#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)\r\n\r\n Using https://www.testcontainers.org/ to start the test.\r\n\r\n#### Client commands (e.g., AWS SDK code snippet, or sequence of \"awslocal\" commands)\r\n\r\n```\r\n@Bean\r\npublic AmazonS3 createAmazonS3() {\r\n final DockerImageName diName = DockerImageName.parse(\"localstack/localstack-light:latest\").asCompatibleSubstituteFor(\"localstack/localstack\");\r\n final LocalStackContainer localstack = new LocalStackContainer(diName)\r\n .withServices(S3);\r\n localstack.addEnv(\"AWS_ACCESS_KEY\", \"test\");\r\n localstack.addEnv(\"AWS_SECRET_ACCESS_KEY\", \"567\");\r\n localstack.addEnv(\"AWS_REGION\", \"us-east-1\");\r\n localstack.addEnv(\"LS_LOG\", \"trace\");\r\n localstack.start();\r\n return AmazonS3ClientBuilder\r\n .standard()\r\n .withEndpointConfiguration(localstack.getEndpointConfiguration(S3))\r\n .withCredentials(localstack.getDefaultCredentialsProvider())\r\n .build();\r\n }\r\n```\r\n\r\nthen calling `store` on `org.springframework.core.io.Resource` which is `SimpleStorageResource`.\r\n\n\n### Environment\n\n```markdown\n- OS: macOS Catalina 10.15.7\r\n- LocalStack: latest\n```\n\n\n### Anything else?\n\n`LS_LOG=trace` with `localstack/localstack-light:0.13.0`\r\n\r\n```\r\n2021-11-22T19:12:03:DEBUG:localstack.services.edge: IN(s3): \"GET /test-bucket-name/test-runtime.properties\" - headers: {'Remote-Addr': '172.17.0.1', 'Host': '127.0.0.1:52476', 'Amz-Sdk-Invocation-Id': '307eaac4-b1b6-d23e-96b1-a6dcff7d5414', 'Amz-Sdk-Request': 'attempt=1;max=4', 'Amz-Sdk-Retry': '0/0/500', 'Authorization': 'AWS4-HMAC-SHA256 Credential=accesskey/20211122/us-east-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-type;host;user-agent;x-amz-content-sha256;x-amz-date, Signature=72f59f88e302656e9e4c77308f1de7925f5b63aec3efec93dd9d5f32ae6a2b6d', 'Content-Type': 'application/octet-stream', 'User-Agent': 'aws-sdk-java/1.11.951 Mac_OS_X/10.15.7 OpenJDK_64-Bit_Server_VM/11.0.11+9-LTS java/11.0.11 scala/2.13.6 kotlin/1.5.31 vendor/Amazon.com_Inc.', 'X-Amz-Content-Sha256': 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855', 'X-Amz-Date': '20211122T191203Z', 'Content-Length': '0', 'Connection': 'Keep-Alive', 'X-Forwarded-For': '172.17.0.1, 127.0.0.1:52476', 'x-localstack-edge': 'http://127.0.0.1:52476'} - data: b''\r\n2021-11-22T19:12:03:DEBUG:localstack.services.edge: OUT(s3): \"GET /test-bucket-name/test-runtime.properties\" - status: 404 - response headers: {'x-amzn-requestid': 'UJFL1535CHVAFPN2JLH2ACBUQX026PCCCTNN0RSBF4PJHULNR1AR', 'Content-Type': 'application/xml; charset=utf-8', 'Content-Length': '207', 'Access-Control-Allow-Origin': '*', 'Server': 'Werkzeug/2.0.2 Python/3.8.12', 'Date': 'Mon, 22 Nov 2021 19:12:03 GMT', 'Last-Modified': 'Mon, 22 Nov 2021 19:12:03 GMT', 'x-amz-request-id': '3DAD4B54E96B3CA1', 'x-amz-id-2': 'MzRISOwyjmnup3DAD4B54E96B3CA17/JypPGXLh0OVFGcJaaO3KW/hRAqKOpIEEp', 'accept-ranges': 'bytes', 'content-language': 'en-US'} - response: b'\\n\\n NoSuchKey\\n The specified key does not exist.\\n \\n 
7a62c49f-347e-4fc4-9331-6e8eEXAMPLE\\n'\r\n2021-11-22T19:12:03:DEBUG:localstack.services.edge: OUT(s3): \"GET /test-bucket-name/test-runtime.properties\" - status: 404 - response headers: {'x-amzn-requestid': 'UJFL1535CHVAFPN2JLH2ACBUQX026PCCCTNN0RSBF4PJHULNR1AR', 'Content-Type': 'application/xml; charset=utf-8', 'Content-Length': '207', 'Access-Control-Allow-Origin': '*', 'Server': 'Werkzeug/2.0.2 Python/3.8.12', 'Date': 'Mon, 22 Nov 2021 19:12:03 GMT', 'Last-Modified': 'Mon, 22 Nov 2021 19:12:03 GMT', 'x-amz-request-id': '3DAD4B54E96B3CA1', 'x-amz-id-2': 'MzRISOwyjmnup3DAD4B54E96B3CA17/JypPGXLh0OVFGcJaaO3KW/hRAqKOpIEEp', 'accept-ranges': 'bytes', 'content-language': 'en-US'} - response: b'\\n\\n NoSuchKey\\n The specified key does not exist.\\n \\n 7a62c49f-347e-4fc4-9331-6e8eEXAMPLE\\n'\r\n2021-11-22T19:12:03:DEBUG:localstack.services.edge: IN(s3): \"PUT /test-bucket-name/test-runtime.properties\" - headers: {'Remote-Addr': '172.17.0.1', 'Host': '127.0.0.1:52476', 'Amz-Sdk-Invocation-Id': '8a6682d3-1481-f538-4ed4-4ac03c4e4ec3', 'Amz-Sdk-Request': 'attempt=1;max=4', 'Amz-Sdk-Retry': '0/0/500', 'Authorization': 'AWS4-HMAC-SHA256 Credential=accesskey/20211122/us-east-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-length;content-md5;content-type;host;user-agent;x-amz-content-sha256;x-amz-date;x-amz-decoded-content-length, Signature=282e9062c19a5a575d49902c3c642928039a210c8d5eb54de069655f10ef94ea', 'Content-Md5': 'pX8KKuGXS1f2VTcuJpqjkw==', 'Content-Type': 'application/octet-stream', 'User-Agent': 'aws-sdk-java/1.11.951 Mac_OS_X/10.15.7 OpenJDK_64-Bit_Server_VM/11.0.11+9-LTS java/11.0.11 scala/2.13.6 kotlin/1.5.31 vendor/Amazon.com_Inc.', 'X-Amz-Content-Sha256': 'STREAMING-AWS4-HMAC-SHA256-PAYLOAD', 'X-Amz-Date': '20211122T191203Z', 'X-Amz-Decoded-Content-Length': '147', 'Content-Length': '320', 'Connection': 'Keep-Alive', 'Expect': '100-continue', 'X-Forwarded-For': '172.17.0.1, 127.0.0.1:52476', 'x-localstack-edge': 'http://127.0.0.1:52476'} - data: b'93;chunk-signature=68bf4c0366a3d4c963efb7eaf3426c439ac26f9ca077b6c71e1bd60de69f0259\\r\\n#20211122+0100\\n#Mon Nov 22 20:12:03 CET 2021\\nlast.sync.url.test-space-key=2822a50f-4992-425a-b8fb-923735a9ddff317e3479-5907-46cf-b33a-60da9709274f\\n\\r\\n0;chunk-signature=bf3a6ecc9d3913d2ad6618d420c1db6abefb4f452469693ffc5bbd038ad2f2f0\\r\\n\\r\\n'\r\n2021-11-22T19:12:03:DEBUG:localstack.services.edge: OUT(s3): \"PUT /test-bucket-name/test-runtime.properties\" - status: 200 - response headers: {'ETag': '\"a57f0a2ae1974b57f655372e269aa393\"', 'last-modified': 'Mon, 22 Nov 2021 19:12:03 GMT', 'Content-Length': '0', 'x-amzn-requestid': '1EYVT7AJ5TJ3JH1SK3ZVTHBBB860EIC4FTOP9VPHCSHR967AFFAP', 'Content-Type': 'text/html; charset=utf-8', 'Access-Control-Allow-Origin': '*', 'Server': 'Werkzeug/2.0.2 Python/3.8.12', 'Date': 'Mon, 22 Nov 2021 19:12:03 GMT', 'Location': '/test-bucket-name', 'x-amz-request-id': '5BC855D1EAAEFD00', 'x-amz-id-2': 'MzRISOwyjmnup5BC855D1EAAEFD007/JypPGXLh0OVFGcJaaO3KW/hRAqKOpIEEp'} - response: b''\r\n2021-11-22T19:12:03:DEBUG:localstack.services.edge: OUT(s3): \"PUT /test-bucket-name/test-runtime.properties\" - status: 200 - response headers: {'ETag': '\"a57f0a2ae1974b57f655372e269aa393\"', 'last-modified': 'Mon, 22 Nov 2021 19:12:03 GMT', 'Content-Length': '0', 'x-amzn-requestid': '1EYVT7AJ5TJ3JH1SK3ZVTHBBB860EIC4FTOP9VPHCSHR967AFFAP', 'Content-Type': 'text/html; charset=utf-8', 'Access-Control-Allow-Origin': '*', 'Server': 'Werkzeug/2.0.2 Python/3.8.12', 'Date': 'Mon, 22 Nov 2021 
19:12:03 GMT', 'Location': '/test-bucket-name', 'x-amz-request-id': '5BC855D1EAAEFD00', 'x-amz-id-2': 'MzRISOwyjmnup5BC855D1EAAEFD007/JypPGXLh0OVFGcJaaO3KW/hRAqKOpIEEp'} - response: b''\r\n```\r\n\r\n----\r\n\r\n`LS_LOG=trace` with `localstack/localstack-light:latest`\r\n\r\n```\r\n2021-11-22T19:10:42.097:DEBUG:localstack.services.edge: IN(s3): \"GET /test-bucket-name/test-runtime.properties\" - headers: {'Remote-Addr': '172.17.0.1', 'Host': '127.0.0.1:52438', 'Amz-Sdk-Invocation-Id': '3f452c53-2a97-15f7-8f44-96c3b3d4aa27', 'Amz-Sdk-Request': 'attempt=1;max=4', 'Amz-Sdk-Retry': '0/0/500', 'Authorization': 'AWS4-HMAC-SHA256 Credential=accesskey/20211122/us-east-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-type;host;user-agent;x-amz-content-sha256;x-amz-date, Signature=a8c7d475d338c92c01eca9638e858e8f0e84ae73498435a55520ee04ff655476', 'Content-Type': 'application/octet-stream', 'User-Agent': 'aws-sdk-java/1.11.951 Mac_OS_X/10.15.7 OpenJDK_64-Bit_Server_VM/11.0.11+9-LTS java/11.0.11 scala/2.13.6 kotlin/1.5.31 vendor/Amazon.com_Inc.', 'X-Amz-Content-Sha256': 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855', 'X-Amz-Date': '20211122T191042Z', 'Content-Length': '0', 'Connection': 'Keep-Alive', 'X-Forwarded-For': '172.17.0.1, 127.0.0.1:52438', 'x-localstack-edge': 'http://127.0.0.1:52438'} - data: b''\r\n2021-11-22T19:10:42.118:DEBUG:localstack.services.edge: OUT(s3): \"GET /test-bucket-name/test-runtime.properties\" - status: 404 - response headers: {'x-amzn-requestid': 'RMJVBYKAH478ETR8T1G9DQ4TUHEIKKB96892NRKM3PYQYRVUPI8M', 'Content-Type': 'application/xml; charset=utf-8', 'Content-Length': '207', 'Access-Control-Allow-Origin': '*', 'Server': 'Werkzeug/2.0.2 Python/3.8.12', 'Date': 'Mon, 22 Nov 2021 19:10:42 GMT', 'Last-Modified': 'Mon, 22 Nov 2021 19:10:42 GMT', 'x-amz-request-id': '7D83EFCB204B6EC9', 'x-amz-id-2': 'MzRISOwyjmnup7D83EFCB204B6EC97/JypPGXLh0OVFGcJaaO3KW/hRAqKOpIEEp', 'accept-ranges': 'bytes', 'content-language': 'en-US'} - response: b'\\n\\n NoSuchKey\\n The specified key does not exist.\\n \\n 7a62c49f-347e-4fc4-9331-6e8eEXAMPLE\\n'\r\n2021-11-22T19:10:42.119:DEBUG:localstack.services.edge: OUT(s3): \"GET /test-bucket-name/test-runtime.properties\" - status: 404 - response headers: {'x-amzn-requestid': 'RMJVBYKAH478ETR8T1G9DQ4TUHEIKKB96892NRKM3PYQYRVUPI8M', 'Content-Type': 'application/xml; charset=utf-8', 'Content-Length': '207', 'Access-Control-Allow-Origin': '*', 'Server': 'Werkzeug/2.0.2 Python/3.8.12', 'Date': 'Mon, 22 Nov 2021 19:10:42 GMT', 'Last-Modified': 'Mon, 22 Nov 2021 19:10:42 GMT', 'x-amz-request-id': '7D83EFCB204B6EC9', 'x-amz-id-2': 'MzRISOwyjmnup7D83EFCB204B6EC97/JypPGXLh0OVFGcJaaO3KW/hRAqKOpIEEp', 'accept-ranges': 'bytes', 'content-language': 'en-US'} - response: b'\\n\\n NoSuchKey\\n The specified key does not exist.\\n \\n 7a62c49f-347e-4fc4-9331-6e8eEXAMPLE\\n'\r\n2021-11-22T19:10:45.164:DEBUG:localstack.services.edge: IN(s3): \"PUT /test-bucket-name/test-runtime.properties\" - headers: {'Remote-Addr': '172.17.0.1', 'Host': '127.0.0.1:52438', 'Amz-Sdk-Invocation-Id': '3446d18f-08a6-2432-a4dc-f79846c9655e', 'Amz-Sdk-Request': 'attempt=1;max=4', 'Amz-Sdk-Retry': '0/0/500', 'Authorization': 'AWS4-HMAC-SHA256 Credential=accesskey/20211122/us-east-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-length;content-md5;content-type;host;user-agent;x-amz-content-sha256;x-amz-date;x-amz-decoded-content-length, 
Signature=56f95a44e31918932bc863893064a1fcafbf4066d44bc44c8d078cf420316011', 'Content-Md5': 'Xi4HEV9K00jfK4+6lHxpDA==', 'Content-Type': 'application/octet-stream', 'User-Agent': 'aws-sdk-java/1.11.951 Mac_OS_X/10.15.7 OpenJDK_64-Bit_Server_VM/11.0.11+9-LTS java/11.0.11 scala/2.13.6 kotlin/1.5.31 vendor/Amazon.com_Inc.', 'X-Amz-Content-Sha256': 'STREAMING-AWS4-HMAC-SHA256-PAYLOAD', 'X-Amz-Date': '20211122T191045Z', 'X-Amz-Decoded-Content-Length': '147', 'Content-Length': '320', 'Connection': 'Keep-Alive', 'Expect': '100-continue', 'X-Forwarded-For': '172.17.0.1, 127.0.0.1:52438', 'x-localstack-edge': 'http://127.0.0.1:52438'} - data: b'93;chunk-signature=5be6b2d473e96bb9f297444da60bdf0ff8f5d2e211e1d551b3cf3646c0946641\\r\\n#20211122+0100\\n#Mon Nov 22 20:10:44 CET 2021\\nlast.sync.url.test-space-key=2822a50f-4992-425a-b8fb-923735a9ddff317e3479-5907-46cf-b33a-60da9709274f\\n\\r\\n0;chunk-signature=bd5c830b94346b57ddc8805ba26c44a122256c207014433bf6579b0985f21df7\\r\\n\\r\\n'\r\n2021-11-22T19:10:45.167:DEBUG:localstack.services.edge: OUT(s3): \"PUT /test-bucket-name/test-runtime.properties\" - status: 400 - response headers: {'Content-Type': 'application/xml', 'Location': '/test-bucket-name', 'Last-Modified': 'Mon, 22 Nov 2021 19:10:45 GMT', 'x-amz-request-id': '20278550A22502FB', 'x-amz-id-2': 'MzRISOwyjmnup20278550A22502FB7/JypPGXLh0OVFGcJaaO3KW/hRAqKOpIEEp', 'Content-Length': '156'} - response: \r\nBadDigestThe Content-MD5 you specified did not match what we received.\r\n2021-11-22T19:10:45.168:DEBUG:localstack.services.edge: OUT(s3): \"PUT /test-bucket-name/test-runtime.properties\" - status: 400 - response headers: {'Content-Type': 'application/xml', 'Location': '/test-bucket-name', 'Last-Modified': 'Mon, 22 Nov 2021 19:10:45 GMT', 'x-amz-request-id': '20278550A22502FB', 'x-amz-id-2': 'MzRISOwyjmnup20278550A22502FB7/JypPGXLh0OVFGcJaaO3KW/hRAqKOpIEEp', 'Content-Length': '156'} - response: \r\nBadDigestThe Content-MD5 you specified did not match what we received.\r\n```", "pr_html_url": "https://github.com/localstack/localstack/pull/5001", "file_loc": {"base_commit": "2fe8440b619329891db150e45910e8aaad97b7ce", "files": [{"path": "localstack/services/s3/s3_listener.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [4, 883], "mod": [61, 62]}, "(None, 'check_content_md5', 884)": {"add": [884]}}}, {"path": "tests/integration/test_s3.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0, 2, 51]}, "(None, 'test_cors_with_allowed_origins', 2662)": {"add": [2779]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["localstack/services/s3/s3_listener.py"], "doc": [], "test": ["tests/integration/test_s3.py"], "config": [], "asset": []}}, {"organization": "localstack", "repo_name": "localstack", "base_commit": "8c9d9b0475247f667a0f184f2fbc6d66b955749f", "iss_has_pr": 1, "iss_html_url": "https://github.com/localstack/localstack/issues/11696", "iss_label": "type: bug\nstatus: resolved/fixed\naws:apigateway", "title": "bug: API Gateway does not persist correctly when you restart the localstack docker container", "body": "### Is there an existing issue for this?\r\n\r\n- [X] I have searched the existing issues\r\n\r\n### Current Behavior\r\n\r\nI have a working api gateway created with localstack. 
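The fix linked in the BadDigest report above adds a `check_content_md5` helper in `localstack/services/s3/s3_listener.py`; the body below is a sketch of the idea, not the shipped code. The point is that Content-MD5 must be validated against the decoded payload, because hashing the raw `aws-chunked` wire bytes (chunk sizes and chunk signatures included) is exactly what produces the mismatch in the trace.

```python
import base64
import hashlib

def check_content_md5(decoded_body: bytes, content_md5_header: str) -> bool:
    """Compare the client's Content-MD5 header (base64 of the MD5 digest)
    against the decoded request payload."""
    digest = base64.b64encode(hashlib.md5(decoded_body).digest()).decode()
    return digest == content_md5_header

payload = b"last.sync.url.test-space-key=..."  # decoded body (truncated here)
wire = b"93;chunk-signature=5be6...\r\n" + payload + b"\r\n0;chunk-signature=bd5c...\r\n\r\n"
header = base64.b64encode(hashlib.md5(payload).digest()).decode()
print(check_content_md5(payload, header))  # True: digest over decoded bytes
print(check_content_md5(wire, header))     # False: the pre-fix failure mode
```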
When I restart the container and try to query the same url, I get this message:\r\n`{\"message\": \"The API id '0e0cf92f' does not correspond to a deployed API Gateway API\"}`.\r\n\r\n# Details:\r\nFirst I create my API and confirm it works:\r\n```\r\n$ awslocal apigatewayv2 get-apis\r\n{\r\n \"Items\": [\r\n {\r\n \"ApiEndpoint\": \"http://0e0cf92f.execute-api.localhost.localstack.cloud:4566\",\r\n \"ApiId\": \"0e0cf92f\",\r\n \"ApiKeySelectionExpression\": \"$request.header.x-api-key\",\r\n \"CorsConfiguration\": {\r\n \"AllowHeaders\": [\r\n \"*\"\r\n ],\r\n \"AllowMethods\": [\r\n \"*\"\r\n ],\r\n \"AllowOrigins\": [\r\n \"*\"\r\n ],\r\n \"ExposeHeaders\": [\r\n \"*\"\r\n ]\r\n },\r\n \"CreatedDate\": \"2024-10-16T05:24:49.452000+00:00\",\r\n \"DisableExecuteApiEndpoint\": false,\r\n \"Name\": \"XpedigoAPI_v2\",\r\n \"ProtocolType\": \"HTTP\",\r\n \"RouteSelectionExpression\": \"$request.method $request.path\",\r\n \"Tags\": {},\r\n \"Version\": \"2024-09-25 01:18:37UTC\"\r\n }\r\n ]\r\n}\r\n```\r\n```\r\n$ awslocal apigatewayv2 get-stages --api-id=0e0cf92f\r\n{\r\n \"Items\": [\r\n {\r\n \"CreatedDate\": \"2024-10-16T05:24:49.524619+00:00\",\r\n \"DefaultRouteSettings\": {\r\n \"DetailedMetricsEnabled\": false\r\n },\r\n \"DeploymentId\": \"4d3d207f\",\r\n \"LastUpdatedDate\": \"2024-10-16T05:24:49.524619+00:00\",\r\n \"RouteSettings\": {},\r\n \"StageName\": \"local\",\r\n \"StageVariables\": {\r\n \"baseurl\": \"alb-localstack-bdowson.ngrok.io\",\r\n \"env\": \"local\"\r\n },\r\n \"Tags\": {}\r\n }\r\n ]\r\n}\r\n```\r\n```\r\n$ awslocal apigatewayv2 get-deployments --api-id=0e0cf92f\r\n{\r\n \"Items\": [\r\n {\r\n \"AutoDeployed\": false,\r\n \"CreatedDate\": \"2024-10-16T05:24:49.529068+00:00\",\r\n \"DeploymentId\": \"4d3d207f\",\r\n \"DeploymentStatus\": \"DEPLOYED\"\r\n }\r\n ]\r\n}\r\n```\r\nConfirm it works:\r\n```\r\n$ curl -v https://0e0cf92f.execute-api.localhost.localstack.cloud:4566/local/accounts/health\r\n* Trying 127.0.0.1:4566...\r\n* TCP_NODELAY set\r\n* Connected to 0e0cf92f.execute-api.localhost.localstack.cloud (127.0.0.1) port 4566 (#0)\r\n* ALPN, offering h2\r\n* ALPN, offering http/1.1\r\n* successfully set certificate verify locations:\r\n* CAfile: /etc/ssl/certs/ca-certificates.crt\r\n CApath: /etc/ssl/certs\r\n* TLSv1.3 (OUT), TLS handshake, Client hello (1):\r\n* TLSv1.3 (IN), TLS handshake, Server hello (2):\r\n* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):\r\n* TLSv1.3 (IN), TLS handshake, Certificate (11):\r\n* TLSv1.3 (IN), TLS handshake, CERT verify (15):\r\n* TLSv1.3 (IN), TLS handshake, Finished (20):\r\n* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):\r\n* TLSv1.3 (OUT), TLS handshake, Finished (20):\r\n* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384\r\n* ALPN, server accepted to use h2\r\n* Server certificate:\r\n* subject: CN=localhost.localstack.cloud\r\n* start date: Sep 6 00:00:00 2024 GMT\r\n* expire date: Dec 5 23:59:59 2024 GMT\r\n* subjectAltName: host \"0e0cf92f.execute-api.localhost.localstack.cloud\" matched cert's \"*.execute-api.localhost.localstack.cloud\"\r\n* issuer: C=AT; O=ZeroSSL; CN=ZeroSSL RSA Domain Secure Site CA\r\n* SSL certificate verify ok.\r\n* Using HTTP2, server supports multi-use\r\n* Connection state changed (HTTP/2 confirmed)\r\n* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0\r\n* Using Stream ID: 1 (easy handle 0x5b8d78082650)\r\n> GET /local/accounts/health HTTP/2\r\n> Host: 0e0cf92f.execute-api.localhost.localstack.cloud:4566\r\n> user-agent: 
curl/7.68.0\r\n> accept: */*\r\n> \r\n* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):\r\n* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):\r\n* old SSL session ID is stale, removing\r\n* Connection state changed (MAX_CONCURRENT_STREAMS == 100)!\r\n< HTTP/2 200 \r\n< server: TwistedWeb/24.3.0\r\n< date: Wed, 16 Oct 2024 05:25:16 GMT\r\n< content-type: text/html; charset=UTF-8\r\n< cache-control: private, must-revalidate\r\n< expires: -1\r\n< pragma: no-cache\r\n< x-powered-by: PHP/8.1.9RC1\r\n< content-length: 2\r\n< apigw-requestid: 5f9a3aa7\r\n< \r\n* Connection #0 to host 0e0cf92f.execute-api.localhost.localstack.cloud left intact\r\nOK\r\n```\r\n\r\nNow I stop localstack, and restart it with `docker-compose up`. The api gateway no longer works correctly:\r\n```\r\n$ curl -v https://0e0cf92f.execute-api.localhost.localstack.cloud:4566/local/accounts/health\r\n* Trying 127.0.0.1:4566...\r\n* TCP_NODELAY set\r\n* Connected to 0e0cf92f.execute-api.localhost.localstack.cloud (127.0.0.1) port 4566 (#0)\r\n* ALPN, offering h2\r\n* ALPN, offering http/1.1\r\n* successfully set certificate verify locations:\r\n* CAfile: /etc/ssl/certs/ca-certificates.crt\r\n CApath: /etc/ssl/certs\r\n* TLSv1.3 (OUT), TLS handshake, Client hello (1):\r\n* TLSv1.3 (IN), TLS handshake, Server hello (2):\r\n* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):\r\n* TLSv1.3 (IN), TLS handshake, Certificate (11):\r\n* TLSv1.3 (IN), TLS handshake, CERT verify (15):\r\n* TLSv1.3 (IN), TLS handshake, Finished (20):\r\n* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):\r\n* TLSv1.3 (OUT), TLS handshake, Finished (20):\r\n* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384\r\n* ALPN, server accepted to use h2\r\n* Server certificate:\r\n* subject: CN=localhost.localstack.cloud\r\n* start date: Sep 6 00:00:00 2024 GMT\r\n* expire date: Dec 5 23:59:59 2024 GMT\r\n* subjectAltName: host \"0e0cf92f.execute-api.localhost.localstack.cloud\" matched cert's \"*.execute-api.localhost.localstack.cloud\"\r\n* issuer: C=AT; O=ZeroSSL; CN=ZeroSSL RSA Domain Secure Site CA\r\n* SSL certificate verify ok.\r\n* Using HTTP2, server supports multi-use\r\n* Connection state changed (HTTP/2 confirmed)\r\n* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0\r\n* Using Stream ID: 1 (easy handle 0x6550ac6c5650)\r\n> GET /local/accounts/health HTTP/2\r\n> Host: 0e0cf92f.execute-api.localhost.localstack.cloud:4566\r\n> user-agent: curl/7.68.0\r\n> accept: */*\r\n> \r\n* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):\r\n* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):\r\n* old SSL session ID is stale, removing\r\n* Connection state changed (MAX_CONCURRENT_STREAMS == 100)!\r\n< HTTP/2 404 \r\n< server: TwistedWeb/24.3.0\r\n< date: Wed, 16 Oct 2024 05:29:09 GMT\r\n< content-type: application/json\r\n< content-length: 86\r\n< \r\n* Connection #0 to host 0e0cf92f.execute-api.localhost.localstack.cloud left intact\r\n{\"message\": \"The API id '0e0cf92f' does not correspond to a deployed API Gateway API\"}\r\n```\r\n\r\nBut the configurations are all the same as before:\r\n```\r\n$ awslocal apigatewayv2 get-apis\r\n{\r\n \"Items\": [\r\n {\r\n \"ApiEndpoint\": \"http://0e0cf92f.execute-api.localhost.localstack.cloud:4566\",\r\n \"ApiId\": \"0e0cf92f\",\r\n \"ApiKeySelectionExpression\": \"$request.header.x-api-key\",\r\n \"CorsConfiguration\": {\r\n \"AllowHeaders\": [\r\n \"*\"\r\n ],\r\n \"AllowMethods\": [\r\n \"*\"\r\n ],\r\n \"AllowOrigins\": [\r\n \"*\"\r\n ],\r\n \"ExposeHeaders\": 
[\r\n \"*\"\r\n ]\r\n },\r\n \"CreatedDate\": \"2024-10-16T05:24:49.452000+00:00\",\r\n \"DisableExecuteApiEndpoint\": false,\r\n \"Name\": \"XpedigoAPI_v2\",\r\n \"ProtocolType\": \"HTTP\",\r\n \"RouteSelectionExpression\": \"$request.method $request.path\",\r\n \"Tags\": {},\r\n \"Version\": \"2024-09-25 01:18:37UTC\"\r\n }\r\n ]\r\n}\r\n\r\n$ awslocal apigatewayv2 get-deployments --api-id=0e0cf92f\r\n{\r\n \"Items\": [\r\n {\r\n \"AutoDeployed\": false,\r\n \"CreatedDate\": \"2024-10-16T05:24:49.529068+00:00\",\r\n \"DeploymentId\": \"4d3d207f\",\r\n \"DeploymentStatus\": \"DEPLOYED\"\r\n }\r\n ]\r\n}\r\n\r\n$ awslocal apigatewayv2 get-deployments --api-id=0e0cf92f\r\n{\r\n \"Items\": [\r\n {\r\n \"CreatedDate\": \"2024-10-16T05:24:49.524619+00:00\",\r\n \"DefaultRouteSettings\": {\r\n \"DetailedMetricsEnabled\": false\r\n },\r\n \"DeploymentId\": \"4d3d207f\",\r\n \"LastUpdatedDate\": \"2024-10-16T05:24:49.524619+00:00\",\r\n \"RouteSettings\": {},\r\n \"StageName\": \"local\",\r\n \"StageVariables\": {\r\n \"baseurl\": \"alb-localstack-bdowson.ngrok.io\",\r\n \"env\": \"local\"\r\n },\r\n \"Tags\": {}\r\n }\r\n ]\r\n}\r\n```\r\n\r\n\r\n### Expected Behavior\r\n\r\nAPI gateway should work correctly even after a localstack container restart.\r\n\r\n### How are you starting LocalStack?\r\n\r\nWith a docker-compose file\r\n\r\n### Steps To Reproduce\r\n\r\ndocker-compose.yml:\r\n```\r\nlocalstack:\r\n container_name: localstack\r\n image: localstack/localstack-pro:latest\r\n ports:\r\n - 4566:4566\r\n - 4510-4559:4510-4559\r\n environment:\r\n - DOCKER_HOST=unix:///var/run/docker.sock\r\n - DEBUG=1\r\n - PERSISTENCE=1\r\n - SNAPSHOT_LOAD_STRATEGY=ON_STARTUP\r\n - LOCALSTACK_API_KEY=${LOCALSTACK_API_KEY}\r\n - PROVIDER_OVERRIDE_APIGATEWAY=next_gen\r\n networks:\r\n app_network:\r\n ipv4_address: 10.0.2.20\r\n volumes:\r\n - \"/var/run/docker.sock:/var/run/docker.sock\"\r\n - \"/localstack-data:/var/lib/localstack\"\r\n```\r\n\r\n1. `docker-compose up localstack`\r\n2. Import API Gateway with `awslocal apigatewayv2 import-api --body file://t.json`\r\n3. Create stage with `awslocal apigatewayv2 create-stage --api-id 54ae753d --stage-name local --auto-deploy`\r\n4. Confirm it works with `curl -v https://[gateway url]/local/whatever`\r\n5. Stop localstack\r\n6. Run `docker-compose up localstack` again\r\n7. Try and curl the api again and you will get an error\r\n\r\n### Environment\r\n\r\n```markdown\r\n- OS: Ubuntu 20.04.5 LTS\r\n- LocalStack: \r\n LocalStack version: 3.8.2.dev33\r\n LocalStack Docker image sha: localstack/localstack-pro@sha256:b533e1bcfbe8f5462483725276a0e7f8fbd9ded32b1be2dac5ec9cee5e822023\r\n LocalStack build date: 2024-10-15\r\n LocalStack build git hash: 318e1adc\r\n```\r\n\r\n\r\n### Anything else?\r\n\r\nAfter this error appears, even if I delete the API and recreate it I still get the message `{\"message\": \"The API id 'xxxx' does not correspond to a deployed API Gateway API\"}`. 
The only way for me to resolve it is to delete my local localstack snapshot folder and rebuild everything.", "pr_html_url": "https://github.com/localstack/localstack/pull/11702", "file_loc": {"base_commit": "8c9d9b0475247f667a0f184f2fbc6d66b955749f", "files": [{"path": "localstack-core/localstack/services/apigateway/next_gen/execute_api/router.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [12]}, "('ApiGatewayEndpoint', None, 34)": {"mod": [41]}, "('ApiGatewayEndpoint', '__init__', 41)": {"mod": [44, 45, 46]}}}, {"path": "localstack-core/localstack/services/apigateway/next_gen/provider.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [21]}, "('ApigatewayNextGenProvider', '__init__', 46)": {"mod": [50, 51]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["localstack-core/localstack/services/apigateway/next_gen/execute_api/router.py", "localstack-core/localstack/services/apigateway/next_gen/provider.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "ef5304961edbc194148bc5fbdb4591d2f27c2cfc", "iss_has_pr": 1, "iss_html_url": "https://github.com/geekan/MetaGPT/issues/795", "iss_label": "", "title": "Human Engagement does not take effect", "body": "![image](https://github.com/geekan/MetaGPT/assets/152952909/51eaddc4-fd89-49a6-a9d2-f4617c1a6f7b)\r\nI tried running the blog tutorial's source code for Human Engagement. When execution reaches\r\nteam.hire(\r\n [\r\n SimpleCoder(),\r\n SimpleTester(),\r\n SimpleReviewer(),\r\n SimpleReviewer(is_human=True)\r\n ]\r\n )\r\n\r\nthe SimpleReviewer(is_human=True) in it does not stop the process to collect user input; instead the system directly uses the default prompt\r\n PROMPT_TEMPLATE: str = \"\"\"\r\n Context: {context}\r\n Review the test cases and provide one critical comments:\r\n \"\"\"\r\n\r\n name: str = \"SimpleWriteReview\"\r\nto query the LLM.", "pr_html_url": "https://github.com/geekan/MetaGPT/pull/717", "file_loc": {"base_commit": "ef5304961edbc194148bc5fbdb4591d2f27c2cfc", "files": [{"path": "metagpt/roles/role.py", "status": "modified", "Loc": {"('Role', '__init__', 160)": {"add": [168]}}}, {"path": "tests/metagpt/roles/test_role.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [5, 14]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["metagpt/roles/role.py"], "doc": [], "test": ["tests/metagpt/roles/test_role.py"], "config": [], "asset": []}}, {"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "f201b2f5f32c2d48eab6632bf103e9b3a92fc999", "iss_has_pr": 1, "iss_html_url": "https://github.com/geekan/MetaGPT/issues/1213", "iss_label": "", "title": "RAG Faiss AssertionError", "body": "**Environment information**\r\n\r\n\r\n- LLM type and model name: ollama, nomic-embed-text\r\n- System version: win 11\r\n- Python version: 3.9\r\n- MetaGPT version or branch: 0.8\r\n\r\n**Bug description**\r\n\r\nRunning the code below\r\n```\r\nimport asyncio\r\nfrom metagpt.rag.engines import SimpleEngine\r\nfrom metagpt.rag.schema import FAISSRetrieverConfig\r\nfrom metagpt.const import EXAMPLE_DATA_PATH\r\nDOC_PATH = EXAMPLE_DATA_PATH / \"rag/travel.txt\"\r\nasync def main():\r\n engine = SimpleEngine.from_docs(input_files=[DOC_PATH], retriever_configs=[FAISSRetrieverConfig()])\r\n answer = await engine.aquery(\"What does Bob like?\")\r\n print(answer)\r\n\r\nif __name__ == \"__main__\":\r\n asyncio.run(main())\r\n```\r\nraises an AssertionError\r\n```\r\nTraceback (most recent call last):\r\n File \"E:\\MyTask\\Metagpt\\MetaGPT-0.8.0\\examples\\rag_test.py\", line 25, in \r\n asyncio.run(main())\r\n File \"D:\\Dev_Software\\Anaconda\\envs\\metagpt\\lib\\asyncio\\runners.py\", line 44, in run\r\n return loop.run_until_complete(main)\r\n File \"D:\\Dev_Software\\Anaconda\\envs\\metagpt\\lib\\asyncio\\base_events.py\", line 649, in run_until_complete\r\n return future.result()\r\n File \"E:\\MyTask\\Metagpt\\MetaGPT-0.8.0\\examples\\rag_test.py\", line 15, in main\r\n SimpleEngine.from_docs(input_files=[DOC_PATH], retriever_configs=retriever_configs).persist(persist_dir)\r\n File \"e:\\mytask\\metagpt\\metagpt-0.8.0\\metagpt\\rag\\engines\\simple.py\", line 111, in from_docs\r\n return cls._from_index(index, llm=llm, retriever_configs=retriever_configs, ranker_configs=ranker_configs)\r\n File \"e:\\mytask\\metagpt\\metagpt-0.8.0\\metagpt\\rag\\engines\\simple.py\", line 211, in _from_index\r\n retriever = get_retriever(configs=retriever_configs, index=index) # Default index.as_retriever\r\n File \"e:\\mytask\\metagpt\\metagpt-0.8.0\\metagpt\\rag\\factories\\retriever.py\", line 52, in get_retriever\r\n retrievers = super().get_instances(configs, **kwargs)\r\n File \"e:\\mytask\\metagpt\\metagpt-0.8.0\\metagpt\\rag\\factories\\base.py\", line 18, in get_instances\r\n return [self.get_instance(key, **kwargs) for key in keys]\r\n File 
\"e:\\mytask\\metagpt\\metagpt-0.8.0\\metagpt\\rag\\factories\\base.py\", line 18, in \r\n return [self.get_instance(key, **kwargs) for key in keys]\r\n File \"e:\\mytask\\metagpt\\metagpt-0.8.0\\metagpt\\rag\\factories\\base.py\", line 45, in get_instance\r\n return creator(key, **kwargs)\r\n File \"e:\\mytask\\metagpt\\metagpt-0.8.0\\metagpt\\rag\\factories\\retriever.py\", line 61, in _create_faiss_retriever\r\n config.index = self._build_index_from_vector_store(config, vector_store, **kwargs)\r\n File \"e:\\mytask\\metagpt\\metagpt-0.8.0\\metagpt\\rag\\factories\\retriever.py\", line 93, in _build_index_from_vector_store\r\n new_index = VectorStoreIndex(\r\n File \"D:\\Dev_Software\\Anaconda\\envs\\metagpt\\lib\\site-packages\\llama_index\\core\\indices\\vector_store\\base.py\", line 74, in __init__\r\n super().__init__(\r\n File \"D:\\Dev_Software\\Anaconda\\envs\\metagpt\\lib\\site-packages\\llama_index\\core\\indices\\base.py\", line 91, in __init__\r\n index_struct = self.build_index_from_nodes(\r\n File \"D:\\Dev_Software\\Anaconda\\envs\\metagpt\\lib\\site-packages\\llama_index\\core\\indices\\vector_store\\base.py\", line 307, in build_index_from_nodes\r\n return self._build_index_from_nodes(nodes, **insert_kwargs)\r\n File \"D:\\Dev_Software\\Anaconda\\envs\\metagpt\\lib\\site-packages\\llama_index\\core\\indices\\vector_store\\base.py\", line 279, in _build_index_from_nodes\r\n self._add_nodes_to_index(\r\n File \"D:\\Dev_Software\\Anaconda\\envs\\metagpt\\lib\\site-packages\\llama_index\\core\\indices\\vector_store\\base.py\", line 233, in _add_nodes_to_index\r\n new_ids = self._vector_store.add(nodes_batch, **insert_kwargs)\r\n File \"D:\\Dev_Software\\Anaconda\\envs\\metagpt\\lib\\site-packages\\llama_index\\vector_stores\\faiss\\base.py\", line 121, in add\r\n self._faiss_index.add(text_embedding_np)\r\n File \"D:\\Dev_Software\\Anaconda\\envs\\metagpt\\lib\\site-packages\\faiss\\class_wrappers.py\", line 228, in replacement_add\r\n assert d == self.d\r\nAssertionError\r\n```\r\nBut when using BM25 instead of Faiss, it runs well.\r\n", "pr_html_url": "https://github.com/geekan/MetaGPT/pull/1241", "file_loc": {"base_commit": "f201b2f5f32c2d48eab6632bf103e9b3a92fc999", "files": [{"path": "config/config2.example.yaml", "status": "modified", "Loc": {"(None, None, None)": {"add": [20]}}}, {"path": "metagpt/configs/embedding_config.py", "status": "modified", "Loc": {"('EmbeddingConfig', None, 16)": {"add": [22, 27, 34, 43]}}}, {"path": "metagpt/rag/schema.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [14]}, "('FAISSRetrieverConfig', 'check_dimensions', 45)": {"mod": [47]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["metagpt/rag/schema.py", "metagpt/configs/embedding_config.py"], "doc": [], "test": [], "config": ["config/config2.example.yaml"], "asset": []}}, {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "fe7043a648eac1e0ec0af772a21b283566ecd020", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/3903", "iss_label": "enhancement", "title": "Can I get remote server's ip address via response?", "body": "Can I get remote server's ip address via response?\r\nFor some reason. I'll need get remote site's ip address when parsing response. 
I looked through the documentation but found nothing.\r\nDoes anyone know how?\r\nThanks!", "pr_html_url": "https://github.com/scrapy/scrapy/pull/3940", "file_loc": {"base_commit": "fe7043a648eac1e0ec0af772a21b283566ecd020", "files": [{"path": "conftest.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [14]}}}, {"path": "docs/topics/request-response.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [618, 707], "mod": [39]}}}, {"path": "scrapy/core/downloader/__init__.py", "status": "modified", "Loc": {"('Downloader', '_download', 160)": {"mod": [176]}}}, {"path": "scrapy/core/downloader/handlers/http11.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [2]}, "('_ResponseReader', '__init__', 440)": {"add": [451]}, "('_ResponseReader', None, 438)": {"add": [457]}, "('ScrapyAgent', '_cb_bodyready', 373)": {"mod": [376]}, "('ScrapyAgent', '_cb_bodydone', 411)": {"mod": [412, 413, 414, 415, 416, 417]}, "('_ResponseReader', 'connectionLost', 483)": {"mod": [489, 493, 498]}}}, {"path": "scrapy/http/response/__init__.py", "status": "modified", "Loc": {"('Response', '__init__', 20)": {"add": [27]}, "('Response', None, 18)": {"mod": [20]}, "('Response', 'replace', 86)": {"mod": [90]}}}, {"path": "tests/mockserver.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0, 20, 226], "mod": [9, 10, 13, 14, 16, 17, 241, 242, 243, 244, 245, 247, 248, 249, 250, 251, 252, 253]}, "('MockServer', None, 201)": {"mod": [201]}, "('MockServer', '__enter__', 203)": {"mod": [204, 206]}}}, {"path": "tests/test_crawl.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [3]}, "('CrawlTestCase', 'test_response_ssl_certificate_empty_response', 431)": {"add": [438]}}}, {"path": "tests/test_crawler.py", "status": "modified", "Loc": {"('CrawlerProcessSubprocess', None, 277)": {"add": [287], "mod": [277, 278]}, "('CrawlerProcessSubprocess', 'test_reactor_asyncio', 331)": {"add": [334]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code\nDoc"}, "loctype": {"code": ["scrapy/http/response/__init__.py", "scrapy/core/downloader/handlers/http11.py", "scrapy/core/downloader/__init__.py", "tests/mockserver.py", "conftest.py"], "doc": ["docs/topics/request-response.rst"], "test": ["tests/test_crawler.py", "tests/test_crawl.py"], "config": [], "asset": []}}, {"organization": "psf", "repo_name": "requests", "base_commit": "7eaa5ee37f2ef0fb37dc6e9efbead726665810b4", "iss_has_pr": 1, "iss_html_url": "https://github.com/psf/requests/issues/3659", "iss_label": "", "title": "URL proxy auth with empty passwords doesn't emit auth header.", "body": "I'm using a proxy that requires authentication to send a request that receives a 302 response with a Location header. I would like python.requests to follow this redirect and make the request via the proxy with the specified credentials. But it seems like this doesn't happen: if I provide credentials in HTTPProxyAuth, they work ok for 200 responses but fail for 302. 
See below code sample:\r\n\r\n```python\r\n\r\nimport requests\r\nfrom requests.auth import HTTPProxyAuth\r\n\r\nsess = requests.Session()\r\nurl1 = 'http://httpbin.org/'\r\nurl2 = 'http://httpbin.org/redirect/2'\r\nauth = HTTPProxyAuth('frank', 'hunter2')\r\nproxies = {\r\n \"http\": \"http://localhost:9000\"\r\n}\r\nresponse1 = sess.get(url1, proxies=proxies, auth=auth)\r\nresponse1.raise_for_status()\r\nresponse2 = sess.get(url2, proxies=proxies, auth=auth)\r\nresponse2.raise_for_status()\r\n```\r\nNow launch MITM proxy on localhost\r\n\r\n```\r\n> mitmproxy -p 9000 --singleuser=frank:hunter2\r\n```\r\n\r\nThis fails with 407 for me, and proxy logs only two requests\r\n\r\n```\r\n response2.raise_for_status()\r\n File \"----------\", line 862, in raise_for_status\r\n raise HTTPError(http_error_msg, response=self)\r\nrequests.exceptions.HTTPError: 407 Client Error: Proxy Authentication Required for url: http://httpbin.org/relative-redirect/1\r\n```\r\n\r\n```\r\n>> GET http://httpbin.org/\r\n \u2190 200 text/html 11.87kB 3.57MB/s\r\n GET http://httpbin.org/redirect/2\r\n \u2190 302 text/html 247B 76.59kB/s\r\n\r\n```\r\nit does not log request to `Location`. \r\n\r\nI see that putting credentials in proxies dictionary somehow fixes this issue when I use MITM proxy but it doesn't fix it for my production proxy (can't share code or proxy details here, need to check closer why it doesn't work for my proxy). I guess some details in setup of proxies might vary.\r\n\r\nIs this a bug? I see some issues for proxy auth but they are mostly about HTTPS, not sure if someone reported this thing I describe here. Should this be fixed?\r\n\r\nEDIT:\r\n\r\nIt looks like this always fails if proxy password is empty string.\r\n\r\nchange auth to \r\n\r\n```python\r\nauth = HTTPProxyAuth('frank', '')\r\n\r\nproxies = {\r\n \"http\": \"http://frank:@localhost:9000\"\r\n}\r\n```\r\n\r\nwill now always fail on redirect.\r\n\r\n```python\r\nauth = HTTPProxyAuth('frank', 'hunter2')\r\nproxies = {\r\n \"http\": \"http://frank:hunter2@localhost:9000\"\r\n}\r\n```\r\nworks fine on redirects, but seems somewhat duplicated.\r\n\r\nI noticed this on Ubuntu 14.04, requests 2.11.1, python 2.7.6, mitmproxy 0.10.1", "pr_html_url": "https://github.com/psf/requests/pull/3660", "file_loc": {"base_commit": "7eaa5ee37f2ef0fb37dc6e9efbead726665810b4", "files": [{"path": "requests/adapters.py", "status": "modified", "Loc": {"('HTTPAdapter', 'proxy_headers', 353)": {"mod": [369]}}}, {"path": "tests/test_requests.py", "status": "modified", "Loc": {"('TestRequests', None, 55)": {"add": [1474]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["requests/adapters.py"], "doc": [], "test": ["tests/test_requests.py"], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "923ac2bdee409e4fa8c47414b07f52e036bb21bc", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/25828", "iss_label": "Docs\ngood first issue", "title": "Use Substitution Decorator for CustomBusinessMonthEnd", "body": "This is a follow up to https://github.com/pandas-dev/pandas/pull/21093/files#r188805397 which wasn't working with Py27. 
Now that that is a thing of the past we should be able to use the more idiomatic Substitution approach to generating this docstring\r\n\r\n", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/25868", "file_loc": {"base_commit": "923ac2bdee409e4fa8c47414b07f52e036bb21bc", "files": [{"path": "pandas/tseries/offsets.py", "status": "modified", "Loc": {"('_CustomBusinessMonth', None, 972)": {"add": [979, 987, 988], "mod": [974, 975, 981, 983, 985, 986]}, "(None, None, None)": {"add": [1054, 1061], "mod": [18]}, "('CustomBusinessMonthEnd', None, 1055)": {"mod": [1056, 1057, 1058]}, "('CustomBusinessMonthBegin', None, 1062)": {"mod": [1063, 1064, 1065, 1066]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["pandas/tseries/offsets.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ansible", "repo_name": "ansible", "base_commit": "59a240cd311f5cedbcd5e12421f1d3bd596d9070", "iss_html_url": "https://github.com/ansible/ansible/issues/71254", "iss_label": "easyfix\nsupport:core\ndocs\naffects_2.11", "title": "Files contain broken references 404", "body": "\r\n\r\n\r\n\r\n##### SUMMARY\r\nFiles contain broken references (return 404):\r\n\r\n- [ ] docs/docsite/rst/user_guide/collections_using.rst https://docs.ansible.com/collections/\r\n\r\n- [x] docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_requirements.rst https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/cloud/vmware/vmware_host_config_manager.py~\r\n\r\n- [x] docs/docsite/rst/dev_guide/testing_units.rst https://github.com/ansible/ansible/blob/devel/test/units/modules/network/eos/test_eos_banner.py\r\n\r\n- [x] docs/docsite/rst/porting_guides/porting_guide_base_2.11.rst\r\nhttps://github.com/ansible/ansible/blob/stable-2.11/changelogs/CHANGELOG-v2.11.rst\r\n\r\n- [x] docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_external_doc_links.rst https://github.com/vmware/pyvmomi/tree/master/docs\r\n\r\n- [x] docs/docsite/rst/user_guide/intro_dynamic_inventory.rst https://raw.githubusercontent.com/ansible-collections/community.aws/master/scripts/inventory/ec2.in\r\n\r\n- [x] docs/docsite/rst/user_guide/intro_dynamic_inventory.rst https://raw.githubusercontent.com/ansible-collections/community.aws/master/scripts/inventory/ec2.py\r\n\r\n- [x] docs/docsite/rst/user_guide/intro_dynamic_inventory.rst https://raw.githubusercontent.com/ansible-collections/community.aws/master/scripts/inventory/ec2.py\r\n\r\n- [x] docs/docsite/rst/user_guide/intro_dynamic_inventory.rst https://raw.githubusercontent.com/ansible-collections/community.aws/master/scripts/inventory/ec2.py\r\n\r\n- [x] docs/docsite/rst/scenario_guides/guide_azure.rst https://raw.githubusercontent.com/ansible-collections/community.general/master/scripts/inventory/azure_rm.ini\r\n\r\n- [x] docs/docsite/rst/scenario_guides/guide_azure.rst https://raw.githubusercontent.com/ansible-collections/community.general/master/scripts/inventory/azure_rm.py\r\n\r\n- [x] docs/docsite/rst/scenario_guides/guide_azure.rst https://raw.githubusercontent.com/ansible-collections/community.general/master/scripts/inventory/azure_rm.py\r\n\r\n- [x] docs/docsite/rst/scenario_guides/guide_azure.rst https://raw.githubusercontent.com/ansible-collections/community.general/master/scripts/inventory/azure_rm.py\r\n\r\n- [x] docs/docsite/rst/user_guide/intro_dynamic_inventory.rst 
https://raw.githubusercontent.com/ansible-collections/community.general/master/scripts/inventory/cobbler.py\r\n\r\n- [x] docs/docsite/rst/scenario_guides/guide_infoblox.rst https://raw.githubusercontent.com/ansible-collections/community.general/master/scripts/inventory/infoblox.py\r\n\r\n- [x] docs/docsite/rst/scenario_guides/guide_infoblox.rst https://raw.githubusercontent.com/ansible-collections/community.general/master/scripts/inventory/infoblox.yaml\r\n\r\n- [ ] docs/docsite/rst/scenario_guides/guide_packet.rst https://support.packet.com/kb/articles/user-data\r\n\r\n\r\n##### ISSUE TYPE\r\n- Documentation Report\r\n\r\n##### ANSIBLE VERSION\r\n```\r\ndevel\r\n```", "code": null, "pr_html_url": "https://github.com/ansible/ansible/pull/71705", "commit_html_url": null, "file_loc": {"base_commit": "59a240cd311f5cedbcd5e12421f1d3bd596d9070", "files": [{"path": "docs/docsite/rst/scenario_guides/guide_packet.rst", "status": "modified", "Loc": {"(None, None, 126)": {"mod": [126]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": [], "doc": ["docs/docsite/rst/scenario_guides/guide_packet.rst"], "test": [], "config": [], "asset": []}}, {"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "f8464b4f66e627ed2778c9a27dbe4a8642482baf", "iss_has_pr": 1, "iss_html_url": "https://github.com/ultralytics/yolov5/issues/2226", "iss_label": "bug", "title": "Yolov5 crashes with RTSP stream analysis", "body": "## \ud83d\udc1b Bug\r\n\r\nIf I want to analyze an rtsp stream with Yolov5 in a docker container, it crashes regardless of whether it is the latest or the v4.0 version.\r\n\r\n## To Reproduce (REQUIRED)\r\n\r\nInput:\r\n```\r\ndocker run --rm -it -e RTSP_PROTOCOLS=tcp -p 8554:8554 aler9/rtsp-simple-server\r\n\r\nffmpeg -i video.mp4 -s 640x480 -c:v libx264 -f rtsp -rtsp_transport tcp rtsp://localhost:8554/analysis\r\n\r\ndocker run -it ultralytics/yolov5:latest\r\n\r\npython3 detect.py --source rtsp://host.docker.internal:8554/analysis --weights yolov5s.pt --conf 0.25 --save-txt\r\n```\r\n\r\nOutput:\r\n```\r\nNamespace(agnostic_nms=False, augment=False, classes=None, conf_thres=0.25, device='', exist_ok=False, img_size=640, iou_thres=0.45, name='exp', project='runs/detect', save_conf=False, save_txt=True, source='rtsp://host.docker.internal:8554/analysis', update=False, view_img=False, weights=['yolov5s.pt'])\r\n/opt/conda/lib/python3.8/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:100.)\r\n return torch._C._cuda_getDeviceCount() > 0\r\nYOLOv5 v4.0-80-gf8464b4 torch 1.8.0a0+1606899 CPU\r\n\r\nFusing layers...\r\nModel Summary: 224 layers, 7266973 parameters, 0 gradients, 17.0 GFLOPS\r\n[h264 @ 0x55e674656100] co located POCs unavailable\r\n[h264 @ 0x55e674656100] mmco: unref short failure\r\n[h264 @ 0x55e675117cc0] co located POCs unavailable\r\n[h264 @ 0x55e674dbb300] mmco: unref short failure\r\n[h264 @ 0x55e674ec09c0] co located POCs unavailable\r\n1/1: rtsp://host.docker.internal:8554/analysis... success (640x480 at 30.00 FPS).\r\n\r\n0: 480x640 13 persons, 1 tennis racket, Done. 
(2.089s)\r\nqt.qpa.xcb: could not connect to display\r\nqt.qpa.plugin: Could not load the Qt platform plugin \"xcb\" in \"/opt/conda/lib/python3.8/site-packages/cv2/qt/plugins\" even though it was found.\r\nThis application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.\r\n\r\nAvailable platform plugins are: xcb.\r\n\r\nAborted\r\n```\r\n\r\n\r\n## Expected behavior\r\n\r\nDoing the analysis\r\n\r\n## Environment\r\n\r\n - OS: Yolov5 docker container on macos Catalina\r\n - GPU none\r\n", "pr_html_url": "https://github.com/ultralytics/yolov5/pull/2231", "file_loc": {"base_commit": "f8464b4f66e627ed2778c9a27dbe4a8642482baf", "files": [{"path": "detect.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [12, 13]}, "(None, 'detect', 18)": {"mod": [48, 121]}}}, {"path": "utils/general.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [97]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["utils/general.py", "detect.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "8fcdf3b60b2930a4273cab4e3df22b77680ff41d", "iss_has_pr": 1, "iss_html_url": "https://github.com/ultralytics/yolov5/issues/6515", "iss_label": "bug", "title": "GPU Memory Leak on Loading Pre-Trained Checkpoint", "body": "### Search before asking\r\n\r\n- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar bug report.\r\n\r\n\r\n### YOLOv5 Component\r\n\r\nTraining\r\n\r\n### Bug\r\n\r\nTraining YOLO from a checkpoint (*.pt) consumes more GPU memory than training from a pre-trained weight (i.e. yolov5l).\r\n\r\n### Environment\r\n\r\n- YOLO: YOLOv5 (latest; how to check the yolo version?)\r\n- CUDA: 11.6 (Tesla T4, 15360MiB)\r\n- OS: Ubuntu 18.04.6 LTS (Bionic Beaver)\r\n- Python: 3.8.12\r\n\r\n### Minimal Reproducible Example\r\n\r\nIn the below training command, case 2 requires more GPU memory than case 1.\r\n```\r\n# 1. train from pre-trained model\r\ntrain.py ... --weights yolov5l\r\n\r\n# 2. train from pre-trained checkpoint\r\ntrain.py ... --weights pre_trained_checkpoint.pt\r\n```\r\n\r\n### Additional\r\n\r\nAs reported on the pytorch forum[1], loading state dict on CUDA device causes memory leak. 
We should load it into CPU memory:\r\n\r\n```python\r\nstate_dict = torch.load(directory, map_location=lambda storage, loc: storage)\r\n```\r\n\r\n- [1] https://discuss.pytorch.org/t/load-state-dict-causes-memory-leak/36189/5?u=bilzrd\r\n\r\n### Are you willing to submit a PR?\r\n\r\n- [X] Yes I'd like to help by submitting a PR!", "pr_html_url": "https://github.com/ultralytics/yolov5/pull/6516", "file_loc": {"base_commit": "8fcdf3b60b2930a4273cab4e3df22b77680ff41d", "files": [{"path": "train.py", "status": "modified", "Loc": {"(None, 'train', 65)": {"mod": [123]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["train.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "sherlock-project", "repo_name": "sherlock", "base_commit": "2a9297f2444f912c354168c6c0df1c782edace0e", "iss_has_pr": 1, "iss_html_url": "https://github.com/sherlock-project/sherlock/issues/1189", "iss_label": "bug", "title": "Sites Giving 404 error or no profile", "body": "\r\n\r\n\r\n## Checklist\r\n\r\n\r\n- [x] I'm reporting a bug in Sherlock's functionality\r\n- [ ] The bug I'm reporting is not a false positive or a false negative\r\n- [ ] I've verified that I'm running the latest version of Sherlock\r\n- [ ] I've checked for similar bug reports including closed ones\r\n- [ ] I've checked for pull requests that attempt to fix this bug\r\n\r\n## Description\r\n\r\n\r\nThere are some sites which show up as matched usernames but give no profile page or a 404 error;\r\nthose sites are listed below:\r\n\r\n[+] Anilist: https://anilist.co/user/\r\n[+] Coil: https://coil.com/u/\r\n[+] RuneScape: https://apps.runescape.com/runemetrics/app/overview/player/\r\n[+] TrackmaniaLadder: http://en.tm-ladder.com/_rech.php\r\n[+] babyblogRU: https://www.babyblog.ru/user/info", "pr_html_url": "https://github.com/sherlock-project/sherlock/pull/1192", "file_loc": {"base_commit": "2a9297f2444f912c354168c6c0df1c782edace0e", "files": [{"path": "removed_sites.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [1255]}}}, {"path": "sherlock/resources/data.json", "status": "modified", "Loc": {"(None, None, None)": {"mod": [68, 69, 70, 71, 72, 73, 74, 75, 387, 388, 389, 390, 391, 392, 393, 394]}}}, {"path": "sites.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [106], "mod": [1, 11, 52]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["sherlock/resources/data.json"], "doc": ["removed_sites.md", "sites.md"], "test": [], "config": [], "asset": []}}, {"organization": "home-assistant", "repo_name": "core", "base_commit": "9e41a37284b8796bf3a190fe4bd2a4aee8616ec2", "iss_has_pr": 1, "iss_html_url": "https://github.com/home-assistant/core/issues/55095", "iss_label": "integration: honeywell", "title": "Rate limiting in Honeywell TCC", "body": "### The problem\r\n\r\nMultiple Honeywell TCC users are reporting rate limit errors in #53981. 
Restarting HomeAssistant seems to temporarily clear it up\r\n\r\n### What is version of Home Assistant Core has the issue?\r\n\r\n2021.8.8\r\n\r\n### What was the last working version of Home Assistant Core?\r\n\r\n_No response_\r\n\r\n### What type of installation are you running?\r\n\r\nHome Assistant Container\r\n\r\n### Integration causing the issue\r\n\r\nHoneywell Total Connect Comfort (US)\r\n\r\n### Link to integration documentation on our website\r\n\r\nhttps://www.home-assistant.io/integrations/honeywell\r\n\r\n### Example YAML snippet\r\n\r\n_No response_\r\n\r\n### Anything in the logs that might be useful for us?\r\n\r\n```txt\r\n2021-08-23 11:08:44 ERROR (MainThread) [homeassistant.helpers.entity] Update for climate.downstairs fails\r\nTraceback (most recent call last):\r\n File \"/usr/src/homeassistant/homeassistant/components/honeywell/__init__.py\", line 113, in update\r\n await self._hass.async_add_executor_job(device.refresh)\r\n File \"/usr/local/lib/python3.9/concurrent/futures/thread.py\", line 52, in run\r\n result = self.fn(*self.args, **self.kwargs)\r\n File \"/usr/local/lib/python3.9/site-packages/somecomfort/client.py\", line 87, in refresh\r\n data = self._client._get_thermostat_data(self.deviceid)\r\n File \"/usr/local/lib/python3.9/site-packages/somecomfort/client.py\", line 468, in _get_thermostat_data\r\n return self._get_json(url)\r\n File \"/usr/local/lib/python3.9/site-packages/somecomfort/client.py\", line 444, in _get_json\r\n return self._request_json('get', *args, **kwargs)\r\n File \"/usr/local/lib/python3.9/site-packages/somecomfort/client.py\", line 436, in _request_json\r\n raise APIRateLimited()\r\nsomecomfort.client.APIRateLimited: You are being rate-limited. Try waiting a bit.\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/src/homeassistant/homeassistant/helpers/entity.py\", line 446, in async_update_ha_state\r\n await self.async_device_update()\r\n File \"/usr/src/homeassistant/homeassistant/helpers/entity.py\", line 654, in async_device_update\r\n raise exc\r\n File \"/usr/src/homeassistant/homeassistant/components/honeywell/climate.py\", line 385, in async_update\r\n await self._data.update()\r\n File \"/usr/src/homeassistant/homeassistant/components/honeywell/__init__.py\", line 124, in update\r\n result = await self._hass.async_add_executor_job(self._retry())\r\n File \"/usr/local/lib/python3.9/concurrent/futures/thread.py\", line 52, in run\r\n result = self.fn(*self.args, **self.kwargs)\r\nTypeError: 'coroutine' object is not callable\r\n```\r\n```\r\n\r\n\r\n### Additional information\r\n\r\n_No response_", "pr_html_url": "https://github.com/home-assistant/core/pull/55304", "file_loc": {"base_commit": "9e41a37284b8796bf3a190fe4bd2a4aee8616ec2", "files": [{"path": "homeassistant/components/honeywell/__init__.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [1], "mod": [12]}, "(None, 'async_setup_entry', 16)": {"mod": [45]}, "('HoneywellData', None, 68)": {"mod": [105, 111]}, "('HoneywellData', '_refresh_devices', 105)": {"mod": [108]}, "('HoneywellData', 'update', 111)": {"mod": [116, 127]}}}, {"path": "homeassistant/components/honeywell/climate.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [109]}, "('HoneywellUSThermostat', 'async_update', 385)": {"mod": [387]}}}, {"path": "tests/components/honeywell/test_init.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [2, 8, 17]}}}]}, "own_code_loc": [], 
"ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["homeassistant/components/honeywell/__init__.py", "homeassistant/components/honeywell/climate.py"], "doc": [], "test": ["tests/components/honeywell/test_init.py"], "config": [], "asset": []}}, {"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "f542c58a48e87878028b7639a3c0296bdb351071", "iss_has_pr": 1, "iss_html_url": "https://github.com/deepfakes/faceswap/issues/3", "iss_label": "dev\nadvuser", "title": "Improve command line usage", "body": "Adding a command line args parsing with an help would be great !\r\n\r\nPreferably with `argparse`", "pr_html_url": "https://github.com/deepfakes/faceswap/pull/13", "file_loc": {"base_commit": "f542c58a48e87878028b7639a3c0296bdb351071", "files": [{"path": "extract.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [2], "mod": [1, 4, 6, 7, 9, 10, 11, 12, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 28, 30, 31, 32, 34, 35, 36, 37, 38, 39, 41, 42, 43, 44, 46, 47, 48, 50, 51, 52, 53, 54, 55, 57, 58, 59, 60, 61, 63, 64, 65, 67, 68, 69, 70, 71, 73, 74, 75, 76, 77, 78]}}}, {"path": "lib/faces_detect.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [2]}}}, {"path": "lib/utils.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [3, 4]}, "('FullPaths', None, 10)": {"mod": [10, 11, 12, 13]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": "Code"}, "loctype": {"code": ["extract.py", "lib/utils.py", "lib/faces_detect.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "xai-org", "repo_name": "grok-1", "base_commit": "e50578b5f50e4c10c6e7cff31af1ef2bedb3beb8", "iss_has_pr": 1, "iss_html_url": "https://github.com/xai-org/grok-1/issues/14", "iss_label": "", "title": "Grok implementation details", "body": "not an issue but would be nice if it was in the readme/model.py header:\r\n314B parameters\r\nMixture of 8 Experts\r\n2 experts used per token\r\n64 layers\r\n48 attention heads for queries\r\n8 attention heads for keys/values\r\nembeddings size: 6,144\r\nrotary embeddings (RoPE)\r\nSentencePiece tokenizer; 131,072 tokens\r\nSupports activation sharding and 8-bit quantization\r\nMax seq length (context): 8,192 tokens", "pr_html_url": "https://github.com/xai-org/grok-1/pull/27", "file_loc": {"base_commit": "e50578b5f50e4c10c6e7cff31af1ef2bedb3beb8", "files": [{"path": "README.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [19]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []}}, {"organization": "pytorch", "repo_name": "pytorch", "base_commit": "a63524684d02131aef4f2e9d2cea7bfe210abc96", "iss_html_url": "https://github.com/pytorch/pytorch/issues/84408", "iss_label": "module: onnx\ntriaged\ntopic: bug fixes", "title": "Exporting the operator ::col2im to ONNX opset version 11 is not supported", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nWhen I converted the model in \u201c.pt\u201d format to onnx format, I received an error that the operator col2im is not supported.\r\n\r\n## code\r\n\r\n import torch\r\n from 
cvnets import get_model\r\n from options.opts import get_segmentation_eval_arguments\r\n \r\n def pt2onnx():\r\n opts = get_segmentation_eval_arguments()\r\n model = get_model(opts)\r\n model.eval()\r\n onnx_save_path = \"model/mobilevit.onnx\"\r\n in_data = torch.randn(1, 3, 512, 512)\r\n torch.onnx.export(model, \r\n in_data, \r\n onnx_save_path, \r\n opset_version=11, \r\n do_constant_folding=True, \r\n input_names=[\"in\"],\r\n output_names=[\"out\"])\r\n return\r\n\r\n if __name__ == '__main__':\r\n pt2onnx()\r\n\r\n## error\r\nTraceback (most recent call last):\r\n File \"/home/sunseeker/project/robot_seg/code/mobilevit_seg/demo.py\", line 20, in \r\n pt2onnx()\r\n File \"/home/sunseeker/project/robot_seg/code/mobilevit_seg/demo.py\", line 13, in pt2onnx\r\n torch.onnx.export(model, in_data, onnx_save_path, opset_version=11, do_constant_folding=True, input_names=[\"in\"],\r\n File \"/opt/anaconda3/envs/mobilevit/lib/python3.10/site-packages/torch/onnx/__init__.py\", line 350, in export\r\n return utils.export(\r\n File \"/opt/anaconda3/envs/mobilevit/lib/python3.10/site-packages/torch/onnx/utils.py\", line 163, in export\r\n _export(\r\n File \"/opt/anaconda3/envs/mobilevit/lib/python3.10/site-packages/torch/onnx/utils.py\", line 1074, in _export\r\n graph, params_dict, torch_out = _model_to_graph(\r\n File \"/opt/anaconda3/envs/mobilevit/lib/python3.10/site-packages/torch/onnx/utils.py\", line 731, in _model_to_graph\r\n graph = _optimize_graph(\r\n File \"/opt/anaconda3/envs/mobilevit/lib/python3.10/site-packages/torch/onnx/utils.py\", line 308, in _optimize_graph\r\n graph = _C._jit_pass_onnx(graph, operator_export_type)\r\n File \"/opt/anaconda3/envs/mobilevit/lib/python3.10/site-packages/torch/onnx/__init__.py\", line 416, in _run_symbolic_function\r\n return utils._run_symbolic_function(*args, **kwargs)\r\n File \"/opt/anaconda3/envs/mobilevit/lib/python3.10/site-packages/torch/onnx/utils.py\", line 1421, in _run_symbolic_function\r\n raise symbolic_registry.UnsupportedOperatorError(\r\n**torch.onnx.symbolic_registry.UnsupportedOperatorError: Exporting the operator ::col2im to ONNX opset version 11 is not supported. 
Please feel free to request support or submit a pull request on PyTorch GitHub.**\r\n\r\n## ENV\r\nPyTorch version: 1.12.1\r\nIs debug build: False\r\nCUDA used to build PyTorch: 10.2\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Ubuntu 22.04.1 LTS (x86_64)\r\nGCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0\r\nClang version: Could not collect\r\nCMake version: version 3.22.1\r\nLibc version: glibc-2.35\r\n\r\nPython version: 3.10.4 (main, Mar 31 2022, 08:41:55) [GCC 7.5.0] (64-bit runtime)\r\nPython platform: Linux-5.15.0-47-generic-x86_64-with-glibc2.35\r\nIs CUDA available: True\r\nCUDA runtime version: Could not collect\r\nGPU models and configuration: GPU 0: NVIDIA GeForce RTX 3050\r\nNvidia driver version: 510.85.02\r\ncuDNN version: Could not collect\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nVersions of relevant libraries:\r\n[pip3] mypy-extensions==0.4.3\r\n[pip3] numpy==1.23.1\r\n[pip3] pytorchvideo==0.1.5\r\n[pip3] torch==1.12.1\r\n[pip3] torchaudio==0.12.1\r\n[pip3] torchvision==0.13.1\r\n[conda] blas 1.0 mkl \r\n[conda] cudatoolkit 10.2.89 hfd86e86_1 \r\n[conda] ffmpeg 4.3 hf484d3e_0 pytorch\r\n[conda] mkl 2021.4.0 h06a4308_640 \r\n[conda] mkl-service 2.4.0 py310h7f8727e_0 \r\n[conda] mkl_fft 1.3.1 py310hd6ae3a3_0 \r\n[conda] mkl_random 1.2.2 py310h00e6091_0 \r\n[conda] numpy 1.23.1 py310h1794996_0 \r\n[conda] numpy-base 1.23.1 py310hcba007f_0 \r\n[conda] pytorch 1.12.1 py3.10_cuda10.2_cudnn7.6.5_0 pytorch\r\n[conda] pytorch-mutex 1.0 cuda pytorch\r\n[conda] pytorchvideo 0.1.5 pypi_0 pypi\r\n[conda] torchaudio 0.12.1 py310_cu102 pytorch\r\n[conda] torchvision 0.13.1 py310_cu102 pytorch\r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/pytorch/pytorch/commit/a63524684d02131aef4f2e9d2cea7bfe210abc96", "file_loc": {"base_commit": "a63524684d02131aef4f2e9d2cea7bfe210abc96", "files": [{"path": "test/onnx/test_pytorch_onnx_no_runtime.py", "status": "modified", "Loc": {"('TestONNXExport', None, 79)": {"add": [1158]}}}, {"path": "test/onnx/test_pytorch_onnx_onnxruntime.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [47]}}}, {"path": "torch/csrc/jit/serialization/export.cpp", "status": "modified", "Loc": {"(None, None, None)": {"add": [84], "mod": [62]}}}, {"path": "torch/onnx/__init__.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [27, 64]}}}, {"path": "torch/onnx/_constants.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [7]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["torch/onnx/_constants.py", "torch/onnx/__init__.py", "torch/csrc/jit/serialization/export.cpp"], "doc": [], "test": ["test/onnx/test_pytorch_onnx_onnxruntime.py", "test/onnx/test_pytorch_onnx_no_runtime.py"], "config": [], "asset": []}}, {"organization": "Z4nzu", "repo_name": "hackingtool", "base_commit": "64a46031b9c22e2a0526d0216eef627a91da880d", "iss_has_pr": 1, "iss_html_url": "https://github.com/Z4nzu/hackingtool/issues/384", "iss_label": "", "title": "install error", "body": "Traceback (most recent call last):\r\n File \"/usr/share/hackingtool/hackingtool.py\", line 106, in \r\n os.mkdir(archive)\r\nFileNotFoundError: [Errno 2] No such file or directory: ''\r\n \r\n \r\n And I was in root mode as well, but this error still appears. What should I do? Please help.", "pr_html_url": "https://github.com/Z4nzu/hackingtool/pull/387", "file_loc": {"base_commit": 
"64a46031b9c22e2a0526d0216eef627a91da880d", "files": [{"path": "hackingtool.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [105, 106]}}}, {"path": "tools/others/socialmedia.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [1]}, "('Faceshell', 'run', 48)": {"mod": [51]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": ""}, "loctype": {"code": ["hackingtool.py", "tools/others/socialmedia.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "b754525e99ca62424c484fe529b6142f6bab939e", "iss_has_pr": 1, "iss_html_url": "https://github.com/ultralytics/yolov5/issues/5160", "iss_label": "bug\nStale", "title": "Docker Multi-GPU DDP training hang on `destroy_process_group()` with `wandb` option 3", "body": "Hello, when I try to training using multi gpu based on docker file images. I got the below error. I use Ubuntu 18.04, python 3.8.\r\n<<<<<<<<<<<<<<<<>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\r\n\r\n```\r\nroot@5a70a5f2d489:/usr/src/app# python -m torch.distributed.run --nproc_per_node 2 train.py --batch 64 --data data.yaml --weights yolov5s.pt --device 0,1\r\nWARNING:__main__:*****************************************\r\nSetting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. \r\n*****************************************\r\nTraceback (most recent call last):\r\n File \"train.py\", line 620, in \r\n main(opt)\r\n File \"train.py\", line 497, in main\r\n check_file(opt.data), check_yaml(opt.cfg), check_yaml(opt.hyp), str(opt.weights), str(opt.project) # checks\r\n File \"/usr/src/app/utils/general.py\", line 326, in check_file\r\n assert len(files), f'File not found: {file}' # assert file was found\r\nAssertionError: File not found: data.yaml\r\nwandb: (1) Create a W&B account\r\nwandb: (2) Use an existing W&B account\r\nwandb: (3) Don't visualize my results\r\nwandb: Enter your choice: (30 second timeout) ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 1 (pid: 405) of binary: /opt/conda/bin/python\r\n/opt/conda/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py:367: UserWarning: \r\n\r\n**********************************************************************\r\n CHILD PROCESS FAILED WITH NO ERROR_FILE \r\n**********************************************************************\r\nCHILD PROCESS FAILED WITH NO ERROR_FILE\r\nChild process 405 (local_rank 1) FAILED (exitcode 1)\r\nError msg: Process failed with exitcode 1\r\nWithout writing an error file to .\r\nWhile this DOES NOT affect the correctness of your application,\r\nno trace information about the error will be available for inspection.\r\nConsider decorating your top level entrypoint function with\r\ntorch.distributed.elastic.multiprocessing.errors.record. 
Example:\r\n\r\n from torch.distributed.elastic.multiprocessing.errors import record\r\n\r\n @record\r\n def trainer_main(args):\r\n # do train\r\n**********************************************************************\r\n warnings.warn(_no_error_file_warning_msg(rank, failure))\r\nTraceback (most recent call last):\r\n File \"/opt/conda/lib/python3.8/runpy.py\", line 194, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"/opt/conda/lib/python3.8/runpy.py\", line 87, in _run_code\r\n exec(code, run_globals)\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/distributed/run.py\", line 702, in \r\n main()\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py\", line 361, in wrapper\r\n return f(*args, **kwargs)\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/distributed/run.py\", line 698, in main\r\n run(args)\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/distributed/run.py\", line 689, in run\r\n elastic_launch(\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/distributed/launcher/api.py\", line 116, in __call__\r\n return launch_agent(self._config, self._entrypoint, list(args))\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/distributed/launcher/api.py\", line 244, in launch_agent\r\n raise ChildFailedError(\r\ntorch.distributed.elastic.multiprocessing.errors.ChildFailedError: \r\n***************************************\r\n train.py FAILED \r\n=======================================\r\nRoot Cause:\r\n[0]:\r\n time: 2021-10-13_04:30:25\r\n rank: 1 (local_rank: 1)\r\n exitcode: 1 (pid: 405)\r\n error_file: \r\n msg: \"Process failed with exitcode 1\"\r\n=======================================\r\nOther Failures:\r\n \r\n***************************************\r\n\r\nroot@5a70a5f2d489:/usr/src/app#\r\n\r\n```", "pr_html_url": "https://github.com/ultralytics/yolov5/pull/5163", "file_loc": {"base_commit": "b754525e99ca62424c484fe529b6142f6bab939e", "files": [{"path": "utils/loggers/__init__.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [5, 8, 17, 22]}}}, {"path": "utils/loggers/wandb/wandb_utils.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [24, 25, 27, 28, 29, 30, 31]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["utils/loggers/wandb/wandb_utils.py", "utils/loggers/__init__.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "4c77f62f806567644571b6b3f496f7b332b12327", "iss_has_pr": 1, "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/656", "iss_label": "", "title": "Remove unnecessary configs such as: tdd, tdd_plus, clarify, respec", "body": "If we have time: benchmark them and store insights before deletion", "pr_html_url": "https://github.com/AntonOsika/gpt-engineer/pull/737", "file_loc": {"base_commit": "4c77f62f806567644571b6b3f496f7b332b12327", "files": [{"path": "gpt_engineer/preprompts/fix_code", "status": "removed", "Loc": {}}, {"path": "gpt_engineer/preprompts/spec", "status": "removed", "Loc": {}}, {"path": "gpt_engineer/preprompts/unit_tests", "status": "removed", "Loc": {}}, {"path": "gpt_engineer/steps.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [60, 395, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 
438]}, "(None, 'gen_spec', 121)": {"mod": [121, 122, 123, 124, 125, 126, 127, 128, 129, 131, 133, 135, 138, 139, 140, 141, 142, 143, 144, 145, 146, 148, 150, 151, 153]}, "(None, 'gen_code_after_unit_tests', 175)": {"mod": [175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189]}, "(None, 'fix_code', 354)": {"mod": [354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367]}, "('Config', None, 378)": {"mod": [383, 384]}}}, {"path": "tests/test_collect.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [11, 12]}, "(None, 'test_collect_learnings', 15)": {"mod": [21, 30, 31, 32, 33, 34]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["gpt_engineer/steps.py"], "doc": [], "test": ["tests/test_collect.py"], "config": [], "asset": ["gpt_engineer/preprompts/unit_tests", "gpt_engineer/preprompts/fix_code", "gpt_engineer/preprompts/spec"]}}, {"organization": "OpenInterpreter", "repo_name": "open-interpreter", "base_commit": "d57ed889c27d5e95e39ea7db59fe518b5f18f942", "iss_has_pr": 1, "iss_html_url": "https://github.com/OpenInterpreter/open-interpreter/issues/209", "iss_label": "Bug", "title": "UnicodeDecodeError - help will be appriciate! ", "body": "_Exception in thread Thread-1 (save_and_display_stream):\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\ziv\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\threading.py\", line 1038, in _bootstrap_inner\r\n self.run()\r\n File \"C:\\Users\\ziv\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\threading.py\", line 975, in run\r\n self._target(*self._args, **self._kwargs)\r\n File \"C:\\Users\\ziv\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\interpreter\\code_interpreter.py\", line\r\n293, in save_and_display_stream\r\n for line in iter(stream.readline, ''):\r\n File \"C:\\Users\\ziv\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\encodings\\cp1255.py\", line 23, in decode\r\n return codecs.charmap_decode(input,self.errors,decoding_table)[0]\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nUnicodeDecodeError: 'charmap' codec can't decode byte 0x8e in position 3284: character maps to _\r\n\r\n\r\nI am a Windows User, running with Conda, on Python version 3.11.2\r\n should I change the encoding? 
", "pr_html_url": "https://github.com/OpenInterpreter/open-interpreter/pull/742", "file_loc": {"base_commit": "d57ed889c27d5e95e39ea7db59fe518b5f18f942", "files": [{"path": "interpreter/code_interpreters/subprocess_code_interpreter.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0]}, "('SubprocessCodeInterpreter', 'start_process', 39)": {"add": [42, 50]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["interpreter/code_interpreters/subprocess_code_interpreter.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ansible", "repo_name": "ansible", "base_commit": "d9e798b48f62fdc2b604a84c36eb83c985f87754", "iss_has_pr": 1, "iss_html_url": "https://github.com/ansible/ansible/issues/82683", "iss_label": "bug\nhas_pr\nP3\naffects_2.13\naffects_2.16", "title": "ansible fact_cache permissions changed after ansible-core update", "body": "### Summary\r\n\r\nAfter update to ansible-core 2.13.2 or higher (It is still an issue with 2.16.3), the default permission of ansible fact cache files changed.\r\n\r\nansible-core 2.13.1 is OK and uses 0644 on the fact files. 2.13.2 and higher uses 0600.\r\n\r\nI could not figure out how to change the behavior back.\r\nWe need read permission for the group per default. \r\nThis is a breaking change for us.\r\n\r\nI did not find a hint in the release notes, so I assume this is a bug\r\n\r\nhttps://github.com/ansible/ansible/compare/v2.13.1...v2.13.2\r\n\r\n\r\nWe have a multi user system, and now ansible user cannot read the cache if a ansible run has been executed by another user.\r\n\r\n### Issue Type\r\n\r\nBug Report\r\n\r\n### Component Name\r\n\r\ncache\r\n\r\n### Ansible Version\r\n(EDIT: section updated for bot to detect latest version, tested with 2.13.1,2.13.2 and 2.16.3)\r\n```console\r\n$ ansible --version\r\n\r\n(ansible-venv-old) xxx:~> ansible --version\r\nansible [core 2.16.3]\r\n config file = None\r\n configured module search path = ['/home/xxx/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']\r\n ansible python module location = /home/xxx/ansible-venv-old/lib64/python3.11/site-packages/ansible\r\n ansible collection location = /home/xxx/.ansible/collections:/usr/share/ansible/collections\r\n executable location = /home/xxx/ansible-venv-old/bin/ansible\r\n python version = 3.11.5 (main, Sep 06 2023, 11:21:05) [GCC] (/home/xxx/ansible-venv-old/bin/python3.11)\r\n jinja version = 3.1.3\r\n libyaml = True\r\n```\r\n\r\n### Configuration\r\n\r\n```console\r\n# if using a version older than ansible-core 2.12 you should omit the '-t all'\r\n$ ansible-config dump --only-changed -t all\r\n\r\n\r\nCACHE_PLUGIN(env: ANSIBLE_CACHE_PLUGIN) = jsonfile\r\nCACHE_PLUGIN_CONNECTION(env: ANSIBLE_CACHE_PLUGIN_CONNECTION) = /home/xxx/facts_cache\r\n\r\nCACHE:\r\n=====\r\n\r\njsonfile:\r\n________\r\n_uri(env: ANSIBLE_CACHE_PLUGIN_CONNECTION) = /home/xxx/facts_cache\r\n```\r\n\r\n### OS / Environment\r\n\r\nIt is reproducible in with python3.11 venv \r\nLinux\r\n\r\n### Steps to Reproduce\r\n\r\n\r\nAfter update to 2.13.2, the facts files have 600.\r\n```\r\n(ansible-venv-old) xxx:~> pip install ansible-core==2.13.2\r\n...\r\n\r\nxxx:~> rm -r \"$HOME/facts_cache\"\r\nxxx:~> export ANSIBLE_CACHE_PLUGIN=jsonfile\r\nxxx:~> export ANSIBLE_CACHE_PLUGIN_CONNECTION=\"$HOME/facts_cache\"\r\nxxx:~> ansible -m setup localhost > /dev/null\r\nxxx:~> ls -lisa 
facts_cache/\r\ntotal 64\r\n535518 0 drwxr-xr-x 2 xxx yyy 23 Feb 8 13:00 .\r\n262283 0 drwx------ 6 xxx yyy 247 Feb 8 13:00 ..\r\n535519 64 -rw------- 1 xxx yyy 65091 Feb 8 13:00 localhost\r\n```\r\n\r\n\r\n\r\n### Expected Results\r\n\r\nWith 2.13.1, the permissions on the fact file are 644:\r\n```\r\n(ansible-venv-old) xxx:~> ansible --version | head -1\r\nansible [core 2.13.1]\r\n\r\nxxx:~> rm -r \"$HOME/facts_cache\"\r\nxxx:~> export ANSIBLE_CACHE_PLUGIN=jsonfile\r\nxxx:~> export ANSIBLE_CACHE_PLUGIN_CONNECTION=\"$HOME/facts_cache\"\r\nxxx:~> ansible -m setup localhost > /dev/null\r\nxxx:~> ls -lisa facts_cache/\r\ntotal 64\r\n535518 0 drwxr-xr-x 2 xxx yyy 23 Feb 8 12:54 .\r\n262283 0 drwx------ 6 xxx yyy 247 Feb 8 12:54 ..\r\n535519 64 -rw-r--r-- 1 xxx yyy 63445 Feb 8 12:54 localhost\r\n```\r\n\r\n### Actual Results\r\n\r\nAfter update to 2.13.2 or higher (even latest 2.16.3), the facts files have 600.\r\nSee steps to reproduce\r\n\r\n\r\n\r\n### Code of Conduct\r\n\r\n- [X] I agree to follow the Ansible Code of Conduct", "pr_html_url": "https://github.com/ansible/ansible/pull/82761", "file_loc": {"base_commit": "d9e798b48f62fdc2b604a84c36eb83c985f87754", "files": [{"path": "lib/ansible/plugins/cache/__init__.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [30]}, "('BaseFileCacheModule', 'set', 154)": {"add": [166]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["lib/ansible/plugins/cache/__init__.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "AUTOMATIC1111", "repo_name": "stable-diffusion-webui", "base_commit": "98947d173e3f1667eba29c904f681047dea9de90", "iss_has_pr": 1, "iss_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/6010", "iss_label": "bug-report", "title": "[Bug]: Extension Updates Overwrite with a git reset --hard", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues and checked the recent builds/commits\n\n### What happened?\n\nI can't rely on users' config files not being overwritten. If I use `install.py` to rename them, `install.py` does not run until the next cold boot. This causes the extension to not run when first installed. I can probably come up with another workaround, like hardcoding the modifications in the app's script. 
\r\nI shouldn't be having to try workarounds when someone else can't bother to just chmod their files.\r\n\r\nhttps://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/4646#issuecomment-1364629164\n\n### Steps to reproduce the problem\n\nIt's in the code:\r\n\r\n![image](https://user-images.githubusercontent.com/9631031/209453808-85c862f3-3802-4961-a082-09a6fcde823a.png)\r\n\n\n### What should have happened?\n\nConfig files to run the extensions should not be overwritten.\n\n### Commit where the problem happens\n\ncurrent to aee611adb874fbabcdeea154a35908ae1f9a4bbf\n\n### What platforms do you use to access UI ?\n\nWindows\n\n### What browsers do you use to access the UI ?\n\nMozilla Firefox, Google Chrome\n\n### Command Line Arguments\n\n_No response_\n\n### Additional information, context and logs\n\nhttps://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/4646 \r\nhttps://github.com/Gerschel/sd_web_ui_preset_utils/issues/23", "pr_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/4646", "file_loc": {"base_commit": "98947d173e3f1667eba29c904f681047dea9de90", "files": [{"path": "modules/extensions.py", "status": "modified", "Loc": {"('Extension', None, 17)": {"mod": [68]}, "('Extension', 'pull', 68)": {"mod": [70]}}}, {"path": "modules/ui_extensions.py", "status": "modified", "Loc": {"(None, 'apply_and_restart', 23)": {"mod": [39, 41]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["modules/extensions.py", "modules/ui_extensions.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ansible", "repo_name": "ansible", "base_commit": "f8d20f970f16806aee1ef555f9f2db115cec7f34", "iss_has_pr": 1, "iss_html_url": "https://github.com/ansible/ansible/issues/36293", "iss_label": "cloud\naws\nmodule\naffects_2.4\nsupport:core\nbug", "title": "Add support for Timeout (--timeout-in-minutes) parameter in Cloudformation module", "body": "##### ISSUE TYPE\r\n \r\n- Bug Report\r\n\r\n##### COMPONENT NAME\r\n\r\nCloudformation module\r\n\r\n##### ANSIBLE VERSION\r\n\r\n```\r\nansible 2.4.2.0\r\n config file = /etc/ansible/ansible.cfg\r\n configured module search path = [u'/home/ubuntu/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']\r\n ansible python module location = /usr/lib/python2.7/dist-packages/ansible\r\n executable location = /usr/bin/ansible\r\n python version = 2.7.12 (default, Nov 20 2017, 18:23:56) [GCC 5.4.0 20160609]\r\n\r\n```\r\n\r\n##### CONFIGURATION\r\n\r\nNo changes\r\n\r\n##### OS / ENVIRONMENT\r\n\r\nUbuntu 16.04\r\n\r\n##### SUMMARY\r\n\r\nI believe this is a bug, that Ansible Cloudformation module does not support important Timeout parameter (--timeout-in-minutes key in aws-cli 'create-stack' call).\r\n\r\n(Documentation:\r\nhttps://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-add-tags.html )\r\n\r\n##### STEPS TO REPRODUCE\r\n\r\nCheck module documentation\r\n\r\n##### EXPECTED RESULTS\r\n\r\nTimeout parameter is supported\r\n\r\n##### ACTUAL RESULTS\r\n\r\nTimeout parameter is not supported", "pr_html_url": "https://github.com/ansible/ansible/pull/36445", "file_loc": {"base_commit": "f8d20f970f16806aee1ef555f9f2db115cec7f34", "files": [{"path": "lib/ansible/modules/cloud/amazon/cloudformation.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [32, 209]}, "(None, 'create_stack', 300)": {"add": [306], "mod": [304, 
305]}, "(None, 'main', 535)": {"add": [544]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["lib/ansible/modules/cloud/amazon/cloudformation.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "abb31d0a7ca769a1e6406553a58a7fb0bd3b259a", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/4744", "iss_label": "Bug", "title": "Bug with using TreeClassifier with OOB score and sparse matrices", "body": "When using the ExtraTreesClassifier (and likely other classes that are derived from BaseTreeClassifier), there is a problem when using sparsematrices: `ValueError: X should be in csr_matrix format, got `.\n\nI tracked the issue down to the following lines:\n\nOn line 195 of forest.py the sparse matrix is changed to a csc matrix:\n`X = check_array(X, dtype=DTYPE, accept_sparse=\"csc\")`\n\nHowever on line 369 of forest.py, the following is call is made with `check_input=false`:\n`p_estimator = estimator.predict_proba(X[mask_indices, :], check_input=False)`\n\nThis leads to a ValueError in predict `ValueError: X should be in csr_matrix format, got `.\n\nChanging check_input to True seems to fix the issue. It's probably best to also include a test case for this problem, I just made a quick PR with only the False -> True fix.\n", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/4954", "file_loc": {"base_commit": "abb31d0a7ca769a1e6406553a58a7fb0bd3b259a", "files": [{"path": "doc/whats_new.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [114]}}}, {"path": "sklearn/ensemble/forest.py", "status": "modified", "Loc": {"('ForestClassifier', '_set_oob_score', 374)": {"add": [375]}, "('ForestRegressor', '_set_oob_score', 659)": {"add": [660]}}}, {"path": "sklearn/ensemble/tests/test_forest.py", "status": "modified", "Loc": {"(None, 'test_oob_score', 261)": {"add": [264]}, "(None, None, None)": {"add": [270]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["sklearn/ensemble/forest.py"], "doc": ["doc/whats_new.rst"], "test": ["sklearn/ensemble/tests/test_forest.py"], "config": [], "asset": []}}, {"organization": "fastapi", "repo_name": "fastapi", "base_commit": "543ef7753aff639ad3aed7c153e42f719e361d38", "iss_has_pr": 1, "iss_html_url": "https://github.com/fastapi/fastapi/issues/737", "iss_label": "bug\nanswered\nreviewed", "title": "dependency_overrides does not play well with scopes", "body": "**Describe the bug**\r\nWhen working with `Security()` dependencies, the scopes disappear when `app.dependency_overrides` is executed. 
The callable dealing with the scopes gets an empty list instead of the scopes.\r\n\r\n**To Reproduce**\r\n\r\n```python\r\nfrom fastapi import FastAPI, Header, Security, Depends\r\nfrom fastapi.security import SecurityScopes\r\n\r\nfrom starlette.testclient import TestClient\r\n\r\napp = FastAPI()\r\n\r\ndef get_user(required_scopes: SecurityScopes):\r\n print(required_scopes.scopes)\r\n\r\n return \"John Doe\"\r\n\r\ndef data():\r\n return [1,2,3]\r\n\r\ndef other_data():\r\n return [3,4,5]\r\n\r\n\r\n@app.get(\"/test\")\r\ndef test(user: str = Security(get_user, scopes=[\"foo\", \"bar\"]), data = Depends(data)):\r\n return data\r\n\r\nclient = TestClient(app)\r\nresponse = client.get(\"/test\")\r\n\r\napp.dependency_overrides[data] = other_data\r\nresponse = client.get(\"/test\")\r\n\r\n# prints: [\"foo\", \"bar\"] and [] instead of [\"foo\", \"bar\"] and [\"foo\", \"bar\"]\r\n```\r\n\r\n**Expected behavior**\r\nIn the above example I expect `get_user()` to print the same scopes twice. Instead, before the `dependency_overrides` it prints the correct scopes, but an empty list afterwards.\r\n\r\n**Environment:**\r\n - OS: Linux\r\n - FastAPI Version 0.43.0\r\n- Python 3.7.4\r\n", "pr_html_url": "https://github.com/fastapi/fastapi/pull/1549", "file_loc": {"base_commit": "543ef7753aff639ad3aed7c153e42f719e361d38", "files": [{"path": "fastapi/dependencies/utils.py", "status": "modified", "Loc": {"(None, 'solve_dependencies', 432)": {"add": [480]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["fastapi/dependencies/utils.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "1e95337f3aec4c12244802bb6e493b07b27aa795", "iss_has_pr": 1, "iss_html_url": "https://github.com/ultralytics/yolov5/issues/459", "iss_label": "bug", "title": "custom anchors get flushed when loading pretrain weights", "body": "Before submitting a bug report, please be aware that your issue **must be reproducible** with all of the following, otherwise it is non-actionable, and we can not help you:\r\n - **Current repo**: run `git fetch && git status -uno` to check and `git pull` to update repo\r\n - **Common dataset**: coco.yaml or coco128.yaml\r\n - **Common environment**: Colab, Google Cloud, or Docker image. See https://github.com/ultralytics/yolov5#reproduce-our-environment\r\n \r\nIf this is a custom dataset/training question you **must include** your `train*.jpg`, `test*.jpg` and `results.png` figures, or we can not help you. You can generate these with `utils.plot_results()`.\r\n\r\n\r\n## \ud83d\udc1b Bug\r\nin train.py, the anchors set by the user in the yaml file are flushed by the pretrained weights.\r\n```\r\n\r\n if weights.endswith('.pt'): # pytorch format\r\n ckpt = torch.load(weights, map_location=device) # load checkpoint\r\n\r\n # load model\r\n try:\r\n ckpt['model'] = {k: v for k, v in ckpt['model'].float().state_dict().items()\r\n if model.state_dict()[k].shape == v.shape} # to FP32, filter\r\n #print(ckpt['model'].keys())\r\n **#ckpt['model'].pop('model.27.anchors') \r\n #ckpt['model'].pop('model.27.anchor_grid')**\r\n \r\n model.load_state_dict(ckpt['model'], strict=False)\r\n except KeyError as e:\r\n s = \"%s is not compatible with %s. This may be due to model differences or %s may be out of date. 
\" \\\r\n \"Please delete or update %s and try again, or use --weights '' to train from scratch.\" \\\r\n % (opt.weights, opt.cfg, opt.weights, opt.weights)\r\n raise KeyError(s) from e\r\n\r\n```\r\n## To Reproduce (REQUIRED)\r\n\r\nInput:\r\nin ./model/yolov5x.yaml\r\nchange anchors' shape to any other than default.\r\n\r\nOutput:\r\nthe anchors set in yaml file didn't activated .\r\n\r\n\r\n## Expected behavior\r\nA clear and concise description of what you expected to happen.\r\n\r\n\r\n## Environment\r\nIf applicable, add screenshots to help explain your problem.\r\n\r\n - OS: [Ubuntu]\r\n - GPU [2080 Ti]\r\n\r\n\r\n## Additional context\r\nif the anchors set by user in yaml file, is more than 9 anchors, the bug didn't get triggered because it did not match the pretrain weight's anchors' shape.\r\n", "pr_html_url": "https://github.com/ultralytics/yolov5/pull/462", "file_loc": {"base_commit": "1e95337f3aec4c12244802bb6e493b07b27aa795", "files": [{"path": "train.py", "status": "modified", "Loc": {"(None, 'train', 46)": {"add": [132, 135], "mod": [134]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["train.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "65d7a9b9902ad85f27b17d759bd13b59c2afc474", "iss_has_pr": 1, "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/589", "iss_label": "", "title": "\"No API key provided\" - altough it is provided in the .env file", "body": "## Expected Behavior\r\n\r\nIf the OpenAI API key is provided in the .env file, it should be recognized and used.\r\n\r\n## Current Behavior\r\n\r\nRuntime error message: openai.error.AuthenticationError: No API key provided.\r\n\r\n### Steps to Reproduce\r\n\r\n1. Set the key in the .env file\r\n2. 
Run the app with gpt-engineer projects/my-new-project\r\n\r\n### Solution\r\n\r\nWhen I added the line `openai.api_key = os.getenv(\"OPENAI_API_KEY\")` to the end of the function `load_env_if_needed()` in the file `main.py`, as well as `import openai` at the beginning of this file _(thanks, engerlina, for the reminder)_, the issue was resolved.", "pr_html_url": "https://github.com/AntonOsika/gpt-engineer/pull/592", "file_loc": {"base_commit": "65d7a9b9902ad85f27b17d759bd13b59c2afc474", "files": [{"path": "gpt_engineer/main.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [5]}, "(None, 'load_env_if_needed', 19)": {"add": [21]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["gpt_engineer/main.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "57dc58123b98e2026025cc87bdee474bf0656dcb", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/4976", "iss_label": "bug\nWindows", "title": "Fix and document asyncio reactor problems on Windows", "body": "As described in https://twistedmatrix.com/trac/ticket/9766, you cannot just enable AsyncioSelectorReactor on Windows with recent Python: you either need a fixed Twisted (which is not released yet; the merged fix is https://github.com/twisted/twisted/pull/1338) or, supposedly, to add a manual fix as documented [here](https://github.com/twisted/twisted/blob/09b96850c2ebcb635f448ed3f9bbf5f157be3693/src/twisted/internet/asyncioreactor.py#L35-L44). So if it's possible to add this code to Scrapy we should probably do that, at least until the next Twisted release, and even after it we should document that a new enough Twisted is needed in this use case. 
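For context, a minimal sketch of the manual workaround referenced above (an editor's illustration based on the linked Twisted snippet, not the exact code that later landed in scrapy/utils/reactor.py):

```python
import asyncio
import sys

if sys.platform == "win32" and sys.version_info >= (3, 8):
    # Python 3.8+ defaults to the proactor event loop on Windows, which
    # AsyncioSelectorReactor cannot drive; force the selector loop instead.
    asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())

from twisted.internet import asyncioreactor

asyncioreactor.install(asyncio.get_event_loop())
```

The key point is that the loop policy must be set before the reactor is installed; once AsyncioSelectorReactor has grabbed a proactor loop, it fails with the error described in the Twisted ticket.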
", "pr_html_url": "https://github.com/scrapy/scrapy/pull/5315", "file_loc": {"base_commit": "57dc58123b98e2026025cc87bdee474bf0656dcb", "files": [{"path": ".github/workflows/tests-windows.yml", "status": "modified", "Loc": {"(None, None, None)": {"add": [25]}}}, {"path": "docs/topics/asyncio.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [38]}}}, {"path": "scrapy/utils/reactor.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [1]}, "(None, 'install_reactor', 53)": {"add": [59]}}}, {"path": "tests/CrawlerProcess/asyncio_enabled_reactor.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [1, 3]}}}, {"path": "tests/test_commands.py", "status": "modified", "Loc": {"('RunSpiderCommandTest', None, 557)": {"mod": [677, 678, 679, 702, 703, 704]}, "('RunSpiderCommandTest', 'test_custom_asyncio_loop_enabled_false', 705)": {"mod": [710]}}}, {"path": "tests/test_crawler.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [7]}, "('CrawlerRunnerHasSpider', None, 231)": {"mod": [287, 288, 289]}, "('CrawlerProcessSubprocess', None, 323)": {"mod": [331, 332, 333, 339, 340, 341, 380, 381, 382, 407, 408, 409]}}}, {"path": "tests/test_downloader_handlers.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [3]}, "('HttpTestCase', None, 209)": {"add": [289, 298]}, "('FTPTestCase', None, 1055)": {"add": [1057]}}}, {"path": "tests/test_utils_asyncio.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1, 2, 3]}, "('AsyncioTest', None, 11)": {"mod": [17, 18, 19]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["scrapy/utils/reactor.py", "tests/CrawlerProcess/asyncio_enabled_reactor.py"], "doc": ["docs/topics/asyncio.rst"], "test": ["tests/test_utils_asyncio.py", "tests/test_crawler.py", "tests/test_commands.py", "tests/test_downloader_handlers.py"], "config": [".github/workflows/tests-windows.yml"], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "9b4dfa195e3f23d81389745c26bff8e0087e74b0", "iss_html_url": "https://github.com/pandas-dev/pandas/issues/22046", "iss_label": "Bug\nIndexing", "title": "Replacing multiple columns (or just one) with iloc does not work", "body": "#### Code Sample, a copy-pastable example if possible\r\n\r\n```python\r\nimport pandas\r\n\r\ncolumns = pandas.DataFrame({'a2': [11, 12, 13], 'b2': [14, 15, 16]})\r\ninputs = pandas.DataFrame({'a1': [1, 2, 3], 'b1': [4, 5, 6], 'c1': [7, 8, 9]})\r\n\r\ninputs.iloc[:, [1]] = columns.iloc[:, [0]]\r\n\r\nprint(inputs)\r\n```\r\n\r\n#### Problem description\r\n\r\nI have a code which is replacing a set of columns with another set of columns, based on column indices. To make things done without a special case, I assumes I could just use `iloc` to both select and set columns in a DataFrame. 
But it seems that this does not work and fails in strange ways.\r\n\r\n#### Expected Output\r\n\r\n```\r\n a1 b1 c1\r\n0 1 11 7\r\n1 2 12 8\r\n2 3 13 9\r\n```\r\n\r\nBut in reality, you get:\r\n\r\n```\r\n a1 b1 c1\r\n0 1.0 NaN 7.0\r\n1 2.0 NaN 8.0\r\n2 3.0 NaN 9.0\r\n```\r\n\r\nSee how the values were converted to float and how the column is all `NaN`s?\r\n\r\nBut if I do the following, I get the expected results:\r\n\r\n```\r\ninputs.iloc[:, [1]] = [[11], [12], [13]]\r\n```\r\n\r\nThis also works:\r\n\r\n```\r\ninputs.iloc[:, [1]] = columns.iloc[:, [0]].values\r\n```\r\n\r\nSo if it works with lists and ndarrays, one would assume it would also work with DataFrames themselves. But it does not.\r\n\r\n#### Output of ``pd.show_versions()``\r\n\r\n
    \r\n\r\nINSTALLED VERSIONS\r\n------------------\r\ncommit: None\r\npython: 3.6.3.final.0\r\npython-bits: 64\r\nOS: Linux\r\nOS-release: 4.13.0-46-generic\r\nmachine: x86_64\r\nprocessor: x86_64\r\nbyteorder: little\r\nLC_ALL: None\r\nLANG: en_US.UTF-8\r\nLOCALE: en_US.UTF-8\r\n\r\npandas: 0.23.3\r\npytest: None\r\npip: 18.0\r\nsetuptools: 40.0.0\r\nCython: None\r\nnumpy: 1.15.0\r\nscipy: None\r\npyarrow: None\r\nxarray: None\r\nIPython: None\r\nsphinx: None\r\npatsy: None\r\ndateutil: 2.7.3\r\npytz: 2018.5\r\nblosc: None\r\nbottleneck: None\r\ntables: None\r\nnumexpr: None\r\nfeather: None\r\nmatplotlib: None\r\nopenpyxl: None\r\nxlrd: None\r\nxlwt: None\r\nxlsxwriter: None\r\nlxml: None\r\nbs4: None\r\nhtml5lib: None\r\nsqlalchemy: None\r\npymysql: None\r\npsycopg2: None\r\njinja2: None\r\ns3fs: None\r\nfastparquet: None\r\npandas_gbq: None\r\npandas_datareader: None\r\n\r\n
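Editor's aside on the report above: the NaN result comes from pandas aligning on column labels when the right-hand side is a DataFrame ('a2' never matches 'b1'), while lists and ndarrays carry no labels to align. A minimal sketch of the workaround the reporter found:

```python
import pandas as pd

columns = pd.DataFrame({'a2': [11, 12, 13], 'b2': [14, 15, 16]})
inputs = pd.DataFrame({'a1': [1, 2, 3], 'b1': [4, 5, 6], 'c1': [7, 8, 9]})

# Assigning a DataFrame aligns on column labels; since 'a2' does not match
# 'b1', every aligned cell is NaN. Assigning the raw ndarray skips alignment.
inputs.iloc[:, [1]] = columns.iloc[:, [0]].values
print(inputs)  # b1 is now 11, 12, 13 as expected
```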
    \r\n", "code": null, "pr_html_url": "https://github.com/pandas-dev/pandas/pull/37728", "commit_html_url": null, "file_loc": {"base_commit": "9b4dfa195e3f23d81389745c26bff8e0087e74b0", "files": [{"path": "doc/source/whatsnew/v1.2.0.rst", "status": "modified", "Loc": {"(None, None, 591)": {"add": [591]}}}, {"path": "pandas/core/indexing.py", "status": "modified", "Loc": {"('_LocationIndexer', '__setitem__', 675)": {"mod": [684]}, "('_iLocIndexer', None, 1322)": {"mod": [1520, 1631, 1717, 1790]}, "('_iLocIndexer', '_setitem_with_indexer', 1520)": {"mod": [1596, 1627, 1629]}, "('_iLocIndexer', '_setitem_with_indexer_split_path', 1631)": {"mod": [1645, 1660]}, "('_iLocIndexer', '_setitem_with_indexer_frame_value', 1717)": {"mod": [1727]}, "('_iLocIndexer', '_setitem_single_block', 1790)": {"mod": [1819, 1825]}, "('_iLocIndexer', '_setitem_with_indexer_missing', 1836)": {"mod": [1857]}}}, {"path": "pandas/tests/frame/indexing/test_setitem.py", "status": "modified", "Loc": {"('TestDataFrameSetItem', None, 24)": {"mod": [292, 293, 294, 295, 296, 297, 298, 299]}}}, {"path": "pandas/tests/indexing/test_iloc.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [803]}, "('TestILocSeries', 'test_iloc_getitem_nonunique', 966)": {"add": [968]}}}, {"path": "pandas/tests/indexing/test_indexing.py", "status": "modified", "Loc": {"('TestMisc', 'test_rhs_alignment', 668)": {"mod": [671, 690, 696, 697, 700, 703, 707]}, "('TestMisc', 'run_tests', 671)": {"mod": [678, 682, 686]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["pandas/core/indexing.py"], "doc": ["doc/source/whatsnew/v1.2.0.rst"], "test": ["pandas/tests/frame/indexing/test_setitem.py", "pandas/tests/indexing/test_indexing.py", "pandas/tests/indexing/test_iloc.py"], "config": [], "asset": []}}, {"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "5446c7e490e7203c61b2ff31181551b2c0f4a86b", "iss_html_url": "https://github.com/geekan/MetaGPT/issues/1430", "iss_label": "", "title": "DO NOT FORCE VALIDATE '{'Required Python packages'}' by default", "body": "**Bug description**\r\n`metagpt\\actions\\action_node.py\", line 432, in _aask_v1\r\n instruct_content = output_class(**parsed_data)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"..........\\Lib\\site-packages\\pydantic\\main.py\", line 171, in __init__\r\n self.__pydantic_validator__.validate_python(data, self_instance=self)\r\npydantic_core._pydantic_core.ValidationError: 1 validation error for PM_NODE_AN\r\n Value error, Missing fields: {'Required Python packages'} `\r\n\r\n**Bug solved method**\r\nDO NOT VALIDATE THIS FIELD. user may ask the agents to do non py related stuff,why would we force this validate and introduce a hard error? 
Seems silly.\r\n\r\n**Environment information**\r\nirrelevant\r\n\r\n- LLM type and model name:\r\n- MetaGPT version or branch:0.8.1\r\n\r\n\r\n**Screenshots or logs**\r\n`action_node.py\", line 432, in _aask_v1\r\n instruct_content = output_class(**parsed_data)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \".........\\Lib\\site-packages\\pydantic\\main.py\", line 171, in __init__\r\n self.__pydantic_validator__.validate_python(data, self_instance=self)\r\npydantic_core._pydantic_core.ValidationError: 1 validation error for PM_NODE_AN\r\n Value error, Missing fields: {'Required Python packages'} [type=value_error, input_value={'Required Rust packages'...ption for backup data.'}, input_type=dict]\r\n For further information visit https://errors.pydantic.dev/2.6/v/value_error\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):`", "code": null, "pr_html_url": "https://github.com/FoundationAgents/MetaGPT/pull/1435", "commit_html_url": null, "file_loc": {"base_commit": "5446c7e490e7203c61b2ff31181551b2c0f4a86b", "files": [{"path": "metagpt/actions/design_api_an.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [47], "mod": [8, 50, 69]}}}, {"path": "metagpt/actions/project_management_an.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [8, 14]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["metagpt/actions/design_api_an.py", "metagpt/actions/project_management_an.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "5425557efe30863267f805851f918124191e0be0", "iss_has_pr": 1, "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/447", "iss_label": "dependencies", "title": "Pytorch synthesizer", "body": "Splitting this off from #370, which will remain for tensorflow2 conversion. I would prefer this route if we can get it to work. 
Asking for help from the community on this one.\r\n\r\nOne example of a pytorch-based tacotron is: https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/SpeechSynthesis/Tacotron2\r\n\r\nAnother option is to manually convert the code and pretrained models which would be extremely time-consuming, but also an awesome learning experience.", "pr_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/pull/472", "file_loc": {"base_commit": "5425557efe30863267f805851f918124191e0be0", "files": [{"path": "README.md", "status": "modified", "Loc": {"(None, None, None)": {"mod": [18, 23, 24, 65, 66, 68, 70]}}}, {"path": "demo_cli.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [13, 43, 162], "mod": [24, 25, 26, 30, 31, 32, 70, 76]}}}, {"path": "demo_toolbox.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [5, 32], "mod": [23, 24, 25]}}}, {"path": "encoder/audio.py", "status": "modified", "Loc": {"(None, 'preprocess_wav', 19)": {"mod": [20, 43, 44]}}}, {"path": "requirements.txt", "status": "modified", "Loc": {"(None, None, None)": {"add": [16], "mod": [1]}}}, {"path": "requirements_gpu.txt", "status": "removed", "Loc": {}}, {"path": "synthesizer/LICENSE.txt", "status": "modified", "Loc": {"(None, None, None)": {"add": [3, 4]}}}, {"path": "synthesizer/audio.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [4]}}}, {"path": "synthesizer/feeder.py", "status": "removed", "Loc": {}}, {"path": "synthesizer/hparams.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [348], "mod": [1, 3, 5, 6, 7, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 20, 21, 22, 23, 24, 25, 26, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 39, 40, 41, 42, 44, 45, 46, 47, 48, 49, 50, 51, 52, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 105, 106, 107, 108, 109, 110, 111, 113, 114, 115, 116, 117, 119, 121, 122, 123, 124, 125, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 143, 144, 145, 146, 147, 149, 150, 151, 152, 153, 154, 155, 157, 158, 159, 160, 161, 162, 164, 165, 166, 167, 168, 169, 170, 172, 174, 175, 176, 177, 178, 180, 181, 182, 183, 184, 185, 186, 187, 189, 190, 191, 192, 193, 194, 196, 197, 198, 199, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 231, 232, 233, 234, 235, 237, 238, 239, 240, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 255, 256, 257, 258, 259, 260, 261, 262, 264, 265, 266, 267, 269, 270, 271, 272, 273, 274, 275, 276, 278, 279, 280, 281, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 308, 309, 310, 311, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 338, 339, 342, 343, 344, 345, 347]}, "(None, 'hparams_debug_string', 350)": {"mod": [351, 352, 353]}}}, {"path": "synthesizer/inference.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [6], "mod": [1, 2, 3, 4, 5, 9, 11]}, "('Synthesizer', '__init__', 19)": {"add": [33], "mod": [21, 22, 24, 25, 26, 27, 28, 29, 30, 31, 32, 35, 36, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 49, 50, 51, 52, 53, 54, 55, 56, 57, 59]}, "('Synthesizer', 'griffin_lim', 149)": {"add": [154]}, "('Synthesizer', None, 15)": {"mod": [19, 106, 107, 108, 109, 110, 111, 113, 114, 
116, 117, 118, 119, 121]}, "('Synthesizer', 'is_loaded', 61)": {"mod": [63]}, "('Synthesizer', 'load', 67)": {"mod": [69, 70, 71, 72, 73, 74, 75]}, "('Synthesizer', 'synthesize_spectrograms', 77)": {"mod": [91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 104]}}}, {"path": "synthesizer/infolog.py", "status": "removed", "Loc": {}}, {"path": "synthesizer/models/__init__.py", "status": "removed", "Loc": {}}, {"path": "synthesizer/models/architecture_wrappers.py", "status": "removed", "Loc": {}}, {"path": "synthesizer/models/attention.py", "status": "removed", "Loc": {}}, {"path": "synthesizer/models/custom_decoder.py", "status": "removed", "Loc": {}}, {"path": "synthesizer/models/helpers.py", "status": "removed", "Loc": {}}, {"path": "synthesizer/models/modules.py", "status": "removed", "Loc": {}}, {"path": "synthesizer/models/tacotron.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [11], "mod": [1, 2, 3, 4, 5, 6, 7, 8, 9]}, "(None, 'split_func', 14)": {"mod": [14, 15, 16, 17, 18, 19, 20, 21, 24, 25, 26, 28, 29, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 65, 66, 67, 68, 69, 70, 71, 73, 74, 75, 76, 77, 79, 81, 82, 84, 86, 87, 88, 89, 90, 91, 93, 94, 95, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 108, 109, 110, 111, 113, 114, 115, 116, 117, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 130, 131, 132, 134, 135, 136, 137, 139, 140, 141, 142, 143, 145, 147, 148, 151, 153, 154, 155, 156, 157, 158, 160, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 190, 191, 192, 193, 194, 195, 196, 198, 199, 200, 201, 202, 203, 205, 206, 207, 209, 210, 212, 213, 214, 215, 216, 217, 218, 220, 221, 222, 223, 225, 226, 228, 229, 230, 232, 233, 234, 235, 237, 238, 240, 241, 242, 243, 244, 245, 246, 247, 249, 250, 252, 253, 254, 256, 257, 259, 260, 261, 263, 264, 265, 266, 267, 268, 269, 270, 271, 273, 274, 275, 277, 278, 279, 280, 281, 282, 283, 284, 286, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 307, 308, 309, 312, 313, 314, 316, 317, 318, 319, 320, 321, 323, 324, 325, 326, 327, 328, 330, 331, 333, 334, 335, 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 369, 370, 371, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 385, 386, 387, 388, 389, 390, 391, 392, 394, 395, 396, 397, 398, 399, 400, 402, 403, 404, 405, 406, 407, 409, 410, 412, 413, 414, 415, 416, 417, 418, 420, 421, 422, 423, 424, 425, 427, 428, 429, 430, 431, 432, 433, 435, 436, 437, 439, 441, 442, 443, 444, 445, 446, 447, 448, 449, 451, 452, 454, 455, 456, 457, 458, 459, 460, 461, 462, 464, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 479, 480, 481, 483, 484, 485, 486, 487, 488, 489, 491, 492, 493, 494, 495, 497, 498, 499, 501, 502, 504, 505, 507, 508, 509, 510, 512, 513, 514, 515, 516, 517, 518, 520, 521]}}}, {"path": "synthesizer/preprocess.py", "status": "modified", "Loc": {"(None, 'process_utterance', 185)": {"add": [204]}}}, {"path": "synthesizer/synthesize.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [82], "mod": [1, 3, 4, 6, 7]}, "(None, 'run_eval', 10)": {"mod": [10, 11, 12, 14, 15, 16, 17, 18, 20, 21, 23, 24, 25, 27, 28, 29, 30, 31, 32, 34, 35, 36, 37]}, "(None, 'run_synthesis', 39)": {"mod": [40, 41, 42, 43, 45, 46, 47, 48, 50, 51, 52, 53, 54, 55, 
57, 58, 59, 60, 61, 62, 64, 65, 66, 67, 69, 70, 71, 72, 73, 74, 75, 77, 78, 80, 81]}}}, {"path": "synthesizer/tacotron2.py", "status": "removed", "Loc": {}}, {"path": "synthesizer/train.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0, 79, 83], "mod": [3, 4, 5, 6, 7, 9, 10, 12, 14, 16, 19, 20, 21, 22, 24, 25, 26, 27, 28, 29, 31, 32, 35, 36, 37, 38, 39, 40, 41, 43, 44, 45, 46, 47, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78]}, "(None, 'model_train_mode', 85)": {"mod": [85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 130, 131, 133, 134, 135, 136, 138, 139, 141, 142, 143, 144, 146, 147, 148, 149]}, "(None, 'train', 110)": {"mod": [151, 152, 153, 154, 155, 156, 157, 159, 161, 167, 169, 171, 172, 173, 174, 176, 177, 178, 179, 181, 183, 184, 185, 186, 187, 189, 190, 191, 192, 194, 195, 196, 198, 199, 201, 202, 204, 205, 207, 208, 210, 212, 213, 214, 215, 216, 218, 219, 220, 222, 223, 224, 226, 227, 228, 230, 231, 232, 233, 234, 235, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 260, 261, 262, 263, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 283, 284, 285, 286, 288, 289, 290, 291, 292, 293, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 313, 314, 315, 316, 317, 318, 319, 320, 322, 323, 324, 325, 327, 328, 329, 330, 332, 333, 334, 335, 336, 337, 338, 339, 341, 342, 343, 344, 346, 347, 348, 349, 350, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 370, 371, 372, 374, 375, 376, 377, 378, 379, 381, 382, 383, 385, 386, 387, 388, 391, 392]}}}, {"path": "synthesizer/utils/__init__.py", "status": "modified", "Loc": {"('ValueWindow', None, 1)": {"add": [0]}}}, {"path": "synthesizer_train.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [2, 4, 6, 9, 10, 11, 12, 13, 14, 15, 16, 21, 22, 23, 24, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 53, 55]}}}, {"path": "toolbox/__init__.py", "status": "modified", "Loc": {"('Toolbox', 'init_encoder', 325)": {"add": [333]}, "('Toolbox', None, 42)": {"mod": [43]}, "('Toolbox', '__init__', 43)": {"mod": [54]}, "('Toolbox', 'synthesize', 207)": {"mod": [211, 212, 213, 214, 215, 216, 217, 221, 224, 228]}, "('Toolbox', 'vocode', 237)": {"mod": [243]}}}, {"path": "toolbox/ui.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [41]}, "('UI', None, 53)": {"mod": [331]}, "('UI', 'populate_models', 338)": {"mod": [347, 348, 349, 350, 351, 352, 353]}}}, {"path": "vocoder_preprocess.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [32, 40], "mod": [20]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["synthesizer/models/modules.py", "synthesizer/models/tacotron.py", "synthesizer/train.py", "synthesizer/models/attention.py", "synthesizer_train.py", "demo_cli.py", "toolbox/__init__.py", "demo_toolbox.py", "synthesizer/models/architecture_wrappers.py", "synthesizer/audio.py", "synthesizer/preprocess.py", "synthesizer/tacotron2.py", "synthesizer/hparams.py", "synthesizer/utils/__init__.py", "synthesizer/synthesize.py", "toolbox/ui.py", 
"encoder/audio.py", "synthesizer/feeder.py", "synthesizer/models/helpers.py", "synthesizer/models/__init__.py", "synthesizer/inference.py", "vocoder_preprocess.py", "synthesizer/models/custom_decoder.py", "synthesizer/infolog.py"], "doc": ["synthesizer/LICENSE.txt", "README.md"], "test": [], "config": ["requirements_gpu.txt", "requirements.txt"], "asset": []}}, {"organization": "OpenInterpreter", "repo_name": "open-interpreter", "base_commit": "3c922603c0a7d1ad4113245a3d2bcd23bf4b1619", "iss_has_pr": 1, "iss_html_url": "https://github.com/OpenInterpreter/open-interpreter/issues/875", "iss_label": "Bug", "title": "NameError: name 'computer' is not defined ", "body": "### Describe the bug\n\nWhen I run `interpreter --os`\r\n\r\nAnd then attempt a command like:\r\n`Play a boiler room set on youtube`\r\n\r\nI get a `NameError`:\r\n\r\n```\r\n\u258c OS Control enabled \r\n\r\nTo find items on the screen, Open Interpreter has been instructed to send screenshots to api.openinterpreter.com (we do not store them). Add --offline to attempt this locally. \r\n\r\nMake sure that screen recording permissions are enabled for your Terminal or Python environment. \r\n\r\nWarning: In this mode, Open Interpreter will not require approval before performing actions. Be ready to close your terminal. \r\n\r\n> Play a boiler room set on youtube\r\n \r\n The first step is to open YouTube. We can do this by using a keyboard hotkey to open Spotlight and then typing \"Safari\" to open the browser. Once Safari is open, we can navigate \r\n to YouTube by writing the URL in the address bar. Let's start by opening Safari. \r\n \r\nExpecting value: line 1 column 1 (char 0)\r\nContinuing.\r\n\r\n \r\n computer.keyboard.hotkey(\"space\", \"command\") # Opens spotlight \r\n computer.keyboard.write(\"Safari\") \r\n computer.keyboard.press(\"enter\") # To open Safari \r\n \r\n \r\n --------------------------------------------------------------------------- \r\n NameError Traceback (most recent call last) \r\n Input In [4], in () \r\n 1 print('##active_line1##') \r\n ----> 2 computer.keyboard.hotkey('space', 'command') \r\n 3 print('##active_line2##') \r\n 4 computer.keyboard.write('Safari') \r\n \r\n NameError: name 'computer' is not defined \r\n \r\n \r\n It seems there was a mistake in my attempt to run the code. \"computer\" is not recognized as defined within the current execution environment. Let's correct this by using the \r\n appropriate syntax to execute the code on your machine. We will try to open Safari using the correct approach. Let's run the correct code. \r\n \r\nExpecting value: line 1 column 1 (char 0)\r\nContinuing.\r\n\r\n \r\n # Opens spotlight and types Safari to open the browser \r\n computer.keyboard.hotkey(\"space\", \"command\") \r\n computer.keyboard.write(\"Safari\") \r\n computer.keyboard.press(\"enter\") \r\n \r\n \r\n --------------------------------------------------------------------------- \r\n NameError Traceback (most recent call last) \r\n Input In [7], in () \r\n 2 pass \r\n 3 print('##active_line2##') \r\n ----> 4 computer.keyboard.hotkey('space', 'command') \r\n 5 print('##active_line3##') \r\n 6 computer.keyboard.write('Safari') \r\n \r\n NameError: name 'computer' is not defined \r\n```\r\n\r\nAnd it just gets stuck in this loop where computer is not defined.\r\n\r\n\r\n \n\n### Reproduce\n\n1. `interpreter --os` \r\n2. 
`Play a boiler room set on youtube`\r\n\n\n### Expected behavior\n\nFor it to be able to open Safari or my default web browser without a `NameError` about `computer`.\n\n### Screenshots\n\n_No response_\n\n### Open Interpreter version\n\n0.2.0\n\n### Python version\n\n3.9.6\n\n### Operating System name and version\n\nmacOS 14.0\n\n### Additional context\n\nI have 2 python versions installed, 3.9.6 and 3.10.8. I installed interpreter on both. ", "pr_html_url": "https://github.com/OpenInterpreter/open-interpreter/pull/937", "file_loc": {"base_commit": "3c922603c0a7d1ad4113245a3d2bcd23bf4b1619", "files": [{"path": "interpreter/core/computer/terminal/terminal.py", "status": "modified", "Loc": {"('Terminal', 'run', 36)": {"mod": [40]}}}, {"path": "interpreter/terminal_interface/start_terminal_interface.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [4]}, "(None, 'start_terminal_interface', 19)": {"mod": [303, 544, 545, 546, 548, 593, 603, 608, 633]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["interpreter/core/computer/terminal/terminal.py", "interpreter/terminal_interface/start_terminal_interface.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "ad14f0e49929d426560413c0b9de19986cbeac9e", "iss_has_pr": 1, "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/461", "iss_label": "bug", "title": "SileroTTS creates new audio file for each token", "body": "### Describe the bug\r\n\r\nI've just performed a fresh install to confirm this.\r\n\r\nUnless I turn on no-stream, SileroTTS will attempt to create an audio file for each word / token. \r\n\r\nSilero should not attempt to create audio until the response is complete.\r\n\r\nThe Silero extension output directory is being filled up with audio clips that each only add one word to the previous file. Is this known to be broken like this?\r\n\r\nTurning off streaming works, but it means that text streaming doesn't work. Is there a way to turn off streaming for Silero only?\r\n\r\n\r\n\r\n### Is there an existing issue for this?\r\n\r\n- [X] I have searched the existing issues\r\n\r\n### Reproduction\r\n\r\n1. Enable Silero Extension\r\n2. Disable Auto Play\r\n3. 
Start Chat\r\n\r\n### Screenshot\r\n\r\n_No response_\r\n\r\n### Logs\r\n\r\n```shell\r\nN/A\r\n```\r\n\r\n\r\n### System Info\r\n\r\n```shell\r\nWindows 11 / Firefox or Edge\r\n```\r\n", "pr_html_url": "https://github.com/oobabooga/text-generation-webui/pull/192", "file_loc": {"base_commit": "ad14f0e49929d426560413c0b9de19986cbeac9e", "files": [{"path": "extensions/silero_tts/script.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0, 5, 14, 35], "mod": [10, 18]}, "(None, 'input_modifier', 36)": {"add": [41]}, "(None, 'output_modifier', 44)": {"add": [59, 65, 67], "mod": [49, 69, 70, 72, 73]}, "(None, 'ui', 86)": {"add": [92, 93], "mod": [88, 89]}}}, {"path": "modules/shared.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [13]}}}, {"path": "modules/text_generation.py", "status": "modified", "Loc": {"(None, 'generate_reply', 88)": {"add": [189, 202, 205, 219, 224], "mod": [199, 216]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "3", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["extensions/silero_tts/script.py", "modules/shared.py", "modules/text_generation.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "896256ee02273bebf723428ee41cab31930a69f4", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/41423", "iss_label": "Docs\ngood first issue", "title": "DOC: pandas.Series(data=None, index=None, dtype=None, name=None, copy=False, fastpath=False)", "body": "No proper information on \"copy\" is present under [Documentation](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.html)", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/41514", "file_loc": {"base_commit": "896256ee02273bebf723428ee41cab31930a69f4", "files": [{"path": "pandas/core/series.py", "status": "modified", "Loc": {"('Series', None, 194)": {"add": [253], "mod": [226]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["pandas/core/series.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "917acaa4524e0195c52a636fccf6a0de4eedd37b", "iss_has_pr": 1, "iss_html_url": "https://github.com/deepfakes/faceswap/issues/1170", "iss_label": "docker", "title": "CUDA version incorrect in Dockerfile.gpu ", "body": "The Dockerfile.gpu doesn't work for me. The build doesn't use the GPU at all.\r\n\r\nI found that tensorflow cannot find the shared library file libXXXX.so.11.0 (If I remember correctly, it's libcudart.so.11.0). I realize that the tensorflow version installed needs CUDA 11.0. But the original Dockerfile.gpu installs CUDA 10.1 (a quick way to check the required version is sketched below). 
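Editor's sketch for the check mentioned above: on TF 2.x GPU wheels, `tf.sysconfig.get_build_info()` reports which CUDA and cuDNN versions the installed TensorFlow was built against, so the Docker base image can be matched to them (CPU-only wheels may not carry these keys):

```python
# Print the CUDA/cuDNN versions the installed TensorFlow was built against.
import tensorflow as tf

info = tf.sysconfig.get_build_info()
print(info["cuda_version"])   # e.g. "11.0"
print(info["cudnn_version"])  # e.g. "8"
```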
\r\n\r\nIf someone had similar issue, please modify the Dockerfile with:\r\n\r\nFROM nvidia/cuda:11.0.3-cudnn8-devel-ubuntu16.04\r\n", "pr_html_url": "https://github.com/deepfakes/faceswap/pull/1232", "file_loc": {"base_commit": "917acaa4524e0195c52a636fccf6a0de4eedd37b", "files": [{"path": "Dockerfile.gpu", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1, 22]}}}, {"path": "INSTALL.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [39, 279, 285], "mod": [237, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 251, 252, 254, 255, 257, 258, 259, 260, 261, 262, 264, 265, 266, 267, 268, 269, 270, 272, 281, 282, 283, 284, 287]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": [], "doc": ["INSTALL.md"], "test": [], "config": ["Dockerfile.gpu"], "asset": []}}, {"organization": "All-Hands-AI", "repo_name": "OpenHands", "base_commit": "9908e1b28525fe96394446be95fcb00785d0ca0c", "iss_has_pr": 1, "iss_html_url": "https://github.com/All-Hands-AI/OpenHands/issues/5365", "iss_label": "bug", "title": "[Bug]: Editing Error \"No replacement was performed\" is not informative enough", "body": "### Is there an existing issue for the same bug?\n\n- [X] I have checked the existing issues.\n\n### Describe the bug and reproduction steps\n\nThe agent got this error:\r\n```\r\nERROR:\r\nNo replacement was performed. Multiple occurrences of old_str ` output_path = Path.joinpath(self._output_dir, \"recipe_state.pt\")\r\n torch.save(state_dict, output_path)\r\n logger.info(\r\n \"Recipe checkpoint of size \"\r\n f\"{os.path.getsize(output_path) / 1000**3:.2f} GB \"\r\n f\"saved to {output_path}\"\r\n )` in lines []. Please ensure it is unique.\r\n```\r\n\r\n`in lines []. Please ensure it is unique.` does look right? 
Should we give out the specific line number?\r\n\r\n\r\nFull trajectory: https://www.all-hands.dev/share?share_id=7c05665906ffb699d93426129b1ee8c50c3cc5c7dcb5e164de9c54f6468e7876\r\n\r\ncc @ryanhoangt\n\n### OpenHands Installation\n\nDocker command in README\n\n### OpenHands Version\n\n_No response_\n\n### Operating System\n\nNone\n\n### Logs, Errors, Screenshots, and Additional Context\n\n_No response_", "pr_html_url": "https://github.com/All-Hands-AI/OpenHands/pull/5397", "file_loc": {"base_commit": "9908e1b28525fe96394446be95fcb00785d0ca0c", "files": [{"path": "openhands/runtime/action_execution_server.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [11, 13]}, "('ActionExecutor', 'run_ipython', 178)": {"add": [201]}}}, {"path": "poetry.lock", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1, 5486, 5491, 5492, 10090]}}}, {"path": "pyproject.toml", "status": "modified", "Loc": {"(None, None, None)": {"mod": [67]}}}, {"path": "tests/unit/test_agent_skill.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [720, 723, 724, 725, 726, 727, 728, 729, 730, 731, 732, 733, 735, 737, 738, 739, 740, 741, 742, 743, 744, 745, 746, 747, 748, 749, 750, 752, 753, 754, 755, 756, 757, 758, 759, 760, 761, 762, 763, 764, 765, 768, 769, 770, 771, 772, 773, 775, 777, 778, 779, 780, 781, 782, 783, 784, 786, 787, 788, 789, 790, 792, 793, 794, 795, 796, 797, 798, 799, 800, 801]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["openhands/runtime/action_execution_server.py"], "doc": [], "test": ["tests/unit/test_agent_skill.py"], "config": ["poetry.lock", "pyproject.toml"], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "fa78ea801392f4f0d37ea7ddbbfe44e9c8c102bd", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/49647", "iss_label": "Code Style\ngood first issue", "title": "STYLE place standard library imports at top of file", "body": "Imports should typically be placed at the top of files. Sometimes, imports are placed inside functions to:\r\n- avoid circular imports\r\n- avoid `ImportError` if it's an optional dependency\r\n\r\nStandard library imports should really always be at the top of files.\r\n\r\nNoticed in https://github.com/pandas-dev/pandas/pull/49645 that this is often not the case.\r\n\r\nI've made a script to automate detecting when this is the case. So the task is:\r\n```\r\ngit checkout -b standard-library-imports main\r\ngit pull git@github.com:MarcoGorelli/pandas.git standard-library-imports\r\ngit reset --hard FETCH_HEAD\r\npre-commit run stdlib-imports --all-files\r\n```\r\nThen, fix up any errors that are reported (a small example of this kind of fix is sketched below). 
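Editor's sketch of the kind of fixup this task produces; the file and function names here are invented for illustration, not taken from the pandas test suite:

```python
# Before: a standard library import buried inside the function body.
def parse_date_inline(raw):
    import datetime
    return datetime.datetime.strptime(raw, "%Y-%m-%d")

# After: the same import hoisted to the top of the module, where stdlib
# imports belong (no circular-import or optional-dependency concern applies).
import datetime

def parse_date(raw):
    return datetime.datetime.strptime(raw, "%Y-%m-%d")

print(parse_date("2022-12-08"))
```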
Finally, stage your changes, commit them, push them to your fork, and open a pull request.\r\n\r\nFeel free to reach out if you run into any issues along the way.\r\n\r\nIf anyone wants to take this, it would be a nice and welcome cleanup!\r\n\r\n---\r\n\r\nEDIT: after going through a PR, I'm not sure it's worth introducing a check for this - but we can still take some of the cleanups it found", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/50116", "file_loc": {"base_commit": "fa78ea801392f4f0d37ea7ddbbfe44e9c8c102bd", "files": [{"path": "pandas/tests/apply/test_series_apply.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [4]}, "(None, 'test_apply', 35)": {"mod": [40]}, "(None, 'test_map_decimal', 527)": {"mod": [528]}}}, {"path": "pandas/tests/arrays/test_datetimelike.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [2]}, "(None, 'array_likes', 1337)": {"mod": [1349, 1350]}}}, {"path": "pandas/tests/frame/indexing/test_indexing.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [5]}, "('TestDataFrameIndexing', 'test_setitem_ambig', 468)": {"mod": [470]}}}, {"path": "pandas/tests/frame/methods/test_to_records.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [1]}, "('TestDataFrameToRecords', 'test_to_records_with_Mapping_type', 60)": {"mod": [61, 62]}}}, {"path": "pandas/tests/frame/test_constructors.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0, 3, 4, 10]}, "('TestDataFrameConstructors', 'test_constructor_ordereddict', 468)": {"mod": [469]}, "('TestDataFrameConstructors', 'test_constructor_defaultdict', 719)": {"mod": [721]}, "('TestDataFrameConstructors', 'test_constructor_stdlib_array', 1343)": {"mod": [1346]}, "('TestDataFrameConstructors', 'test_constructor_list_of_namedtuples', 1545)": {"mod": [1547]}, "('TestDataFrameConstructors', 'test_constructor_list_of_dataclasses', 1560)": {"mod": [1562]}, "('TestDataFrameConstructors', 'test_constructor_list_of_dataclasses_with_varying_types', 1571)": {"mod": [1573]}, "('TestDataFrameConstructors', 'test_constructor_list_of_dataclasses_error_thrown', 1587)": {"mod": [1589]}}}, {"path": "pandas/tests/groupby/test_filters.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0]}, "(None, 'test_filter_against_workaround', 173)": {"mod": [195]}}}, {"path": "pandas/tests/groupby/test_grouping.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1]}, "('TestGrouping', 'test_grouper_multilevel_freq', 169)": {"mod": [173, 174, 175, 176]}}}, {"path": "pandas/tests/groupby/test_timegrouper.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1, 3]}, "('TestGroupBy', 'test_first_last_max_min_on_time_data', 762)": {"mod": [766, 777]}}}, {"path": "pandas/tests/indexes/test_common.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [5]}, "('TestCommon', 'test_copy_and_deepcopy', 134)": {"mod": [135, 136, 137, 138]}}}, {"path": "pandas/tests/indexing/multiindex/test_slice.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0]}, "('TestMultiIndexSlicers', 'test_multiindex_slicers_datetimelike', 247)": {"mod": [251, 253, 254, 255, 256]}}}, {"path": "pandas/tests/io/excel/test_readers.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [7]}, "('TestReaders', 'test_read_from_file_url', 890)": {"mod": [900]}}}, {"path": "pandas/tests/io/formats/test_printing.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0]}, "(None, 'test_repr_binary_type', 21)": {"mod": [22]}}}, {"path": 
"pandas/tests/io/formats/test_to_csv.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [5]}, "('TestToCSV', 'test_to_csv_doublequote', 84)": {"mod": [97]}}}, {"path": "pandas/tests/io/json/test_pandas.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [7]}, "('TestPandasContainer', 'test_to_s3', 1732)": {"mod": [1733]}}}, {"path": "pandas/tests/io/parser/test_c_parser_only.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [7]}, "(None, 'test_precise_conversion', 171)": {"mod": [172]}}}, {"path": "pandas/tests/io/parser/test_encoding.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [5]}, "(None, 'test_utf16_bom_skiprows', 47)": {"mod": [62]}}}, {"path": "pandas/tests/io/parser/test_python_parser_only.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [12]}, "(None, 'test_sniff_delimiter_encoding', 100)": {"mod": [111]}}}, {"path": "pandas/tests/io/pytables/test_store.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [3], "mod": [1]}, "(None, 'test_repr', 110)": {"mod": [129, 130]}, "(None, 'test_table_mixed_dtypes', 431)": {"mod": [444, 445]}, "(None, 'test_calendar_roundtrip_issue', 454)": {"mod": [461, 467, 468]}, "(None, 'test_same_name_scoping', 524)": {"mod": [537]}, "(None, 'test_store_index_name_numpy_str', 558)": {"mod": [561, 565]}, "(None, 'do_copy', 878)": {"mod": [880]}}}, {"path": "pandas/tests/io/test_orc.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [2]}, "(None, 'test_orc_reader_decimal', 100)": {"mod": [101]}}}, {"path": "pandas/tests/io/xml/test_xml.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [10]}, "(None, 'test_empty_string_etree', 493)": {"mod": [494]}, "(None, 'test_wrong_file_path_etree', 513)": {"mod": [514]}}}, {"path": "pandas/tests/plotting/frame/test_frame.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [5, 9]}, "('TestDataFramePlots', 'test_memory_leak', 1783)": {"mod": [1785, 1786]}}}, {"path": "pandas/tests/reshape/concat/test_concat.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [4]}, "('TestConcatenate', 'test_dtype_coerceion', 337)": {"mod": [346, 348, 349, 350]}}}, {"path": "pandas/tests/reshape/concat/test_index.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0]}, "('TestMultiIndexConcat', 'test_concat_multiindex_dfs_with_deepcopy', 241)": {"mod": [243]}}}, {"path": "pandas/tests/reshape/test_get_dummies.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [1]}, "('TestGetDummies', 'test_get_dummies_unicode', 165)": {"mod": [167]}}}, {"path": "pandas/tests/series/test_arithmetic.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [1, 4]}, "('TestSeriesArithmetic', 'test_add_na_handling', 224)": {"mod": [225, 226]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": [], "doc": [], "test": ["pandas/tests/arrays/test_datetimelike.py", "pandas/tests/frame/methods/test_to_records.py", "pandas/tests/indexes/test_common.py", "pandas/tests/groupby/test_timegrouper.py", "pandas/tests/reshape/test_get_dummies.py", "pandas/tests/groupby/test_grouping.py", "pandas/tests/reshape/concat/test_concat.py", "pandas/tests/frame/test_constructors.py", "pandas/tests/indexing/multiindex/test_slice.py", "pandas/tests/io/test_orc.py", "pandas/tests/io/parser/test_encoding.py", 
"pandas/tests/plotting/frame/test_frame.py", "pandas/tests/io/formats/test_printing.py", "pandas/tests/io/formats/test_to_csv.py", "pandas/tests/io/json/test_pandas.py", "pandas/tests/reshape/concat/test_index.py", "pandas/tests/io/excel/test_readers.py", "pandas/tests/series/test_arithmetic.py", "pandas/tests/io/xml/test_xml.py", "pandas/tests/io/pytables/test_store.py", "pandas/tests/io/parser/test_python_parser_only.py", "pandas/tests/groupby/test_filters.py", "pandas/tests/frame/indexing/test_indexing.py", "pandas/tests/io/parser/test_c_parser_only.py", "pandas/tests/apply/test_series_apply.py"], "config": [], "asset": []}}, {"organization": "pallets", "repo_name": "flask", "base_commit": "024f0d384cf5bb65c76ac59f8ddce464b2dc2ca1", "iss_has_pr": 1, "iss_html_url": "https://github.com/pallets/flask/issues/3555", "iss_label": "json", "title": "Remove simplejson", "body": "In modern Python it's unlikely to be significantly better than the built-in `json`. The module used by `JSONMixin` is overridable, so users can plug it in again if they want.\r\n\r\nSee pallets/itsdangerous#146 and pallets/werkzeug#1766.", "pr_html_url": "https://github.com/pallets/flask/pull/3562", "file_loc": {"base_commit": "024f0d384cf5bb65c76ac59f8ddce464b2dc2ca1", "files": [{"path": "CHANGES.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [8]}}}, {"path": "docs/api.rst", "status": "modified", "Loc": {"(None, None, None)": {"mod": [287, 288, 289, 290, 291, 293, 295, 296, 297, 298, 300, 302, 304, 305, 306, 308, 309, 310, 311, 313, 314, 315, 316, 317, 322, 325, 327, 328, 329, 331, 332]}}}, {"path": "docs/installation.rst", "status": "modified", "Loc": {"(None, None, None)": {"mod": [42, 43, 44, 51]}}}, {"path": "src/flask/json/__init__.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [2, 3], "mod": [1, 7, 8, 20, 21, 22, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 38, 39, 40, 41, 44, 45, 46, 47, 48, 49]}, "(None, 'dumps', 179)": {"add": [196], "mod": [180, 181, 182, 183, 185, 186, 187, 190, 191, 192, 193, 195, 203, 204]}, "(None, 'loads', 217)": {"add": [234], "mod": [218, 219, 220, 221, 223, 224, 225, 228, 229, 230, 231, 233, 239, 240, 241, 242, 243]}, "(None, 'jsonify', 296)": {"add": [331], "mod": [297, 298, 299, 300, 301, 302, 304, 305, 307, 308, 309, 310, 311, 312, 314, 318, 320, 321, 322, 324, 335, 336, 338, 339, 340, 341]}, "('JSONEncoder', None, 52)": {"mod": [53, 54, 55, 57, 58, 60, 61]}, "('JSONEncoder', 'default', 64)": {"mod": [65, 66, 67, 69, 70, 72, 73, 74, 75, 76, 77, 78, 79, 91]}, "('JSONDecoder', None, 94)": {"mod": [95, 96, 97, 98]}, "(None, '_dump_arg_defaults', 102)": {"mod": [109, 110, 111, 113, 114]}, "(None, '_load_arg_defaults', 122)": {"mod": [129, 130, 131]}, "(None, 'detect_encoding', 136)": {"mod": [136, 137, 139, 140, 141, 143, 144, 145, 146, 148, 149, 151, 152, 154, 155, 157, 158, 160, 161, 162, 164, 165, 167, 168, 170, 171, 173, 174, 176]}, "(None, 'dump', 208)": {"mod": [209, 212, 213]}, "(None, 'load', 247)": {"mod": [248, 250]}, "(None, 'htmlsafe_dumps', 254)": {"mod": [254, 255, 256, 257, 258, 259, 261, 263, 264, 265, 266, 268, 269, 270, 273, 274, 275, 276, 277, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288]}, "(None, 'htmlsafe_dump', 291)": {"mod": [292, 293]}}}, {"path": "src/flask/json/tag.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [48]}, "('TagMarkup', None, 169)": {"mod": [170, 172]}, "('TaggedJSONSerializer', None, 215)": {"mod": [225]}}}, {"path": "tests/test_helpers.py", "status": "modified", "Loc": {"(None, None, 
None)": {"mod": [16]}, "('TestJSON', None, 66)": {"mod": [67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85]}, "('TestJSON', 'test_template_escaping', 252)": {"mod": [256]}}}, {"path": "tox.ini", "status": "modified", "Loc": {"(None, None, None)": {"mod": [4, 27]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["src/flask/json/__init__.py", "src/flask/json/tag.py"], "doc": ["docs/api.rst", "docs/installation.rst", "CHANGES.rst"], "test": ["tests/test_helpers.py"], "config": ["tox.ini"], "asset": []}}, {"organization": "3b1b", "repo_name": "manim", "base_commit": "384895b9a8da0fcdb3b92868fb5965c5e6de1ed5", "iss_has_pr": 1, "iss_html_url": "https://github.com/3b1b/manim/issues/293", "iss_label": "", "title": "Outdated DockerFile dependencies", "body": "The DockerFile inside the manim-master still contains the python version 2.7.12. Considering that manim had no longer support the python 2. This could lead to a syntax error. Please fix this issue ASAP.", "pr_html_url": "https://github.com/3b1b/manim/pull/301", "file_loc": {"base_commit": "384895b9a8da0fcdb3b92868fb5965c5e6de1ed5", "files": [{"path": "Dockerfile", "status": "modified", "Loc": {"(None, None, None)": {"add": [7, 13], "mod": [1, 2, 3, 4, 6, 9, 10, 11, 12, 15, 16, 18, 19, 20, 22]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": [], "doc": [], "test": [], "config": ["Dockerfile"], "asset": []}}, {"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "3699eeb67cad333272b14a42dd3843d93fda1a2e", "iss_has_pr": 1, "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/9567", "iss_label": "site-bug", "title": "[TikTok] New API fix adds non-playable video codec in available formats", "body": "### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE\n\n- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\\* field\n\n### Checklist\n\n- [X] I'm reporting that yt-dlp is broken on a **supported** site\n- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))\n- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details\n- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)\n- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required\n\n### Region\n\nGlobal\n\n### Provide a description that is worded well enough to be understood\n\nHi! 
New API fix adds for some videos a new codec bytevc2 which cannot be played by multimedia players (I used VLC). By default yt_dlp chooses normal codec, but I use `-S res:1080,vcodec:avc1,ext:mp4:m4a` format selection, so yt_dlp downloads video with this new codec.\r\n\r\n\n\n### Provide verbose output that clearly demonstrates the problem\n\n- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU `)\n- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead\n- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below\n\n### Complete Verbose Output\n\n```shell\nuser@host:~$ python3 -m yt_dlp -F https://vm.tiktok.com/ZMMPDNEJL/\r\n[vm.tiktok] Extracting URL: https://vm.tiktok.com/ZMMPDNEJL/\r\n[vm.tiktok] ZMMPDNEJL: Downloading webpage\r\n[TikTok] Extracting URL: https://www.tiktok.com/@soyko_max/video/7351538939712359713?_t=8l5sekUiLWo&_r=1\r\n[TikTok] 7351538939712359713: Downloading video feed\r\n[info] Available formats for 7351538939712359713:\r\nID EXT RESOLUTION \u2502 FILESIZE TBR PROTO \u2502 VCODEC ACODEC MORE INFO\r\n\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\r\ndownload_addr-0 mp4 720x1280 \u2502 1.31MiB https \u2502 h264 aac Download video, watermarked (API)\r\ndownload_addr-1 mp4 720x1280 \u2502 1.31MiB https \u2502 h264 aac Download video, watermarked\r\ndownload_addr-2 mp4 720x1280 \u2502 1.31MiB https \u2502 h264 aac Download video, watermarked\r\nh264_540p_986746-0 mp4 1048x576 \u2502 1.24MiB 986k https \u2502 h264 aac Direct video (API)\r\nh264_540p_986746-1 mp4 1048x576 \u2502 1.24MiB 986k https \u2502 h264 aac Direct video\r\nh264_540p_986746-2 mp4 1048x576 \u2502 1.24MiB 986k https \u2502 h264 aac Direct video\r\nbytevc1_540p_263555-0 mp4 1048x576 \u2502 339.96KiB 263k https \u2502 h265 aac Playback video (API)\r\nbytevc1_540p_263555-1 mp4 1048x576 \u2502 339.96KiB 263k https \u2502 h265 aac Playback video\r\nbytevc1_540p_263555-2 mp4 1048x576 \u2502 339.96KiB 263k https \u2502 h265 aac Playback video\r\nbytevc1_540p_344910-0 mp4 1048x576 \u2502 444.91KiB 344k https \u2502 h265 aac Playback video (API)\r\nbytevc1_540p_344910-1 mp4 1048x576 \u2502 444.91KiB 344k https \u2502 h265 aac Playback video\r\nbytevc1_540p_344910-2 mp4 1048x576 \u2502 444.91KiB 344k https \u2502 h265 aac Playback video\r\nbytevc1_540p_507345-0 mp4 1048x576 \u2502 654.43KiB 507k https \u2502 h265 aac Direct video (API)\r\nbytevc1_540p_507345-1 mp4 1048x576 \u2502 654.43KiB 507k https \u2502 h265 aac Direct video\r\nbytevc1_540p_507345-2 mp4 1048x576 \u2502 654.43KiB 507k https \u2502 h265 aac Direct video\r\nbytevc2_720p_616180-0 mp4 1280x704 \u2502 794.82KiB 616k https \u2502 bytevc2 aac Playback video (API)\r\nbytevc2_720p_616180-1 mp4 1280x704 \u2502 794.82KiB 616k https \u2502 bytevc2 aac Playback video\r\nbytevc2_720p_616180-2 mp4 1280x704 \u2502 794.82KiB 616k https \u2502 bytevc2 aac Playback 
video\r\nbytevc1_720p_595186-0 mp4 1280x704 \u2502 767.74KiB 595k https \u2502 h265 aac Playback video (API)\r\nbytevc1_720p_595186-1 mp4 1280x704 \u2502 767.74KiB 595k https \u2502 h265 aac Playback video\r\nbytevc1_720p_595186-2 mp4 1280x704 \u2502 767.74KiB 595k https \u2502 h265 aac Playback video\r\n\r\nuser@host:~$ python3 -m yt_dlp -vU -S res:1080,vcodec:avc1,ext:mp4:m4a https://vm.tiktok.com/ZMMPDNEJL/\r\n[debug] Command-line config: ['-vU', '-S', 'res:1080,vcodec:avc1,ext:mp4:m4a', 'https://vm.tiktok.com/ZMMPDNEJL/']\r\n[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8\r\n[debug] yt-dlp version stable@2024.03.10 from yt-dlp/yt-dlp [615a84447]\r\n[debug] Lazy loading extractors is disabled\r\n[debug] Python 3.10.12 (CPython x86_64 64bit) - Linux-5.15.0-101-generic-x86_64-with-glibc2.35 (OpenSSL 3.0.2 15 Mar 2022, glibc 2.35)\r\n[debug] exe versions: ffmpeg 4.4.2 (setts), ffprobe 4.4.2\r\n[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.02.02, mutagen-1.47.0, requests-2.31.0, secretstorage-3.3.1, sqlite3-3.37.2, urllib3-2.2.1, websockets-12.0\r\n[debug] Proxy map: {}\r\n[debug] Request Handlers: urllib, requests, websockets\r\n[debug] Loaded 1807 extractors\r\n[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest\r\nLatest version: stable@2024.03.10 from yt-dlp/yt-dlp\r\nyt-dlp is up to date (stable@2024.03.10 from yt-dlp/yt-dlp)\r\n[vm.tiktok] Extracting URL: https://vm.tiktok.com/ZMMPDNEJL/\r\n[vm.tiktok] ZMMPDNEJL: Downloading webpage\r\n[TikTok] Extracting URL: https://www.tiktok.com/@soyko_max/video/7351538939712359713?_t=8l5sekUiLWo&_r=1\r\n[debug] [TikTok] iid=7351149742343391009\r\n[TikTok] 7351538939712359713: Downloading video feed\r\n[debug] Sort order given by user: res:1080, vcodec:avc1, ext:mp4:m4a\r\n[debug] Sort order given by extractor: quality, codec, size, br\r\n[debug] Formats sorted by: hasvid, ie_pref, res:1080(1080.0), vcodec:avc1(7), vext:mp4(6), aext:m4a(8), quality, acodec, size, br, lang, fps, hdr:12(7), channels, asr, proto, hasaud, source, id\r\n[debug] Default format spec: bestvideo*+bestaudio/best\r\n[info] 7351538939712359713: Downloading 1 format(s): bytevc2_720p_616180-2\r\n[debug] Invoking http downloader on \"https://v16m.byteicdn.com/366e989edb75b43b4e545b0f6f94180e/6607a32c/video/tos/useast2a/tos-useast2a-ve-0068-euttp/oIEDMCMe2rmeIExnh6mGk70AEkerG1aIRgfFj6/?a=0&bti=OHYpOTY0Zik3OjlmOm01MzE6ZDQ0MDo%3D&ch=0&cr=13&dr=0&lr=all&cd=0%7C0%7C0%7C&cv=1&br=1202&bt=601&cs=5&ds=3&ft=teSL~8QLodzR12NvvEh3hIxR34DaRq_45SY&mime_type=video_mp4&qs=14&rc=aTwzOGdkOGRnMzw1NDQ1OUBpam9pbng5cjx4cjMzZjczM0AvL18yYmBgXjMxXy4tLy5eYSNsa2FnMmRrZy9gLS1kMWNzcw%3D%3D&vvpl=1&l=20240329232904ECF8D33243F1CE142F0B&btag=e00088000&cc=10\"\r\n[download] Destination: \u041e\u0440\u0438\u0433\u0438\u043d\u0430\u043b) [7351538939712359713].mp4\r\n[download] 100% of 794.82KiB in 00:00:00 at 7.49MiB/s\n```\n", "pr_html_url": "https://github.com/yt-dlp/yt-dlp/pull/9575", "file_loc": {"base_commit": "3699eeb67cad333272b14a42dd3843d93fda1a2e", "files": [{"path": "yt_dlp/extractor/tiktok.py", "status": "modified", "Loc": {"('TikTokBaseIE', 'extract_addr', 275)": {"add": [276, 288], "mod": [290]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["yt_dlp/extractor/tiktok.py"], "doc": [], "test": [], "config": [], "asset": []}}, 
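The `-S res:1080,vcodec:avc1,ext:mp4:m4a` selection quoted in that report maps onto yt-dlp's Python API via the `format_sort` parameter. Below is a minimal sketch, not the fix from the linked PR, assuming a current yt-dlp install and reusing the URL from the report; the printed `format_id` depends on what the sort actually selects:

```python
# Sketch: reproduce the reporter's CLI sort order (-S res:1080,vcodec:avc1,ext:mp4:m4a)
# through the Python API, so avc1/h264 formats outrank the unplayable bytevc2 ones.
from yt_dlp import YoutubeDL

opts = {"format_sort": ["res:1080", "vcodec:avc1", "ext:mp4:m4a"]}
with YoutubeDL(opts) as ydl:
    # download=False runs extraction and format selection without downloading
    info = ydl.extract_info("https://vm.tiktok.com/ZMMPDNEJL/", download=False)
    print(info.get("format_id"))
```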
{"organization": "xtekky", "repo_name": "gpt4free", "base_commit": "0d8e4ffa2c0706b0381f53c3985d04255b7170f5", "iss_has_pr": 1, "iss_html_url": "https://github.com/xtekky/gpt4free/issues/2173", "iss_label": "bug\nstale", "title": "Disable g4f logging completely", "body": "**Bug description**\r\nIn my script I have my customized logging, but whenever I use it it prints 2 times (one from my logger, one from g4f logger).\r\nHow can I turn off the logger inside the library? Already tried a bunch of stuff with no results.\r\n\r\nP.S. Are you using the root logger maybe? If that is the case, please use it with the module name\r\n\r\nex.\r\n1. Create a new logger in new class\r\n2. Set logging level to DEBUG\r\n3. Log something\r\n4. Enjoy duplicated output\r\n\r\n**Screenshots**\r\n![image](https://github.com/user-attachments/assets/c24a1b65-b513-4449-8842-870552289de0)\r\n\r\n**Environment**\r\n- python version: 3.11\r\n- location ( are you in a cloudfare flagged country ) ? nope\r\n", "pr_html_url": "https://github.com/xtekky/gpt4free/pull/2347", "file_loc": {"base_commit": "0d8e4ffa2c0706b0381f53c3985d04255b7170f5", "files": [{"path": "g4f/Provider/Ai4Chat.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [11]}, "('Ai4Chat', 'create_async_generator', 37)": {"mod": [87]}}}, {"path": "g4f/Provider/Mhystical.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [20]}, "('Mhystical', 'filter_response', 81)": {"mod": [87, 88]}}}, {"path": "g4f/Provider/you/har_file.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [14]}, "(None, 'get_telemetry_ids', 79)": {"mod": [84, 91, 115]}}}, {"path": "g4f/__init__.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [3, 14]}}}, {"path": "g4f/api/__init__.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [24]}, "('Api', 'streaming', 196)": {"mod": [203]}, "('Api', 'chat_completions', 166)": {"mod": [210]}, "('Api', 'generate_image', 214)": {"mod": [225]}}}, {"path": "g4f/api/_logging.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [3]}, "('__InterceptHandler', None, 12)": {"mod": [12, 13, 14, 15, 16, 17, 19, 20, 21, 22, 24, 25, 26]}, "(None, 'hook_logging', 31)": {"mod": [31, 32]}}}, {"path": "g4f/gui/server/api.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [21]}, "('Api', '_create_response_stream', 138)": {"mod": [158, 168]}}}, {"path": "requirements.txt", "status": "modified", "Loc": {"(None, None, None)": {"mod": [11]}}}, {"path": "setup.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [42]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["g4f/Provider/Ai4Chat.py", "g4f/Provider/Mhystical.py", "g4f/Provider/you/har_file.py", "setup.py", "g4f/api/__init__.py", "g4f/__init__.py", "g4f/gui/server/api.py", "g4f/api/_logging.py"], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "324208eaa66a528f1e88f938c71c2d8efb8304f3", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/5420", "iss_label": "Bug\nDocs\nIndexing", "title": "BUG: loc should not fallback for integer indexing for multi-index", "body": "https://groups.google.com/forum/m/#!topic/pydata/W0e3l0UvNwI\n", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/7497", "file_loc": {"base_commit": 
"324208eaa66a528f1e88f938c71c2d8efb8304f3", "files": [{"path": "doc/source/v0.14.1.txt", "status": "modified", "Loc": {"(None, None, None)": {"add": [64]}}}, {"path": "pandas/core/index.py", "status": "modified", "Loc": {"('Index', '_convert_list_indexer_for_mixed', 607)": {"mod": [612]}}}, {"path": "pandas/tests/test_indexing.py", "status": "modified", "Loc": {"('TestIndexing', None, 86)": {"add": [808]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["pandas/core/index.py"], "doc": ["doc/source/v0.14.1.txt"], "test": ["pandas/tests/test_indexing.py"], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "6d2c57fa010c12f21f700034b5651519670b9b9d", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/3561", "iss_label": "Bug\nIndexing", "title": "DataFrame.ix losing row ordering when index has duplicates", "body": "``` python\nimport pandas as pd\n\nind = ['A', 'A', 'B', 'C']i\ndf = pd.DataFrame({'test':range(len(ind))}, index=ind)\n\nrows = ['C', 'B']\nres = df.ix[rows]\nassert rows == list(res.index) # fails\n```\n\nThe problem is that the resulting DataFrame keeps the ordering of the `df.index` and not the `rows` key. You'll notice that the `rows` key doesn't reference a duplicate value. \n", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/3563", "file_loc": {"base_commit": "6d2c57fa010c12f21f700034b5651519670b9b9d", "files": [{"path": "RELEASE.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [93, 150]}}}, {"path": "doc/source/indexing.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [1370]}}}, {"path": "pandas/core/index.py", "status": "modified", "Loc": {"('Index', None, 50)": {"add": [861]}}}, {"path": "pandas/core/indexing.py", "status": "modified", "Loc": {"('_NDFrameIndexer', '_getitem_iterable', 412)": {"mod": [461, 462]}, "('_NDFrameIndexer', '_convert_to_indexer', 464)": {"mod": [572, 573, 574, 575, 576, 577, 578, 579, 581, 582, 584]}}}, {"path": "pandas/index.pyx", "status": "modified", "Loc": {"(None, None, None)": {"add": [269, 270, 271]}}}, {"path": "pandas/lib.pyx", "status": "modified", "Loc": {"(None, None, None)": {"add": [418]}}}, {"path": "pandas/tests/test_frame.py", "status": "modified", "Loc": {"('TestDataFrame', '_check_df', 4667)": {"mod": [4671, 4672]}}}, {"path": "pandas/tests/test_indexing.py", "status": "modified", "Loc": {"('TestIndexing', None, 85)": {"add": [786]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["pandas/index.pyx", "pandas/core/index.py", "pandas/core/indexing.py", "pandas/lib.pyx"], "doc": ["doc/source/indexing.rst", "RELEASE.rst"], "test": ["pandas/tests/test_indexing.py", "pandas/tests/test_frame.py"], "config": [], "asset": []}}, {"organization": "All-Hands-AI", "repo_name": "OpenHands", "base_commit": "ce8a11a62f8a126ed54dd0ede51cf2c196ed310d", "iss_has_pr": 1, "iss_html_url": "https://github.com/All-Hands-AI/OpenHands/issues/2977", "iss_label": "good first issue\nfrontend\nseverity:low\nsmall effort", "title": "Rename and/or properly document the two different `changeAgentState` functions", "body": "There are two `changeAgentState` functions that should probably be renamed and properly documented to 
avoid confusion for the future.\r\n\r\nhttps://github.com/OpenDevin/OpenDevin/blob/01ce1e35b5b40e57d96b15a7fc9bee4eb8f6966d/frontend/src/state/agentSlice.tsx#L10-L12\r\n\r\nhttps://github.com/OpenDevin/OpenDevin/blob/01ce1e35b5b40e57d96b15a7fc9bee4eb8f6966d/frontend/src/services/agentStateService.ts#L7-L18", "pr_html_url": "https://github.com/All-Hands-AI/OpenHands/pull/3050", "file_loc": {"base_commit": "ce8a11a62f8a126ed54dd0ede51cf2c196ed310d", "files": [{"path": "frontend/src/services/observations.ts", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1]}, "(None, 'handleObservationMessage', 10)": {"mod": [28]}}}, {"path": "frontend/src/state/agentSlice.tsx", "status": "modified", "Loc": {"(None, None, None)": {"mod": [10, 16]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["frontend/src/state/agentSlice.tsx", "frontend/src/services/observations.ts"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "9438672b1cf80602fc93536670d9601d655377f5", "iss_has_pr": 1, "iss_html_url": "https://github.com/deepfakes/faceswap/issues/224", "iss_label": "feature", "title": "Align rotation of input faces for GAN conversions", "body": "Currently, the extractor finds a rotation matrix for each face using umeyama so it can generate a faceset with all the faces mostly upright. Unfortunately this rotation matrix isn't stored in the alignments file, only the bbox (of the un-rotated face) and facial alignments. For the GAN model, when it comes time to convert, the faces aren't rotated upright before being fed through the model so I doubt anyone has been able to get good results for faces that aren't completely upright.\r\n\r\nI propose we store the rotation matrix in the alignments file during extract, then at conversion, re-apply it to the cropped face to make it upright before feeding through the model. The swapped output face then needs to be rotated in the inverse direction to match it with the frame again. 
Hopefully this is possible.", "pr_html_url": "https://github.com/deepfakes/faceswap/pull/217", "file_loc": {"base_commit": "9438672b1cf80602fc93536670d9601d655377f5", "files": [{"path": "faceswap.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [8]}}}, {"path": "lib/ModelAE.py", "status": "removed", "Loc": {}}, {"path": "lib/cli.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [2, 17]}}}, {"path": "lib/training_data.py", "status": "modified", "Loc": {"('TrainingDataGenerator', '__init__', 9)": {"add": [11]}, "('TrainingDataGenerator', None, 8)": {"mod": [9, 64]}, "('TrainingDataGenerator', 'read_image', 37)": {"mod": [45]}, "('TrainingDataGenerator', 'random_warp', 64)": {"mod": [70, 71, 73, 74, 78, 79, 82]}}}, {"path": "lib/utils.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1]}, "('FullHelpArgumentParser', None, 18)": {"mod": [18, 19, 20, 21, 22, 23, 24, 25, 26]}}}, {"path": "plugins/Convert_Adjust.py", "status": "modified", "Loc": {"('Convert', None, 8)": {"mod": [15]}, "('Convert', 'patch_image', 15)": {"mod": [22]}}}, {"path": "plugins/Convert_GAN.py", "status": "removed", "Loc": {}}, {"path": "plugins/Convert_Masked.py", "status": "modified", "Loc": {"('Convert', '__init__', 9)": {"add": [10, 19]}, "('Convert', None, 8)": {"add": [62], "mod": [9, 22, 23]}, "('Convert', 'get_new_face', 63)": {"mod": [66, 68]}}}, {"path": "plugins/Extract_Align.py", "status": "renamed", "Loc": {"('Extract', None, 7)": {"mod": [7]}}}, {"path": "plugins/Model_GAN/Model.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [11, 17]}, "('GANModel', None, 18)": {"add": [23]}, "('GANModel', 'Decoder_ps', 112)": {"add": [121], "mod": [124]}, "('GANModel', '__init__', 24)": {"mod": [32, 33, 34, 36, 37, 41, 42, 44, 45, 46, 48, 49, 50, 51, 52, 53, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64]}, "('GANModel', 'conv_block', 71)": {"mod": [73, 74, 75]}, "('GANModel', 'res_block', 78)": {"mod": [80, 81, 83, 84]}, "('GANModel', 'build_generator', 70)": {"mod": [89, 98, 99, 100, 101, 112, 113, 114, 115, 116, 126, 127, 128, 139]}, "('GANModel', 'block', 90)": {"mod": [91, 92]}, "('GANModel', 'conv_block_d', 142)": {"mod": [144, 145, 147, 148, 149]}, "('GANModel', 'Discriminator', 148)": {"mod": [153, 154, 155]}, "('GANModel', 'build_discriminator', 141)": {"mod": [157, 158]}, "('GANModel', 'save_weights', 174)": {"mod": [176, 177, 178, 179]}}}, {"path": "plugins/Model_GAN/Trainer.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [4]}, "('Trainer', '__init__', 22)": {"add": [26, 28], "mod": [30]}, "('Trainer', None, 14)": {"add": [33, 95], "mod": [17, 22]}, "('Trainer', 'showG', 101)": {"add": [115, 141], "mod": [118, 119, 123, 124, 127, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 140, 144, 145, 149, 150, 153]}, "('GANTrainingDataGenerator', None, 7)": {"mod": [8, 9]}, "('Trainer', 'train_one_step', 34)": {"mod": [40, 41, 43, 44, 45, 46, 47, 48, 49, 50, 51, 53, 54, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 80, 81, 82, 84, 85, 86, 87, 90]}, "('Trainer', 'show_sample', 96)": {"mod": [99, 101, 102, 103, 104, 105, 106, 107, 108, 109, 111, 112, 113, 114]}}}, {"path": "plugins/Model_LowMem.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [9]}, "('Model', None, 15)": {"mod": [15]}, "('Trainer', None, 67)": {"mod": [67, 68]}}}, {"path": "plugins/Model_Original.py", "status": "renamed", "Loc": {"(None, None, None)": {"mod": [9]}, "('Model', None, 15)": {"mod": [15]}, "('Model', 'Decoder', 
58)": {"mod": [65, 67, 68]}}}, {"path": "scripts/convert.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1, 2, 3, 5, 6, 8, 9, 11, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 24, 25, 26, 27, 28, 29, 30, 32, 33, 34, 35, 36, 38, 39, 40, 41, 42, 44, 45, 46, 47, 48, 50, 51, 52, 53, 54, 56, 57, 58, 59, 60, 61, 63, 64, 65, 66, 67, 68, 70, 71, 72, 73, 74, 75, 77, 78, 79, 80, 83, 84, 85, 86, 87, 89, 90, 91, 92, 93, 94, 96, 97, 98, 99, 100, 101, 103, 104, 105, 106, 107, 109, 110, 111, 112, 113, 114, 116, 117, 118, 119, 120, 121, 123, 124, 125, 126, 128, 129, 130, 131, 133, 134, 135, 136, 137, 138, 139, 140, 142, 144, 145, 147, 148, 149, 150, 151, 153, 154, 156, 157, 159, 160, 162, 163, 164, 165, 166, 167, 169, 170, 171, 173, 174, 175, 177, 178, 179, 181, 182, 183, 184, 186, 187, 188, 189, 190, 192, 193, 194, 195, 196, 197, 198, 199, 200]}}}, {"path": "scripts/extract.py", "status": "modified", "Loc": {"('ExtractTrainingData', 'handleImage', 75)": {"mod": [90]}}}, {"path": "scripts/train.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1, 2, 3, 5, 6, 7, 8, 10, 11, 13, 14, 15, 17, 18, 19, 20, 21, 23, 25, 26, 27, 28, 29, 30, 31, 32, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 94, 95, 96, 98, 99, 100, 101, 103, 104, 106, 107, 108, 109, 110, 111, 112, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 125, 126, 127, 129, 130, 131, 133, 134, 135, 136, 137, 138, 140, 141, 142, 143, 145, 146, 148, 150, 152, 154, 155, 157, 158, 159, 161, 162, 163, 165, 166, 167, 168, 169, 170, 171, 172, 173, 175, 176, 177, 178, 179, 180, 181, 183, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["lib/training_data.py", "plugins/Convert_Adjust.py", "plugins/Convert_GAN.py", "plugins/Extract_Align.py", "plugins/Model_GAN/Model.py", "plugins/Model_LowMem.py", "scripts/train.py", "faceswap.py", "plugins/Model_Original.py", "plugins/Convert_Masked.py", "plugins/Model_GAN/Trainer.py", "lib/utils.py", "lib/ModelAE.py", "lib/cli.py", "scripts/convert.py", "scripts/extract.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "2bf09b8a2026b79b11d178d391327035dde9f948", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/710", "iss_label": "", "title": "item_dropped signal should pass response arg as item_scraped does", "body": "I highly use request and response.meta in item_scraped signal handler. 
\nWhy doesn't item_dropped pass the response argument as item_scraped does?\n", "pr_html_url": "https://github.com/scrapy/scrapy/pull/724", "file_loc": {"base_commit": "2bf09b8a2026b79b11d178d391327035dde9f948", "files": [{"path": "docs/topics/signals.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [98], "mod": [86]}}}, {"path": "scrapy/core/scraper.py", "status": "modified", "Loc": {"('Scraper', '_itemproc_finished', 198)": {"mod": [208]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["scrapy/core/scraper.py"], "doc": ["docs/topics/signals.rst"], "test": [], "config": [], "asset": []}}, {"organization": "comfyanonymous", "repo_name": "ComfyUI", "base_commit": "f81dbe26e2e363c28ad043db67b59c11bb33f446", "iss_has_pr": 1, "iss_html_url": "https://github.com/comfyanonymous/ComfyUI/issues/2851", "iss_label": "", "title": "Differential Diffusion: Giving Each Pixel Its Strength", "body": "Hello,\r\nI would like to suggest implementing my paper: Differential Diffusion: Giving Each Pixel Its Strength.\r\nThe paper allows a user to edit a picture by a change map that describes how much each region should change.\r\nThe editing process is typically guided by textual instructions, although it can also be applied without guidance.\r\nWe support both continuous and discrete editing.\r\nOur framework is training and fine-tuning free! And it has a negligible inference-time penalty.\r\nOur implementation is diffusers-based.\r\nWe already tested it on 4 different diffusion models (Kandinsky, DeepFloyd IF, SD, SD XL).\r\nWe are confident that the framework can also be ported to other diffusion models, such as SD Turbo, Stable Cascade, and amused.\r\nI notice that you usually stick to the white==change convention, which is opposite to the convention we used in the paper.\r\nThe paper can be thought of as a generalization of some of the existing techniques.\r\nA black map is just regular txt2img (\"0\"),\r\nA map of one color (which isn't black) can be thought of as img2img,\r\nA map of two colors, one of which is white, can be thought of as inpaint.\r\nAnd the rest? 
It's completely new!\r\nIn the paper, we suggest some further applications such as soft inpainting and strength visualization.\r\n\r\nSite:\r\nhttps://differential-diffusion.github.io/\r\nPaper:\r\nhttps://differential-diffusion.github.io/paper.pdf\r\nRepo:\r\nhttps://github.com/exx8/differential-diffusion", "pr_html_url": "https://github.com/comfyanonymous/ComfyUI/pull/2876", "file_loc": {"base_commit": "f81dbe26e2e363c28ad043db67b59c11bb33f446", "files": [{"path": "comfy/samplers.py", "status": "modified", "Loc": {"('KSamplerX0Inpaint', 'forward', 277)": {"add": [278]}}}, {"path": "nodes.py", "status": "modified", "Loc": {"(None, 'init_custom_nodes', 1936)": {"add": [1963]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["nodes.py", "comfy/samplers.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "bcc5160b3a5b0fc9c531da194c6bb83619045434", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/18734", "iss_label": "good first issue\nNeeds Tests", "title": "ddof for np.std in df.agg changes depending on how given & lambda expression does not work correctly in a list of functions ", "body": "#### Code Sample, a copy-pastable example if possible\r\n\r\n```python\r\nIn [31]: import numpy as np\r\n\r\nIn [32]: import pandas as pd\r\n\r\nIn [33]: df = pd.DataFrame(np.arange(6).reshape(3, 2), columns=['A', 'B'])\r\n\r\nIn [34]: df\r\nOut[34]:\r\n A B\r\n0 0 1\r\n1 2 3\r\n2 4 5\r\n\r\nIn [35]: df.agg(np.std) # Behavior of ddof=0\r\nOut[35]:\r\nA 1.632993\r\nB 1.632993\r\ndtype: float64\r\n\r\nIn [36]: df.agg([np.std]) # Behavior of ddof=1\r\nOut[36]:\r\n A B\r\nstd 2.0 2.0\r\n\r\nIn [37]: # So how to get the ddof=0 behavior when giving a list of functions?\r\n\r\nIn [39]: df.agg([lambda x: np.std(x)]) # This gives a numerically unexpected result.\r\nOut[39]:\r\n A B\r\n \r\n0 0.0 0.0\r\n1 0.0 0.0\r\n2 0.0 0.0\r\n\r\nIn [40]: df.agg([np.mean, lambda x: np.std(x)]) # This gives an error.\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n in ()\r\n----> 1 df.agg([np.mean, lambda x: np.std(x)])\r\n\r\n/Users/ikeda/.pyenv/versions/anaconda3-4.4.0/lib/python3.6/site-packages/pandas/core/frame.py in aggregate(self, func, axis, *args, **kwargs)\r\n 4740 if axis == 0:\r\n 4741 try:\r\n-> 4742 result, how = self._aggregate(func, axis=0, *args, **kwargs)\r\n 4743 except TypeError:\r\n 4744 pass\r\n\r\n/Users/ikeda/.pyenv/versions/anaconda3-4.4.0/lib/python3.6/site-packages/pandas/core/base.py in _aggregate(self, arg, *args, **kwargs)\r\n 537 return self._aggregate_multiple_funcs(arg,\r\n 538 _level=_level,\r\n--> 539 _axis=_axis), None\r\n 540 else:\r\n 541 result = None\r\n\r\n/Users/ikeda/.pyenv/versions/anaconda3-4.4.0/lib/python3.6/site-packages/pandas/core/base.py in _aggregate_multiple_funcs(self, arg, _level, _axis)\r\n 594 # if we are empty\r\n 595 if not len(results):\r\n--> 596 raise ValueError(\"no results\")\r\n 597\r\n 598 try:\r\n\r\nValueError: no results\r\n\r\n```\r\n#### Problem description\r\n\r\nWhen using, e.g., `df.agg`, the `ddof` (degrees of freedom) value for the function `np.std` changes depending on how the function is given (single function or a list of functions), which may be so confusing for many people. 
I believe the behavior should be unified in some way.\r\n\r\nFurthermore, I could not find a way to obtain the `np.std` result with `ddof=0` by supplying it as one of the members of a list of functions. The `lambda` expression does not work well in a list of functions (this gives numerically unexpected results or even gives errors). This prevents us from using many useful methods like `df.agg`, `df.apply`, and `df.describe` when we want the `ddof=0` behavior. \r\n\r\nFrom https://github.com/pandas-dev/pandas/issues/13344, I guess the developers prefer the `ddof=1` behavior in pandas. So the expected behavior should be as below.\r\n\r\n#### Expected Output\r\n```\r\nIn [35]: df.agg(np.std) # Behavior of ddof=1\r\nOut[35]:\r\nA 2.0\r\nB 2.0\r\ndtype: float64\r\n\r\nIn [38]: df.agg([lambda x: np.std(x)]) # To obtain the ddof=0 results\r\nOut[38]:\r\n A B\r\n 1.632993 1.632993\r\n\r\nIn [41]: df.agg([np.mean, lambda x: np.std(x)])\r\n A B\r\nmean 2.0 3.0\r\n 1.632993 1.632993\r\n```\r\n\r\n#### Output of ``pd.show_versions()``\r\n\r\n
    \r\n\r\nINSTALLED VERSIONS\r\n------------------\r\ncommit: None\r\n\r\npandas: 0.21.0\r\npytest: 3.0.7\r\npip: 9.0.1\r\nsetuptools: 27.2.0\r\nCython: 0.25.2\r\nnumpy: 1.13.3\r\nscipy: 0.19.0\r\npyarrow: None\r\nxarray: None\r\nIPython: 5.3.0\r\nsphinx: 1.5.6\r\npatsy: 0.4.1\r\ndateutil: 2.6.1\r\npytz: 2017.3\r\nblosc: None\r\nbottleneck: 1.2.1\r\ntables: 3.3.0\r\nnumexpr: 2.6.2\r\nfeather: None\r\nmatplotlib: 2.0.2\r\nopenpyxl: 2.4.7\r\nxlrd: 1.0.0\r\nxlwt: 1.2.0\r\nxlsxwriter: 0.9.6\r\nlxml: 3.7.3\r\nbs4: 4.6.0\r\nhtml5lib: 0.999\r\nsqlalchemy: 1.1.9\r\npymysql: None\r\npsycopg2: None\r\njinja2: 2.9.6\r\ns3fs: None\r\nfastparquet: None\r\npandas_gbq: None\r\npandas_datareader: None\r\n\r\n
    \r\n", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/52371", "file_loc": {"base_commit": "bcc5160b3a5b0fc9c531da194c6bb83619045434", "files": [{"path": "pandas/tests/apply/test_frame_apply.py", "status": "modified", "Loc": {"(None, 'test_agg_list_like_func_with_args', 1648)": {"add": [1667]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": [], "doc": [], "test": ["pandas/tests/apply/test_frame_apply.py"], "config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "c13703c8dfb7324a05a82e8befe9b203a6590257", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/29742", "iss_label": "Bug\nSprint", "title": "spin docs --no-plot runs the examples", "body": "Seen at the EuroScipy sprint\r\n\r\nCommands run by spin:\r\n```\r\n$ export SPHINXOPTS=-W -D plot_gallery=0 -j auto\r\n$ cd doc\r\n$ make html\r\n```\r\n\r\nLooks like our Makefile does not use SPHINXOPTS the same way as expected:\r\nProbably we have a slightly different way of building the doc\r\n\r\n```\r\n\u276f make html-noplot -n\r\nsphinx-build -D plot_gallery=0 -b html -d _build/doctrees -T . -jauto \\\r\n _build/html/stable\r\necho\r\necho \"Build finished. The HTML pages are in _build/html/stable.\"\r\n```", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/29744", "file_loc": {"base_commit": "c13703c8dfb7324a05a82e8befe9b203a6590257", "files": [{"path": "doc/Makefile", "status": "modified", "Loc": {"(None, None, None)": {"add": [68], "mod": [5]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": [], "doc": ["doc/Makefile"], "test": [], "config": [], "asset": []}}, {"organization": "huggingface", "repo_name": "transformers", "base_commit": "147c8166852db64de12b851b8307f44c9e8fe0dd", "iss_has_pr": 1, "iss_html_url": "https://github.com/huggingface/transformers/issues/15640", "iss_label": "", "title": "Add support for ONNX-TensorRT conversion for GPT-J6B (and possible bug in rotary embedding)", "body": "### Who can help\r\n@patil-suraj \r\n\r\n## Information\r\n\r\nModel I am using: GPT-J\r\n\r\nThe problem arises when using:\r\n* [x] the official example scripts: (give details below)\r\n* [x] my own modified scripts: (give details below)\r\n\r\n## Description\r\nI opened this issue for two reasons:\r\n1. This is not strictly a bug report, rather a change that enables converting this model to ONNX and then parsing it using the current TensorRT ONNX parser.\r\n2. Possible implementation bug in GPT-J.\r\n\r\n## Details\r\n1. When exporting GPT-J to ONNX using the latest version (v4.16.2), one of the ops that is exported is [SplitToSequence](https://github.com/onnx/onnx/blob/main/docs/Operators.md#SplitToSequence) (along with more Sequence* ops) that is currently not supported in the [TensorRT ONNX parser](https://github.com/onnx/onnx-tensorrt/blob/master/docs/operators.md).\r\nThis is entirely due to just 1 line of code that uses `torch.repeat_interleave`. 
([relevant line](https://github.com/huggingface/transformers/blob/52d2e6f6e904ef9b75c78716ce77b98196ed837a/src/transformers/models/gptj/modeling_gptj.py#L67))\r\n```\r\nsin, cos = map(lambda t: t[None, offset : x.shape[1] + offset, None, :].repeat_interleave(2, 3), sincos)\r\n```\r\nBy replacing `lambda t` with this:\r\n```\r\nlambda t: t.view(-1, 1).repeat(1, 2).view(seq_len, -1)[None, offset : x.shape[1] + offset, None, :]\r\n```\r\nwe get the exact same output tensors but now exporting to ONNX doesn't include any Sequence* ops, and TensorRT can parse it successfully.\r\nThe suggested function is even faster, although probably not critical in this huge model (benched only on CPU):\r\n```\r\noriginal: 106 \u00b5s \u00b1 20.9 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each)\r\nsuggested: 32.4 \u00b5s \u00b1 6.55 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each)\r\n```\r\n\r\n2. I was following the implementation in EleutherAI for rotary positional embeddings and I'm trying to understand if this is a bug or I'm simply missing something (would love an explanation if you can spare the time) but there (EleutherAI) they implement this function (rotary positional embedding) using `torch.cat` instead of `torch.repeat_interleave`, as can be seen [here](https://github.com/EleutherAI/gpt-neox/blob/b30afd1d0a1d06220be9b5f2c9c9c1523defba96/megatron/model/positional_embeddings.py#L41).\r\n\r\nIf I'm not missing something, the EleutherAI version transforms a tensor from\r\n```\r\n[[1,2,3],\r\n [4,5,6]]\r\n```\r\nto \r\n```\r\n[[1,2,3,1,2,3],\r\n [4,5,6,4,5,6]]\r\n```\r\nand HF version (using repeat_interleave):\r\n```\r\n[[1,2,3],\r\n [4,5,6]]\r\n```\r\nto \r\n```\r\n[[1,1,2,2,3,3],\r\n [4,4,5,5,6,6]]\r\n```\r\nCan anyone confirm the current implementation is indeed correct? 
Because otherwise `cat` and `repeat_interleave` are very different, and the rest of the implementation doesn't take it into account.", "pr_html_url": "https://github.com/huggingface/transformers/pull/16492", "file_loc": {"base_commit": "147c8166852db64de12b851b8307f44c9e8fe0dd", "files": [{"path": "src/transformers/models/gptj/modeling_gptj.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [64]}, "(None, 'apply_rotary_pos_emb', 65)": {"mod": [66]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["src/transformers/models/gptj/modeling_gptj.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "5007d8e996cbe6c23dcf2b5792775d8fde104128", "iss_has_pr": 1, "iss_html_url": "https://github.com/deepfakes/faceswap/issues/252", "iss_label": "", "title": "added image sort tool to faceswap", "body": "I added image sort tool to faceswap, which very useful to extract one face from various faces\r\n\r\nExample original aligned folder:\r\n![fsviewer_2018-03-08_20-08-06](https://user-images.githubusercontent.com/8076202/37161724-ecbfac92-230c-11e8-8346-5d71c471edc7.png)\r\n\r\nSort it by similarity:\r\n`python.exe faceswap\\sorttool.py -i %WORKSPACE%\\data_src\\aligned -by similarity`\r\nresult:\r\n![fsviewer_2018-03-08_20-10-27](https://user-images.githubusercontent.com/8076202/37161776-0b94041a-230d-11e8-9841-b77a562f6120.png)\r\n\r\neasy delete faces which you dont need:\r\n![fsviewer_2018-03-08_20-12-52](https://user-images.githubusercontent.com/8076202/37161908-5fae29e0-230d-11e8-88c4-9f89632b05f3.png)\r\n\r\nSort by blur:\r\n`python.exe faceswap\\sorttool.py -i %WORKSPACE%\\data_src\\aligned -by blur`\r\n\r\nmost sharp 00000.png:\r\n![2018-03-08_20-15-42](https://user-images.githubusercontent.com/8076202/37162142-f37376e4-230d-11e8-8eb7-52892f4fd46c.png)\r\n\r\nmost blurred 00140.png:\r\n![fsviewer_2018-03-08_20-15-51](https://user-images.githubusercontent.com/8076202/37162187-0bc53f48-230e-11e8-8c23-646c72898704.png)\r\n\r\n\r\n", "pr_html_url": "https://github.com/deepfakes/faceswap/pull/255", "file_loc": {"base_commit": "5007d8e996cbe6c23dcf2b5792775d8fde104128", "files": [{"path": "plugins/PluginLoader.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0]}, "('PluginLoader', '_import', 20)": {"add": [23]}}}, {"path": "scripts/convert.py", "status": "modified", "Loc": {"('ConvertImage', 'add_optional_arguments', 24)": {"mod": [43, 44]}}}, {"path": "scripts/train.py", "status": "modified", "Loc": {"('TrainingProcessor', 'parse_arguments', 25)": {"mod": [75, 76]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["scripts/train.py", "scripts/convert.py", "plugins/PluginLoader.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "xtekky", "repo_name": "gpt4free", "base_commit": "b615a95a417d8a857b1f822bd2d2f993737d532a", "iss_has_pr": 1, "iss_html_url": "https://github.com/xtekky/gpt4free/issues/1347", "iss_label": "bug", "title": "Bing stopped working", "body": "**Bug description**\r\nYesterday, Bing still worked, but today brings up only:\r\n```\r\nUsing Bing provider\r\n0, message='Attempt to decode JSON with unexpected mimetype: text/html; 
charset=utf-8', url=URL('https://www.bing.com/turing/conversation/create?bundleVersion=1.1381.8')\r\n127.0.0.1 - - [14/Dec/2023 20:22:32] \"POST /backend-api/v2/conversation HTTP/1.1\" 200 -\r\n```\r\n\r\n**Screenshots**\r\n![image](https://github.com/xtekky/gpt4free/assets/11407417/3f562cfb-2596-4f65-ba75-4efec77d0f3e)\r\n\r\n\r\n**Environment**\r\n- python version: 3.12\r\n- location ( are you in a cloudfare flagged country ) : Ukraine\r\n", "pr_html_url": "https://github.com/xtekky/gpt4free/pull/1356", "file_loc": {"base_commit": "b615a95a417d8a857b1f822bd2d2f993737d532a", "files": [{"path": "g4f/Provider/Bing.py", "status": "modified", "Loc": {"(None, 'stream_generate', 432)": {"mod": [442, 443]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["g4f/Provider/Bing.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "psf", "repo_name": "requests", "base_commit": "95161ed313db11296c3bd473336340dbb19bb347", "iss_has_pr": 1, "iss_html_url": "https://github.com/psf/requests/issues/1995", "iss_label": "Planned\nContributor Friendly", "title": "Create an Extra for Better SSL Support", "body": "So right now the SSL connections when you use pyOpenSSL, ndg-httpsclient, and pyasn1 are more secure than if you just use the stdlib options. However it's hard to actually remember those three things. It would be cool if requests would add an extra to its setup.py so that people can install requests with betterssl, something like:\n\n``` python\nsetup(\n extras_require={\n \"betterssl\": [\"pyOpenSSL\", \"ndg-httpsclient\", \"pyasn1\"],\n },\n)\n```\n\nWould make it so people can install requests like `pip install requests[betterssl]` and get all of those dependencies without having to manually track those down. It also means people could depend on `requests[betterssl]` instead of just `requests` in their own setup.py's.\n\nExtra name can of course be bikeshed here :)\n", "pr_html_url": "https://github.com/psf/requests/pull/2195", "file_loc": {"base_commit": "95161ed313db11296c3bd473336340dbb19bb347", "files": [{"path": "setup.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [62]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["setup.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "AUTOMATIC1111", "repo_name": "stable-diffusion-webui", "base_commit": "458eda13211ac3498485f1e5154d90808fbcfb60", "iss_has_pr": 1, "iss_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/12104", "iss_label": "bug", "title": "[Bug]: Generating using LoRA fails with Runtime Error with `Lora/Networks: use old method` enabled", "body": "### Is there an existing issue for this?\n\n- [x] I have searched the existing issues and checked the recent builds/commits\n\n### What happened?\n\nI'm on commit 68f336bd994bed5442ad95bad6b6ad5564a5409a, master HEAD at time of posting.\r\nNone of my LORAs seem to be working anymore. 
Normal prompting works fine, but as soon as I try generating after adding a LORA to my prompt I receive the following:\r\n\r\n `RuntimeError: mat1 and mat2 shapes cannot be multiplied (77x3072 and 768x128)`\r\n\r\nI'm not well versed in python nor the inner workings of stable diffusion, so I can't debug this myself effectively.\r\nI don't think providing the LORA file or prompt is necessary, as I can reproduce this with any combination of checkpoints and LORAs which would previously work fine.\n\n### Steps to reproduce the problem\n\n1. Select any SD checkpoint\r\n2. txt2image tab\r\n3. Any combination of prompt and negative prompt, doesnt seem to matter\r\n4. Add a LORA to the prompt, no need to even add the activation token.\r\n5. Any generation settings (for my tests I'm using Euler a, 20 steps, 512x512, CFG 7, no scripts, no hires. fix, no face restore).\r\n6. Generate\n\n### What should have happened?\n\nI would expect the LORA to perform as it did in earlier versions with the same configuration, at the very least, generate an image. I haven't done a bisect, but I tried a commit from a week ago or so and it worked fine there. Every time I pull I delete venv and repositories folders beforehand.\n\n### Version or Commit where the problem happens\n\n68f336bd994bed5442ad95bad6b6ad5564a5409a\n\n### What Python version are you running on ?\n\nPython 3.10.x\n\n### What platforms do you use to access the UI ?\n\nWindows\n\n### What device are you running WebUI on?\n\nNvidia GPUs (RTX 20 above)\n\n### Cross attention optimization\n\nxformers\n\n### What browsers do you use to access the UI ?\n\nGoogle Chrome\n\n### Command Line Arguments\n\n```Shell\n--xformers --reinstall-xformers --precision full --no-half --skip-torch-cuda-test --opt-split-attention\n```\n\n\n### List of extensions\n\nddetailer, sd-webui-supermerger, stable-diffusion-webui-dataset-tag-editor, stable-diffusion-webui-wd14-tagger\n\n### Console logs\n\n```Shell\nvenv \"C:\\sd\\sdwebui\\webui\\venv\\Scripts\\Python.exe\"\r\nPython 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]\r\nVersion: v1.5.1\r\nCommit hash: 68f336bd994bed5442ad95bad6b6ad5564a5409a\r\nInstalling xformers\r\nCollecting xformers==0.0.20\r\n Using cached xformers-0.0.20-cp310-cp310-win_amd64.whl (97.6 MB)\r\nInstalling collected packages: xformers\r\nSuccessfully installed xformers-0.0.20\r\n\r\n[notice] A new release of pip available: 22.3.1 -> 23.2\r\n[notice] To update, run: C:\\sd\\sdwebui\\webui\\venv\\Scripts\\python.exe -m pip install --upgrade pip\r\n\r\nLaunching Web UI with arguments: --xformers --reinstall-xformers --precision full --no-half --skip-torch-cuda-test --opt-split-attention\r\nCheck config files...\r\nDone\r\nLoading weights [cb15a7187a] from C:\\sd\\sdwebui\\webui\\models\\Stable-diffusion\\Deliberate-inpainting.safetensors\r\nCreating model from config: C:\\sd\\sdwebui\\webui\\configs\\v1-inpainting-inference.yaml\r\nLatentInpaintDiffusion: Running in eps-prediction mode\r\nDiffusionWrapper has 859.54 M params.\r\nRunning on local URL: http://127.0.0.1:7860\r\n\r\nTo create a public link, set `share=True` in `launch()`.\r\nStartup time: 84.3s (launcher: 11.6s, import torch: 31.4s, import gradio: 8.7s, setup paths: 10.6s, other imports: 8.2s, opts onchange: 0.4s, setup codeformer: 0.4s, list SD models: 0.3s, load scripts: 11.1s, create ui: 1.0s, gradio launch: 0.5s).\r\nApplying attention optimization: xformers... 
done.\r\nModel loaded in 10.0s (load weights from disk: 0.8s, create model: 1.2s, apply weights to model: 6.4s, move model to device: 1.5s).\r\nLoading weights [f36b3ca4d1] from C:\\sd\\sdwebui\\webui\\models\\Stable-diffusion\\edgeOfRealism_edgeOfRealismBakedVAE.safetensors\r\nCreating model from config: C:\\sd\\sdwebui\\webui\\configs\\v1-inference.yaml\r\nLatentDiffusion: Running in eps-prediction mode\r\nDiffusionWrapper has 859.52 M params.\r\nApplying attention optimization: xformers... done.\r\nModel loaded in 8.3s (create model: 0.6s, apply weights to model: 5.9s, move model to device: 1.7s).\r\n*** Error completing request\r\n*** Arguments: ('task(yot3zok0bchp1w0)', 'pov of a beautiful asian woman, formal dress, perfect eyes, petite body, in the forest, colorful, yellow leaves, autumn, hair bun, black hair, facing the viewer, bokeh, soft lighting, perfect face, eye contact, brown eyes, ', 'badhandv4 easynegative ng_deepnegative_v1_75t', [], 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 0, '', '', [], , 0, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, '', 'None', 30, 4, 0, 0, False, 'None', '
    ', 'None', 30, 4, 0, 0, 4, 0.4, True, 32) {}\r\n Traceback (most recent call last):\r\n File \"C:\\sd\\sdwebui\\webui\\modules\\call_queue.py\", line 58, in f\r\n res = list(func(*args, **kwargs))\r\n File \"C:\\sd\\sdwebui\\webui\\modules\\call_queue.py\", line 37, in f\r\n res = func(*args, **kwargs)\r\n File \"C:\\sd\\sdwebui\\webui\\modules\\txt2img.py\", line 62, in txt2img\r\n processed = processing.process_images(p)\r\n File \"C:\\sd\\sdwebui\\webui\\modules\\processing.py\", line 677, in process_images\r\n res = process_images_inner(p)\r\n File \"C:\\sd\\sdwebui\\webui\\modules\\processing.py\", line 783, in process_images_inner\r\n p.setup_conds()\r\n File \"C:\\sd\\sdwebui\\webui\\modules\\processing.py\", line 1191, in setup_conds\r\n super().setup_conds()\r\n File \"C:\\sd\\sdwebui\\webui\\modules\\processing.py\", line 364, in setup_conds\r\n self.uc = self.get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, self.steps * self.step_multiplier, [self.cached_uc], self.extra_network_data)\r\n File \"C:\\sd\\sdwebui\\webui\\modules\\processing.py\", line 353, in get_conds_with_caching\r\n cache[1] = function(shared.sd_model, required_prompts, steps)\r\n File \"C:\\sd\\sdwebui\\webui\\modules\\prompt_parser.py\", line 163, in get_learned_conditioning\r\n conds = model.get_learned_conditioning(texts)\r\n File \"C:\\sd\\sdwebui\\webui\\repositories\\stable-diffusion-stability-ai\\ldm\\models\\diffusion\\ddpm.py\", line 669, in get_learned_conditioning\r\n c = self.cond_stage_model(c)\r\n File \"C:\\sd\\sdwebui\\webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"C:\\sd\\sdwebui\\webui\\modules\\sd_hijack_clip.py\", line 234, in forward\r\n z = self.process_tokens(tokens, multipliers)\r\n File \"C:\\sd\\sdwebui\\webui\\modules\\sd_hijack_clip.py\", line 271, in process_tokens\r\n z = self.encode_with_transformers(tokens)\r\n File \"C:\\sd\\sdwebui\\webui\\modules\\sd_hijack_clip.py\", line 324, in encode_with_transformers\r\n outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)\r\n File \"C:\\sd\\sdwebui\\webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"C:\\sd\\sdwebui\\webui\\venv\\lib\\site-packages\\transformers\\models\\clip\\modeling_clip.py\", line 811, in forward\r\n return self.text_model(\r\n File \"C:\\sd\\sdwebui\\webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"C:\\sd\\sdwebui\\webui\\venv\\lib\\site-packages\\transformers\\models\\clip\\modeling_clip.py\", line 721, in forward\r\n encoder_outputs = self.encoder(\r\n File \"C:\\sd\\sdwebui\\webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"C:\\sd\\sdwebui\\webui\\venv\\lib\\site-packages\\transformers\\models\\clip\\modeling_clip.py\", line 650, in forward\r\n layer_outputs = encoder_layer(\r\n File \"C:\\sd\\sdwebui\\webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"C:\\sd\\sdwebui\\webui\\venv\\lib\\site-packages\\transformers\\models\\clip\\modeling_clip.py\", line 389, in forward\r\n hidden_states = self.mlp(hidden_states)\r\n File 
\"C:\\sd\\sdwebui\\webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"C:\\sd\\sdwebui\\webui\\venv\\lib\\site-packages\\transformers\\models\\clip\\modeling_clip.py\", line 344, in forward\r\n hidden_states = self.fc1(hidden_states)\r\n File \"C:\\sd\\sdwebui\\webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"C:\\sd\\sdwebui\\webui\\extensions-builtin\\Lora\\networks.py\", line 357, in network_Linear_forward\r\n return network_forward(self, input, torch.nn.Linear_forward_before_network)\r\n File \"C:\\sd\\sdwebui\\webui\\extensions-builtin\\Lora\\networks.py\", line 345, in network_forward\r\n y = module.forward(y, input)\r\n File \"C:\\sd\\sdwebui\\webui\\extensions-builtin\\Lora\\network_lora.py\", line 84, in forward\r\n return y + self.up_model(self.down_model(x)) * self.multiplier() * self.calc_scale()\r\n File \"C:\\sd\\sdwebui\\webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"C:\\sd\\sdwebui\\webui\\extensions-builtin\\Lora\\networks.py\", line 357, in network_Linear_forward\r\n return network_forward(self, input, torch.nn.Linear_forward_before_network)\r\n File \"C:\\sd\\sdwebui\\webui\\extensions-builtin\\Lora\\networks.py\", line 337, in network_forward\r\n y = original_forward(module, input)\r\n File \"C:\\sd\\sdwebui\\webui\\venv\\lib\\site-packages\\torch\\nn\\modules\\linear.py\", line 114, in forward\r\n return F.linear(input, self.weight, self.bias)\r\n RuntimeError: mat1 and mat2 shapes cannot be multiplied (77x3072 and 768x128)\n```\n\n\n### Additional information\n\nNone of the extensions listed are used in the context of this issue", "pr_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12466", "file_loc": {"base_commit": "458eda13211ac3498485f1e5154d90808fbcfb60", "files": [{"path": "extensions-builtin/Lora/networks.py", "status": "modified", "Loc": {"(None, 'network_forward', 338)": {"mod": [360]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["extensions-builtin/Lora/networks.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "14d03f60ed366df942be09ee4bc394a69958e09c", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/2185", "iss_label": "Bug\nModerate", "title": "MinibatchKMeans bad center reallocation causes duplicate centers", "body": "For instance have a look at:\n\n http://scikit-learn.org/dev/auto_examples/cluster/plot_dict_face_patches.html\n\nsome of the centroids are duplicated, presumably because of a bug in the bad cluster reallocation heuristic.\n", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/3376", "file_loc": {"base_commit": "14d03f60ed366df942be09ee4bc394a69958e09c", "files": [{"path": "sklearn/cluster/k_means_.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [28]}, "(None, '_labels_inertia_precompute_dense', 399)": {"add": [411], "mod": [399, 402, 403, 409]}, "(None, '_labels_inertia', 416)": {"add": [433, 451], "mod": [418, 420, 443, 444, 449, 458]}, "(None, '_mini_batch_step', 784)": {"add": [862], "mod": [789, 794, 797, 800, 803, 807, 
809, 812, 817, 818, 819, 821, 824, 828, 829, 839, 840, 841, 842, 843, 844, 845, 846, 847, 848, 849, 850, 851, 853, 854, 855, 856]}, "('KMeans', None, 543)": {"mod": [553, 557, 575, 578, 581, 582, 583, 604, 605]}, "('KMeans', 'transform', 718)": {"mod": [719]}, "('MiniBatchKMeans', None, 969)": {"mod": [983, 990, 1010, 1029, 1038]}, "('MiniBatchKMeans', 'fit', 1081)": {"mod": [1162]}, "('MiniBatchKMeans', 'partial_fit', 1242)": {"mod": [1260, 1279]}}}, {"path": "sklearn/cluster/tests/test_k_means.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [314]}, "(None, 'test_minibatch_reassign', 315)": {"add": [357], "mod": [320, 323, 332, 337, 338, 339, 340, 345, 349, 355]}}}, {"path": "sklearn/utils/setup.py", "status": "modified", "Loc": {"(None, 'configuration', 7)": {"mod": [67, 68]}}}, {"path": "sklearn/utils/tests/test_extmath.py", "status": "modified", "Loc": {"(None, 'test_random_weights', 61)": {"mod": [75, 76]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["sklearn/utils/setup.py", "sklearn/cluster/k_means_.py"], "doc": [], "test": ["sklearn/cluster/tests/test_k_means.py", "sklearn/utils/tests/test_extmath.py"], "config": [], "asset": []}}, {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "c8f3d07e86dd41074971b5423fb932c2eda6db1e", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/3341", "iss_label": "", "title": "Overriding the MailSender class", "body": "I'd like to use the built-in email notification service for when a scraper exceeds a certain memory limit (`MEMUSAGE_NOTIFY_MAIL` setting), but it looks like it's not possible to specify the MailSender class to use to send the email. I don't want to use SMTP, I'd like to use a third-party mail sender (e.g. 
sendgrid).\r\nIs there a way around this?\r\nThanks", "pr_html_url": "https://github.com/scrapy/scrapy/pull/3346", "file_loc": {"base_commit": "c8f3d07e86dd41074971b5423fb932c2eda6db1e", "files": [{"path": "docs/topics/email.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [70, 108], "mod": [11, 12, 13, 14, 15, 17, 18, 20, 21, 23, 24, 26, 27, 29, 30, 32, 34, 36, 38, 39, 41, 42, 83, 114, 115]}}}, {"path": "docs/topics/settings.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [182]}}}, {"path": "scrapy/extensions/memusage.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [17], "mod": [16]}, "('MemoryUsage', '__init__', 24)": {"mod": [36, 37, 38, 39]}, "('MemoryUsage', '_check_limit', 77)": {"mod": [80, 81, 82, 84, 85]}, "('MemoryUsage', '_check_warning', 96)": {"mod": [97, 101, 102, 103, 105, 106]}, "('MemoryUsage', '_send_report', 111)": {"mod": [114, 115, 116, 118]}}}, {"path": "scrapy/extensions/statsmailer.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [9], "mod": [8]}, "('StatsMailer', None, 11)": {"add": [12], "mod": [11]}, "('StatsMailer', 'from_crawler', 19)": {"mod": [23]}}}, {"path": "scrapy/mail.py", "status": "modified", "Loc": {"('MailSender', 'send', 58)": {"add": [100], "mod": [59, 60, 61, 62, 64, 65, 67, 68, 69, 70, 71, 72, 73, 74, 76, 77, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89]}, "('MailSender', '_sendmail', 122)": {"add": [137]}, "('MailSender', None, 39)": {"mod": [39]}, "('MailSender', 'from_settings', 53)": {"mod": [54, 55, 56]}}}, {"path": "scrapy/settings/default_settings.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [51]}}}, {"path": "scrapy/utils/test.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [28]}}}, {"path": "tests/test_mail.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [5], "mod": [4, 7, 9, 11, 12, 13, 14, 16, 18, 19, 20, 22, 23, 24, 25, 26, 28, 29, 30, 31, 33, 34, 35, 36, 37, 39, 40, 41, 43]}, "('MailSenderTest', 'test_send_attach', 43)": {"mod": [49, 50, 51, 53, 54, 55, 56, 58, 59, 60]}, "('MailSenderTest', None, 9)": {"mod": [71, 72, 74, 91]}, "('MailSenderTest', 'test_send_utf8', 74)": {"mod": [77, 78, 79, 81, 82, 83, 85, 86]}, "('MailSenderTest', 'test_send_attach_utf8', 91)": {"mod": [99, 100, 101, 102, 104, 105, 106, 108, 109, 111, 112]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["scrapy/extensions/statsmailer.py", "scrapy/settings/default_settings.py", "scrapy/extensions/memusage.py", "scrapy/mail.py"], "doc": ["docs/topics/settings.rst", "docs/topics/email.rst"], "test": ["tests/test_mail.py", "scrapy/utils/test.py"], "config": [], "asset": []}}, {"organization": "3b1b", "repo_name": "manim", "base_commit": "dbdd7996960ba46ed044a773290b02f17478c760", "iss_has_pr": 1, "iss_html_url": "https://github.com/3b1b/manim/issues/1059", "iss_label": "", "title": " Impossible to open 'CC:/manim/manim_3_feb/media/videos/example_scenes/480p15/partial_movie_files/SquareToCircle/00000.mp4'", "body": "![Screenshot_1](https://user-images.githubusercontent.com/65260808/81756436-479cd380-94ab-11ea-89e1-af386563bfcd.png)\r\nHelp me solve this ", "pr_html_url": "https://github.com/3b1b/manim/pull/1057", "file_loc": {"base_commit": "dbdd7996960ba46ed044a773290b02f17478c760", "files": [{"path": "manimlib/scene/scene_file_writer.py", "status": "modified", "Loc": 
{"('SceneFileWriter', 'combine_movie_files', 253)": {"mod": [289]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["manimlib/scene/scene_file_writer.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "python", "repo_name": "cpython", "base_commit": "ae00b810d1d3ad7f1f7e226b02ece37c986330e7", "iss_has_pr": 1, "iss_html_url": "https://github.com/python/cpython/issues/104803", "iss_label": "OS-windows", "title": "Allow detecting Dev Drive on Windows", "body": "Windows just announced a new [Dev Drive](https://learn.microsoft.com/en-us/windows/dev-drive/) feature, optimised for high I/O scenarios such as build and test. It also works as a very clear signal that the user is a developer and is doing developer-like tasks.\r\n\r\nWe should add a function to allow querying whether a specific path is on a Dev Drive. The API is relatively low level, and cannot currently be used from Python, but would allow Python apps to detect when the user is operating on a Dev Drive (e.g. installing or compiling something on one), or choose or offer a more performant temporary or cache location than the user directory.\r\n\r\n(For a variety of mostly compatibility reasons, there's no way for Windows to redirect `%TEMP%` onto a Dev Drive, but apps that are aware of it can do it for themselves.)\n\n\n### Linked PRs\n* gh-104805\n* gh-105054\n\n", "pr_html_url": "https://github.com/python/cpython/pull/104805", "file_loc": {"base_commit": "ae00b810d1d3ad7f1f7e226b02ece37c986330e7", "files": [{"path": "Doc/library/os.path.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [306]}}}, {"path": "Lib/ntpath.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [869]}}}, {"path": "Lib/test/test_ntpath.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [994]}}}, {"path": "Modules/clinic/posixmodule.c.h", "status": "modified", "Loc": {"(None, None, None)": {"add": [1717, 11381], "mod": [11925]}}}, {"path": "Modules/posixmodule.c", "status": "modified", "Loc": {"(None, None, None)": {"add": [4532, 15799]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["Modules/clinic/posixmodule.c.h", "Lib/ntpath.py", "Modules/posixmodule.c"], "doc": ["Doc/library/os.path.rst"], "test": ["Lib/test/test_ntpath.py"], "config": [], "asset": []}}, {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "2814e0e1972fa38151b6800c881d49f50edf9c6b", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/5226", "iss_label": "enhancement\ngood first issue\ndocs", "title": "Document Reppy Python version support", "body": "The optional dependency on reppy for one of the built-in robots.txt parsers is [preventing us from running the extra-dependencies CI job with Python 3.9+](https://github.com/seomoz/reppy/issues/122). https://github.com/seomoz/reppy has not have a commit for ~1.5 years.\r\n\r\nSo I think we should deprecate the component.\r\n\r\nIf we don\u2019t, we should document this limitation, and schedule a deprecation for 1 year before Python 3.8 reaches end of life, ~~i.e. 
in 9 months~~, because once we drop Python 3.8 support we will be forced to remove this component anyway, so giving a deprecation warning 1 year before is probably in the best interest of any user of the component.", "pr_html_url": "https://github.com/scrapy/scrapy/pull/5231", "file_loc": {"base_commit": "2814e0e1972fa38151b6800c881d49f50edf9c6b", "files": [{"path": "docs/topics/downloader-middleware.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [1072]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": [], "doc": ["docs/topics/downloader-middleware.rst"], "test": [], "config": [], "asset": []}}, {"organization": "psf", "repo_name": "requests", "base_commit": "9968a10fcfad7268b552808c4f8946eecafc956a", "iss_has_pr": 1, "iss_html_url": "https://github.com/psf/requests/issues/1650", "iss_label": "", "title": "Requests doesn't catch requests.packages.urllib3.exceptions.ProxyError", "body": "Requests doesn't catch requests.packages.urllib3.exceptions.ProxyError and translate it into a requests module specific exception which derives from RequestException as it does for other errors originating from urllib3. This means if trying to catch any exception derived from RequestException so as to treat it specially, the urllib3 ProxyError will be missed.\n", "pr_html_url": "https://github.com/psf/requests/pull/1651", "file_loc": {"base_commit": "9968a10fcfad7268b552808c4f8946eecafc956a", "files": [{"path": "requests/adapters.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [24], "mod": [26]}, "('HTTPAdapter', 'send', 283)": {"add": [355]}}}, {"path": "requests/exceptions.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [29]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["requests/adapters.py", "requests/exceptions.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "AUTOMATIC1111", "repo_name": "stable-diffusion-webui", "base_commit": "7f8ab1ee8f304031b3404e25761dd0f4c7be7df8", "iss_has_pr": 1, "iss_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/873", "iss_label": "enhancement", "title": "Outpainting script does not save multiple images when using batch sliders", "body": "When using the batch-count slider and the batch-size slider, the outpainting script does not save multiple images, but just the first one.\r\n\r\nLooking at the console window we can see the actual processing is happening for all the N images (batch-count * batch-size), but at the end of the process only the first one is saved to disk.\r\n", "pr_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/3244", "file_loc": {"base_commit": "7f8ab1ee8f304031b3404e25761dd0f4c7be7df8", "files": [{"path": "scripts/outpainting_mk_2.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [262]}, "('Script', 'run', 142)": {"mod": [175, 177, 179, 245, 247, 248, 249, 250, 251, 252, 253, 254, 256, 259, 261]}, "('Script', 'expand', 179)": {"mod": [185, 186, 187, 188, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 201, 202, 203, 204, 206, 207, 209, 210, 211, 212, 213, 214, 216, 217, 219, 220, 221, 222, 235, 241, 242, 243]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], 
"commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["scripts/outpainting_mk_2.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "803eb82362278b755127649e9bb5f385639a23ca", "iss_has_pr": 1, "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/613", "iss_label": "good first issue\nsweep", "title": "Add numpy doc strings ", "body": "Add numpy style doc strings to all functions apart from the main.py file. \n\n\n\n\n\n
    \nChecklist\n\n- [X] `gpt_engineer/ai.py`\n> \u2022 For each function in this file, add or replace the existing docstring with a numpy-style docstring. The docstring should include a brief description of the function, a list of parameters with their types and descriptions, and a description of the return value.\n\n- [X] `gpt_engineer/chat_to_files.py`\n> \u2022 For each function in this file, add or replace the existing docstring with a numpy-style docstring. The docstring should include a brief description of the function, a list of parameters with their types and descriptions, and a description of the return value.\n\n- [X] `gpt_engineer/collect.py`\n> \u2022 For each function in this file, add or replace the existing docstring with a numpy-style docstring. The docstring should include a brief description of the function, a list of parameters with their types and descriptions, and a description of the return value.\n\n- [X] `gpt_engineer/db.py`\n> \u2022 For each function in this file, add or replace the existing docstring with a numpy-style docstring. The docstring should include a brief description of the function, a list of parameters with their types and descriptions, and a description of the return value.\n\n- [X] `gpt_engineer/learning.py`\n> \u2022 For each function in this file, add or replace the existing docstring with a numpy-style docstring. The docstring should include a brief description of the function, a list of parameters with their types and descriptions, and a description of the return value.\n\n
    \n", "pr_html_url": "https://github.com/AntonOsika/gpt-engineer/pull/615", "file_loc": {"base_commit": "803eb82362278b755127649e9bb5f385639a23ca", "files": [{"path": "gpt_engineer/ai.py", "status": "modified", "Loc": {"('AI', None, 39)": {"add": [40, 52, 59, 62, 65, 97, 101, 127, 141, 144]}, "('AI', 'next', 68)": {"add": [77]}, "('AI', 'update_token_usage_log', 104)": {"add": [106]}, "(None, 'fallback_model', 156)": {"add": [156]}, "(None, 'create_chat_model', 169)": {"add": [169]}, "(None, 'get_tokenizer', 188)": {"add": [188]}}}, {"path": "gpt_engineer/chat_to_files.py", "status": "modified", "Loc": {"(None, 'to_files', 44)": {"add": [44]}, "(None, 'parse_chat', 7)": {"mod": [9, 10]}, "(None, 'overwrite_files', 52)": {"mod": [54]}, "(None, 'get_code_strings', 69)": {"mod": [71]}, "(None, 'format_file_to_input', 84)": {"mod": [86]}}}, {"path": "gpt_engineer/collect.py", "status": "modified", "Loc": {"(None, 'send_learning', 11)": {"add": [12, 19]}, "(None, 'collect_learnings', 33)": {"add": [33]}, "(None, 'steps_file_hash', 55)": {"add": [55]}}}, {"path": "gpt_engineer/db.py", "status": "modified", "Loc": {"('DB', None, 9)": {"add": [12, 17, 20, 28, 34]}, "(None, 'archive', 56)": {"add": [56]}, "('DB', '__setitem__', 34)": {"mod": [41]}}}, {"path": "gpt_engineer/learning.py", "status": "modified", "Loc": {"(None, 'human_review_input', 54)": {"add": [54]}, "(None, 'check_consent', 98)": {"add": [98]}, "(None, 'collect_consent', 115)": {"add": [115]}, "(None, 'ask_if_can_store', 130)": {"add": [130]}, "(None, 'logs_to_string', 149)": {"add": [149]}, "(None, 'extract_learning', 157)": {"add": [159]}, "(None, 'get_session', 178)": {"mod": [179]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["gpt_engineer/learning.py", "gpt_engineer/db.py", "gpt_engineer/chat_to_files.py", "gpt_engineer/ai.py", "gpt_engineer/collect.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "abi", "repo_name": "screenshot-to-code", "base_commit": "b9522fede2835b3c3b4728e1d005541087ec2208", "iss_has_pr": 1, "iss_html_url": "https://github.com/abi/screenshot-to-code/issues/29", "iss_label": "", "title": "Allow user to open the preview website in a new window", "body": null, "pr_html_url": "https://github.com/abi/screenshot-to-code/pull/99", "file_loc": {"base_commit": "b9522fede2835b3c3b4728e1d005541087ec2208", "files": [{"path": "frontend/src/App.tsx", "status": "modified", "Loc": {"(None, None, None)": {"add": [99, 316], "mod": [322, 323, 324, 325, 326, 327, 328]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["frontend/src/App.tsx"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "huggingface", "repo_name": "transformers", "base_commit": "eb3bd73ce35bfef56eeb722d697f2d39a06a8f8d", "iss_has_pr": 1, "iss_html_url": "https://github.com/huggingface/transformers/issues/8171", "iss_label": "New model", "title": "Need suggestion on contributing TFDPR", "body": "# \ud83c\udf1f New model addition\r\n\r\n## Model description\r\nHi, I would love to try contributing TFDPR . 
This is the first time to me, so I need some suggestions.\r\nI have followed @sshleifer 's [great PR on TFBart model](https://github.com/huggingface/transformers/commit/829842159efeb1f920cbbb1daf5ad67e0114d0b9) on 4 files :` __init__.py , convert_pytorch_checkpoint_to_tf2.py , utils/dummy_tf_objects.py` and (newly created) `modeling_tf_dpr.py `\r\n\r\nNow the TF model works properly and can load Pytorch's weights successfully the same output as Pytorch's counterparts **except** small random noise (1e-5) which I suspect of some dtypes different , but I could not find the cause. \r\n\r\nI guess I need to add document on docs/source/model_doc/dpr.rst , and that's all ? \r\n**My question is do I need to change / fix any other files ? and/or do I need to do some other thing before making PR ?**\r\n\r\n\r\nTo resolve TF vs. Pytorch naming issues, there's one change regarding `TFBertModel` vs. `TFBertMainLayer` as [discussed here](https://discuss.huggingface.co/t/solved-issue-on-translating-dpr-to-tfdpr-on-loading-pytorch-weights-to-tf-model/1764) .\r\nThanks to @sshleifer for his help to solve the issue.\r\n\r\n## Open source status\r\n\r\n* [X] the model implementation is available: (give details)\r\nYou can see all the modified codes with test run at : \r\nhttps://colab.research.google.com/drive/1lU4fx7zkr-Y3CXa3wmHIY8yJhKdiN3DI?usp=sharing\r\n(to easily navigate the changes, please \u201cfind on page\u201d for e.g. `TFDPRContextEncoder` )\r\n\r\n* [X] the model weights are available: (give details)\r\nAt the moment, I use existing Pytorch weights, but will upload TF weights too.\r\n\r\n* [X] who are the authors: (mention them, if possible by @gh-username)\r\n@ratthachat ", "pr_html_url": "https://github.com/huggingface/transformers/pull/8203", "file_loc": {"base_commit": "eb3bd73ce35bfef56eeb722d697f2d39a06a8f8d", "files": [{"path": "docs/source/model_doc/dpr.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [101]}}}, {"path": "src/transformers/__init__.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [408, 715]}}}, {"path": "src/transformers/convert_pytorch_checkpoint_to_tf2.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [27, 45, 61, 100, 149]}}}, {"path": "src/transformers/modeling_tf_auto.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [45, 89, 194]}}}, {"path": "src/transformers/utils/dummy_pt_objects.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [737]}}}, {"path": "src/transformers/utils/dummy_tf_objects.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [497]}}}, {"path": "tests/test_modeling_dpr.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [26]}, "('DPRModelTest', 'test_model_from_pretrained', 214)": {"add": [229]}}}, {"path": "utils/check_repo.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [35, 59, 89]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["src/transformers/utils/dummy_pt_objects.py", "src/transformers/utils/dummy_tf_objects.py", "src/transformers/__init__.py", "src/transformers/modeling_tf_auto.py", "utils/check_repo.py", "src/transformers/convert_pytorch_checkpoint_to_tf2.py"], "doc": ["docs/source/model_doc/dpr.rst"], "test": ["tests/test_modeling_dpr.py"], "config": [], "asset": []}}, {"organization": "huggingface", "repo_name": "transformers", "base_commit": 
"9bee9ff5db6e68fb31065898d7e924d07c1eb9c1", "iss_has_pr": 1, "iss_html_url": "https://github.com/huggingface/transformers/issues/34390", "iss_label": "bug", "title": "[mask2former] torch.export error for Mask2Former", "body": "### System Info\r\n\r\n- `transformers` version: 4.46.0.dev0\r\n- Platform: Linux-6.8.0-47-generic-x86_64-with-glibc2.35\r\n- Python version: 3.11.9\r\n- Huggingface_hub version: 0.25.2\r\n- Safetensors version: 0.4.5\r\n- Accelerate version: not installed\r\n- Accelerate config: not found\r\n- PyTorch version (GPU?): 2.4.1+cu121 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using distributed or parallel set-up in script?: \r\n- Using GPU in script?: \r\n- GPU type: NVIDIA GeForce RTX 4090\r\n\r\n### Who can help?\r\n\r\n@amyeroberts, @qubvel, @ylacombe\r\n\r\n### Information\r\n\r\n- [ ] The official example scripts\r\n- [X] My own modified scripts\r\n\r\n### Tasks\r\n\r\n- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\r\n- [ ] My own task or dataset (give details below)\r\n\r\n### Reproduction\r\n\r\n```python\r\nimport torch\r\nfrom transformers import Mask2FormerForUniversalSegmentation\r\n\r\nmodel = Mask2FormerForUniversalSegmentation.from_pretrained(\r\n \"facebook/mask2former-swin-base-coco-panoptic\", torchscript=True\r\n)\r\n\r\nscripted_model = torch.export.export(model, args=(torch.randn(1, 3, 800, 1280),))\r\n```\r\nwhich causes\r\n```\r\nUserError: Could not extract specialized integer from data-dependent expression u0 (unhinted: u0). (Size-like symbols: none)\r\n\r\nPotential framework code culprit (scroll up for full backtrace):\r\n File \"/home/philkuz/.pyenv/versions/3.11.9/envs/gml311/lib/python3.11/site-packages/torch/_dynamo/utils.py\", line 2132, in run_node\r\n return node.target(*args, **kwargs)\r\n\r\nFor more information, run with TORCH_LOGS=\"dynamic\"\r\nFor extended logs when we create symbols, also add TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL=\"u0\"\r\nIf you suspect the guard was triggered from C++, add TORCHDYNAMO_EXTENDED_DEBUG_CPP=1\r\nFor more debugging help, see https://docs.google.com/document/d/1HSuTTVvYH1pTew89Rtpeu84Ht3nQEFTYhAX3Ypa_xJs/edit?usp=sharing\r\n\r\nUser Stack (most recent call last):\r\n (snipped, see stack below for prefix)\r\n File \"/home/philkuz/dev/transformers/src/transformers/models/mask2former/modeling_mask2former.py\", line 2499, in forward\r\n outputs = self.model(\r\n File \"/home/philkuz/.pyenv/versions/3.11.9/envs/gml311/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1747, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/philkuz/dev/transformers/src/transformers/models/mask2former/modeling_mask2former.py\", line 2270, in forward\r\n pixel_level_module_output = self.pixel_level_module(\r\n File \"/home/philkuz/.pyenv/versions/3.11.9/envs/gml311/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1747, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/philkuz/dev/transformers/src/transformers/models/mask2former/modeling_mask2former.py\", line 1395, in forward\r\n decoder_output = self.decoder(backbone_features, output_hidden_states=output_hidden_states)\r\n File \"/home/philkuz/.pyenv/versions/3.11.9/envs/gml311/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1747, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File 
\"/home/philkuz/dev/transformers/src/transformers/models/mask2former/modeling_mask2former.py\", line 1319, in forward\r\n encoder_outputs = self.encoder(\r\n File \"/home/philkuz/.pyenv/versions/3.11.9/envs/gml311/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1747, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/philkuz/dev/transformers/src/transformers/models/mask2former/modeling_mask2former.py\", line 1165, in forward\r\n reference_points = self.get_reference_points(spatial_shapes, valid_ratios, device=inputs_embeds.device)\r\n File \"/home/philkuz/dev/transformers/src/transformers/models/mask2former/modeling_mask2former.py\", line 1106, in get_reference_points\r\n torch.linspace(0.5, height - 0.5, height, dtype=valid_ratios.dtype, device=device),\r\n\r\nFor C++ stack trace, run with TORCHDYNAMO_EXTENDED_DEBUG_CPP=1\r\nFor more information about this error, see: https://pytorch.org/docs/main/generated/exportdb/index.html#constrain-as-size-example\r\n\r\nfrom user code:\r\n File \"/home/philkuz/dev/transformers/src/transformers/models/mask2former/modeling_mask2former.py\", line 2499, in forward\r\n outputs = self.model(\r\n File \"/home/philkuz/.pyenv/versions/3.11.9/envs/gml311/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1747, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/philkuz/dev/transformers/src/transformers/models/mask2former/modeling_mask2former.py\", line 2270, in forward\r\n pixel_level_module_output = self.pixel_level_module(\r\n File \"/home/philkuz/.pyenv/versions/3.11.9/envs/gml311/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1747, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/philkuz/dev/transformers/src/transformers/models/mask2former/modeling_mask2former.py\", line 1395, in forward\r\n decoder_output = self.decoder(backbone_features, output_hidden_states=output_hidden_states)\r\n File \"/home/philkuz/.pyenv/versions/3.11.9/envs/gml311/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1747, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/philkuz/dev/transformers/src/transformers/models/mask2former/modeling_mask2former.py\", line 1319, in forward\r\n encoder_outputs = self.encoder(\r\n File \"/home/philkuz/.pyenv/versions/3.11.9/envs/gml311/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1747, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/philkuz/dev/transformers/src/transformers/models/mask2former/modeling_mask2former.py\", line 1165, in forward\r\n reference_points = self.get_reference_points(spatial_shapes, valid_ratios, device=inputs_embeds.device)\r\n File \"/home/philkuz/dev/transformers/src/transformers/models/mask2former/modeling_mask2former.py\", line 1106, in get_reference_points\r\n torch.linspace(0.5, height - 0.5, height, dtype=valid_ratios.dtype, device=device),\r\n ```\r\n\r\n### Expected behavior\r\n\r\ntorch.export works for this model.", "pr_html_url": "https://github.com/huggingface/transformers/pull/34393", "file_loc": {"base_commit": "9bee9ff5db6e68fb31065898d7e924d07c1eb9c1", "files": [{"path": "src/transformers/models/mask2former/modeling_mask2former.py", "status": "modified", "Loc": {"('Mask2FormerPixelDecoder', 'forward', 1280)": {"add": [1333], "mod": [1305, 1307, 1323, 1337, 1339, 1341, 1345]}, "('Mask2FormerPixelDecoderEncoderMultiscaleDeformableAttention', 'forward', 921)": {"mod": [929, 939, 960, 973]}, 
"('Mask2FormerPixelDecoderEncoderLayer', 'forward', 998)": {"mod": [1004, 1018, 1019, 1036]}, "('Mask2FormerPixelDecoderEncoderOnly', None, 1069)": {"mod": [1089]}, "('Mask2FormerPixelDecoderEncoderOnly', 'get_reference_points', 1089)": {"mod": [1094, 1095, 1104]}, "('Mask2FormerPixelDecoderEncoderOnly', 'forward', 1120)": {"mod": [1125, 1143, 1144, 1165, 1179]}, "('Mask2FormerMaskedAttentionDecoder', 'forward', 1792)": {"mod": [1879]}}}, {"path": "tests/models/mask2former/test_modeling_mask2former.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [22]}, "('Mask2FormerModelIntegrationTest', 'test_with_segmentation_maps_and_loss', 466)": {"add": [483]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["src/transformers/models/mask2former/modeling_mask2former.py"], "doc": [], "test": ["tests/models/mask2former/test_modeling_mask2former.py"], "config": [], "asset": []}}, {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "19d0942c74731d797a3590b1d8d46ece5a6d751f", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/3077", "iss_label": "bug\nupstream issue", "title": "scrapy selector fails when large lines are present response", "body": "Originally encoutered when scraping [Amazon restaurant](https://www.amazon.com/restaurants/zzzuszimbos0015gammaloc1name-new-york/d/B01HH7CS44?ref_=amzrst_pnr_cp_b_B01HH7CS44_438). \r\nThis page contains multiple script tag with lines greater then 64,000 character in one line. \r\nThe selector (xpath and css) does not search beyond these lines. \r\n\r\nDue to this the following xpath `'//h1[contains(@class, \"hw-dp-restaurant-name\")]/text()'` to extract name of the restaurant returns empty even though there is a matching tag is present.\r\n\r\n\r\nPFA the response text at [original_response.html.txt.gz](https://github.com/scrapy/scrapy/files/1631425/original_response.html.txt.gz)\r\n", "pr_html_url": "https://github.com/scrapy/scrapy/pull/261", "file_loc": {"base_commit": "19d0942c74731d797a3590b1d8d46ece5a6d751f", "files": [{"path": "docs/contributing.rst", "status": "modified", "Loc": {"(None, None, None)": {"mod": [76]}}}, {"path": "scrapy/tests/test_utils_url.py", "status": "modified", "Loc": {"('UrlUtilsTest', None, 8)": {"add": [50]}, "(None, None, None)": {"mod": [3, 4]}, "('MySpider', 'test_url_is_from_spider_with_allowed_domains_class_attributes', 52)": {"mod": [54]}}}, {"path": "scrapy/utils/url.py", "status": "modified", "Loc": {"(None, 'url_is_from_spider', 25)": {"mod": [27, 28]}, "(None, 'canonicalize_url', 33)": {"mod": [33]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["scrapy/utils/url.py"], "doc": ["docs/contributing.rst"], "test": ["scrapy/tests/test_utils_url.py"], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "953757a3e37ffb80570a20a8eca52dae35fc27bb", "iss_html_url": "https://github.com/pandas-dev/pandas/issues/22471", "iss_label": "Testing\nClean\ngood first issue", "title": "TST/CLN: remove TestData from frame-tests; replace with fixtures", "body": "Following review in #22236: \r\n> ok, pls open a new issue that refs this, to remove use of `TestData` in favor of fixtures\r\n\r\nStarted the process in that PR by 
creating a `conftest.py` that translates all the current attributes of `TestData` to fixtures, with the following \"translation guide\":\r\n\r\n* `frame` -> `float_frame`\r\n* `frame2` -> `float_frame2`\r\n* `intframe` -> `int_frame`\r\n* `tsframe` -> `datetime_frame`\r\n* `mixed_frame` -> `float_string_frame`\r\n* `mixed_float` -> `mixed_float_frame`\r\n* `mixed_float2` -> `mixed_float_frame2`\r\n* `mixed_int` -> `mixed_int_frame`\r\n* `all_mixed` -> `mixed_type_frame`\r\n* `tzframe` -> `timezone_frame`\r\n* `empty` -> `empty_frame`\r\n* `ts1` -> `datetime_series`\r\n* `ts2` -> `datetime_series_short`\r\n* `simple` -> `simple_frame`\r\n\r\nNeed to incrementally replace their usages in `pandas/tests/frame/` (example below).\r\n\r\n- [x] Create `conftest.py` and translate `TestData`-attributes into fixtures (#22236)\r\n- [x] `test_alter_axes.py` (#22236)\r\n- [x] `test_analytics.py` (#22733)\r\n- [x] `test_api.py` (#22738)\r\n- [x] `test_apply.py` (#22735)\r\n- [x] `test_arithmetic.py` (#22736)\r\n- [x] `test_asof.py` (#25628)\r\n- [x] `test_axis_select_reindex.py` (#25627)\r\n- [x] `test_block_internals.py` (#22926)\r\n- [x] `test_combine_concat.py` (#25634)\r\n- [ ] `test_constructors.py` (#25635)\r\n- [ ] `test_convert_to.py`\r\n- [ ] `test_dtypes.py` (#25636)\r\n- [x] `test_duplicates.py`\r\n- [x] `test_indexing.py` (#25633)\r\n- [x] `test_join.py` (#25639)\r\n- [x] `test_missing.py` (#25640)\r\n- [x] `test_mutate_columns.py` (#25642)\r\n- [ ] `test_nonunique_indexes.py`\r\n- [x] `test_operators.py` (#25641)\r\n- [ ] `test_period.py`\r\n- [ ] `test_quantile.py`\r\n- [ ] `test_query_eval.py`\r\n- [ ] `test_rank.py`\r\n- [ ] `test_replace.py`\r\n- [ ] `test_repr_info.py`\r\n- [ ] `test_reshape.py`\r\n- [ ] `test_sort_values_level_as_str.py`\r\n- [ ] `test_sorting.py`\r\n- [ ] `test_subclass.py`\r\n- [ ] `test_timeseries.py`\r\n- [ ] `test_timezones.py`\r\n- [ ] `test_to_csv.py`\r\n- [ ] `test_validate.py`\r\n\r\nThings for follow-ups:\r\n- Remove other class-based test-methods\r\n- Turn tests from class- to function-based\r\n\r\nAn example from #22236 - before:\r\n```\r\ndef test_set_columns(self):\r\n cols = Index(np.arange(len(self.mixed_frame.columns)))\r\n self.mixed_frame.columns = cols\r\n with tm.assert_raises_regex(ValueError, 'Length mismatch'):\r\n self.mixed_frame.columns = cols[::2]\r\n```\r\nAfter:\r\n```\r\ndef test_set_columns(self, float_string_frame):\r\n cols = Index(np.arange(len(float_string_frame.columns)))\r\n float_string_frame.columns = cols\r\n with tm.assert_raises_regex(ValueError, 'Length mismatch'):\r\n float_string_frame.columns = cols[::2]\r\n```\r\n\r\nBasically, it comes down to replacing all the occurrences of `self.` with `translation_guide[]` (and specifying`` as a parameter to the function).\r\n\r\nPS. Note that some fixtures added by #22236 have now been removed by #24885. Please check #24885 which code was removed, in case you should need it for the fixturisation. 
Alternatively, you can ping me, @jbrockmendel or @jreback.", "code": null, "pr_html_url": "https://github.com/pandas-dev/pandas/pull/29226", "commit_html_url": null, "file_loc": {"base_commit": "953757a3e37ffb80570a20a8eca52dae35fc27bb", "files": [{"path": "pandas/tests/frame/common.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1, 3, 5, 6, 8, 9, 11, 12, 13, 15, 17, 18, 21, 22, 23, 24, 26, 27, 28, 30, 31, 32, 33, 35, 36, 37, 39, 40, 41, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 102, 103, 104, 106, 107, 108, 110, 111, 112, 114, 115, 116, 118, 121, 122]}}}, {"path": "pandas/tests/frame/test_indexing.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [28]}, "('TestDataFrameIndexing', None, 39)": {"mod": [39]}, "('TestDataFrameIndexing', 'test_setitem_fancy_mixed_2d', 1166)": {"mod": [1170, 1171]}, "('TestDataFrameIndexingDatetimeWithTZ', None, 3405)": {"mod": [3405]}, "('TestDataFrameIndexingUInt64', None, 3464)": {"mod": [3464]}}}, {"path": "pandas/tests/frame/test_query_eval.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [12]}, "('TestDataFrameQueryNumExprPython', 'setup_class', 703)": {"mod": [707]}, "('TestDataFrameQueryPythonPandas', 'setup_class', 807)": {"mod": [811]}, "('TestDataFrameQueryPythonPython', 'setup_class', 827)": {"mod": [830]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["pandas/tests/frame/common.py"], "doc": [], "test": ["pandas/tests/frame/test_indexing.py", "pandas/tests/frame/test_query_eval.py"], "config": [], "asset": []}}, {"organization": "Significant-Gravitas", "repo_name": "AutoGPT", "base_commit": "98efd264560983ed1d383222e3d5d22ed87169be", "iss_has_pr": 1, "iss_html_url": "https://github.com/Significant-Gravitas/AutoGPT/issues/75", "iss_label": "API access", "title": "API Rate Limit Reached with new key", "body": "I just create a new key and it's failing to run:\r\n```\r\nContinue (y/n): y\r\nError: API Rate Limit Reached. Waiting 10 seconds...\r\nError: API Rate Limit Reached. Waiting 10 seconds...\r\nError: API Rate Limit Reached. Waiting 10 seconds...\r\nError: API Rate Limit Reached. Waiting 10 seconds...\r\nError: API Rate Limit Reached. Waiting 10 seconds...\r\nError: API Rate Limit Reached. Waiting 10 seconds...\r\nError: API Rate Limit Reached. Waiting 10 seconds...\r\nError: API Rate Limit Reached. 
Waiting 10 seconds...\r\n```\r\n\r\n![image](https://user-images.githubusercontent.com/1215497/229545231-7b463bc9-4630-45d5-a8cc-41df10e4e4be.png)\r\n", "pr_html_url": "https://github.com/Significant-Gravitas/AutoGPT/pull/1304", "file_loc": {"base_commit": "98efd264560983ed1d383222e3d5d22ed87169be", "files": [{"path": "README.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [147], "mod": [108]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []}}, {"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "6a03ad082492268d60fa23ba5f3dcebd1630593e", "iss_has_pr": 1, "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/317", "iss_label": "enhancement", "title": "Support for ChatGLM", "body": "**Description**\r\n\r\n[ChatGLM-6B](https://github.com/THUDM/ChatGLM-6B)\r\n\r\nA Chinese chat AI based on GLM was released by THU.\r\n", "pr_html_url": "https://github.com/oobabooga/text-generation-webui/pull/1256", "file_loc": {"base_commit": "6a03ad082492268d60fa23ba5f3dcebd1630593e", "files": [{"path": "README.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [221]}}}, {"path": "download-model.py", "status": "modified", "Loc": {"(None, 'get_download_links_from_huggingface', 82)": {"mod": [111]}}}, {"path": "models/config.yaml", "status": "modified", "Loc": {"(None, None, None)": {"add": [47]}}}, {"path": "modules/chat.py", "status": "modified", "Loc": {"(None, 'generate_chat_prompt', 21)": {"mod": [52, 63]}}}, {"path": "modules/models.py", "status": "modified", "Loc": {"(None, 'load_model', 41)": {"add": [46, 122], "mod": [50, 82, 159, 168, 188]}, "(None, None, None)": {"mod": [13, 14]}}}, {"path": "modules/shared.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [115, 164]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["modules/shared.py", "modules/chat.py", "download-model.py", "modules/models.py"], "doc": ["README.md"], "test": [], "config": ["models/config.yaml"], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "130601e076ec5ca8298b95c3d02122ac5d8cf8eb", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/2372", "iss_label": "Bug\nModerate", "title": "StratifiedKFold should do its best to preserve the dataset dependency structure", "body": "As highlighted in this [notebook](http://nbviewer.ipython.org/urls/raw.github.com/ogrisel/notebooks/master/Non%2520IID%2520cross-validation.ipynb) the current implementation of `StratifiedKFold` (which is used by default by `cross_val_score` and `GridSearchCV` for classification problems) breaks the dependency structure of the dataset by computing the folds based on the sorted labels.\n\nInstead one should probably do an implementation that performs individual dependency preserving KFold on for each possible label value and aggregate the folds to get the `StratifiedKFold` final folds.\n\nThis might incur a refactoring to get rid of the `_BaseKFold` base class. 
It might also make it easier to implement a `shuffle=True` option for `StratifiedKFold`.\n", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/2463", "file_loc": {"base_commit": "130601e076ec5ca8298b95c3d02122ac5d8cf8eb", "files": [{"path": "doc/modules/cross_validation.rst", "status": "modified", "Loc": {"(None, None, None)": {"mod": [108, 109, 115, 122, 123, 124, 125, 200, 201, 205, 206, 209, 210]}}}, {"path": "doc/tutorial/statistical_inference/model_selection.rst", "status": "modified", "Loc": {"(None, None, None)": {"mod": [146, 148, 149, 150, 151, 166, 167]}}}, {"path": "doc/whats_new.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [46, 2290], "mod": [784]}}}, {"path": "sklearn/cross_validation.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [11]}, "('StratifiedKFold', '__init__', 375)": {"add": [385], "mod": [378, 379]}, "('StratifiedKFold', None, 335)": {"mod": [388, 389, 390, 391, 392]}}}, {"path": "sklearn/feature_selection/tests/test_rfe.py", "status": "modified", "Loc": {"(None, 'test_rfecv', 64)": {"add": [78], "mod": [72, 80, 85, 86, 87, 90, 96, 97, 101, 106, 107]}}}, {"path": "sklearn/tests/test_cross_validation.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [2, 24, 93], "mod": [152]}, "(None, 'test_kfold_valueerrors', 95)": {"add": [112], "mod": [103, 104]}, "(None, 'test_kfold_indices', 127)": {"mod": [130, 131, 132, 133, 134, 135, 137, 138]}, "(None, 'test_shuffle_kfold', 153)": {"mod": [156, 157, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 174, 175]}, "(None, 'test_cross_val_score_with_score_func_classification', 376)": {"mod": [382, 388, 394, 399]}, "(None, 'test_permutation_score', 429)": {"mod": [453, 473, 480]}}}, {"path": "sklearn/tests/test_naive_bayes.py", "status": "modified", "Loc": {"(None, 'test_check_accuracy_on_digits', 330)": {"mod": [332, 333, 341, 344, 348, 351, 355]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["sklearn/cross_validation.py"], "doc": ["doc/modules/cross_validation.rst", "doc/tutorial/statistical_inference/model_selection.rst", "doc/whats_new.rst"], "test": ["sklearn/tests/test_naive_bayes.py", "sklearn/feature_selection/tests/test_rfe.py", "sklearn/tests/test_cross_validation.py"], "config": [], "asset": []}}, {"organization": "Significant-Gravitas", "repo_name": "AutoGPT", "base_commit": "6ff8478118935b72c35f3ec1b31e74f2a1aa2e90", "iss_has_pr": 1, "iss_html_url": "https://github.com/Significant-Gravitas/AutoGPT/issues/528", "iss_label": "enhancement\ngood first issue\npotential plugin\nStale", "title": "Auto-GPT System Awareness", "body": "### System Awareness\r\n\r\n- [X] I have searched the existing issues\r\n\r\n### Summary \ud83d\udca1\r\n\r\nBefore going out to look at the internet \r\nIt would be helpful if upon activation the AI took inventory of the system it was on and shared the available tools and capabilities\r\nand if they were insufficient begin researching and developing GAP tools to use during the session with the expressed request to push the GAP tools via PR back to the community\r\n\r\n### Examples \ud83c\udf08\r\n\r\nAI System initializing\r\n- MacOS \r\n- Python3\r\n- Pip\r\n- Shell Commands available...\r\n- Desktop App skills available...\r\n\r\nWhat are your goals?\r\n\r\n### Motivation \ud83d\udd26\r\n\r\nusuability ", "pr_html_url": 
"https://github.com/Significant-Gravitas/AutoGPT/pull/4548", "file_loc": {"base_commit": "6ff8478118935b72c35f3ec1b31e74f2a1aa2e90", "files": [{"path": ".github/PULL_REQUEST_TEMPLATE.md", "status": "modified", "Loc": {"(None, None, None)": {"mod": [44]}}}, {"path": ".github/workflows/ci.yml", "status": "modified", "Loc": {"(None, None, None)": {"mod": [72]}}}, {"path": ".pre-commit-config.yaml", "status": "modified", "Loc": {"(None, None, None)": {"mod": [34]}}}, {"path": "autogpt/plugins.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [3, 5]}, "(None, 'scan_plugins', 203)": {"add": [219]}}}, {"path": "scripts/install_plugin_deps.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [4, 32]}, "(None, 'install_plugin_dependencies', 8)": {"add": [18]}}}, {"path": "tests/integration/test_plugins.py", "status": "modified", "Loc": {"('MockConfig', 'mock_config_openai_plugin', 37)": {"mod": [42]}, "('MockConfig', 'mock_config_generic_plugin', 59)": {"mod": [63]}, "(None, 'test_scan_plugins_generic', 68)": {"mod": [71]}}}, {"path": "tests/integration/test_web_selenium.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0, 4, 6]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["autogpt/plugins.py", "scripts/install_plugin_deps.py"], "doc": [".github/PULL_REQUEST_TEMPLATE.md"], "test": ["tests/integration/test_web_selenium.py", "tests/integration/test_plugins.py"], "config": [".github/workflows/ci.yml", ".pre-commit-config.yaml"], "asset": []}}, {"organization": "All-Hands-AI", "repo_name": "OpenHands", "base_commit": "707ab7b3f84fb5664ff63da0b52e7b0d2e4df545", "iss_has_pr": 1, "iss_html_url": "https://github.com/All-Hands-AI/OpenHands/issues/908", "iss_label": "bug", "title": "Agent stuck in the \"starting task\" step--Unsupported Protocol", "body": "\r\n#### Describe the bug\r\n\r\nI asked the agent to build a calculator, but it didn't give me any response, just stuck in the starting step.\r\n\r\n#### Setup and configuration\r\n**Current version**:\r\n\r\n```bash\r\ncommit e9121b78fed0b5ef36718ca0bf59588c0b094b86 (HEAD -> main)\r\nAuthor: Xingyao Wang \r\nDate: Sun Apr 7 16:07:59 2024 +0800\r\n```\r\n\r\n\r\n**My config.toml and environment vars** (be sure to redact API keys):\r\n```toml\r\nLLM_MODEL=\"gpt-3.5-turbo-1106\"\r\nLLM_API_KEY=\"already set, and have test in python script, which works\"\r\nLLM_EMBEDDING_MODEL=\"openai\"\r\nWORKSPACE_DIR=\"./workspace\"\r\n```\r\n\r\n**My model and agent** (you can see these settings in the UI):\r\n* Model: PlannerAgent\r\n* Agent: gpt-3.5-turbo-1106\r\n\r\n**Commands I ran to install and run OpenDevin**:\r\n```\r\nmake build \r\nmake run\r\n```\r\n\r\n**Steps to Reproduce**:\r\nrun the commands, input: build a calculator with python\r\n\r\n**Logs, error messages, and screenshots**:\r\n![image](https://github.com/OpenDevin/OpenDevin/assets/86202027/b078b7b0-446d-4e9d-bc83-ff3c270d9512)\r\nbackend:\r\n```\r\nINFO: 127.0.0.1:34564 - \"GET /litellm-agents HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:34572 - \"GET /messages/total HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:34584 - \"DELETE /messages HTTP/1.1\" 200 OK\r\n\r\n\r\n==============\r\nSTEP 0\r\n\r\nPLAN:\r\nbuild a calculator with python\r\n\r\nINFO:\r\nHINT:\r\nYou're not currently working on any tasks. 
Your next action MUST be to mark a task as in_progress.\r\n```\r\n\r\nfrontend:\r\n```\r\n22:35:39 - opendevin:INFO: sandbox.py:117 - Using workspace directory: /mnt/d/OpenDevin/workspace\r\n22:35:39 - opendevin:INFO: sandbox.py:257 - Container stopped\r\n22:35:39 - opendevin:INFO: sandbox.py:277 - Container started\r\n22:37:54 - opendevin:INFO: sandbox.py:117 - Using workspace directory: /mnt/d/OpenDevin/workspace\r\n```\r\n\r\nllm prompt_001:\r\n```\r\n[{'content': '\\n# Task\\nYou\\'re a diligent software engineer AI. You can\\'t see, draw, or interact with a\\nbrowser, but you can read and write files, and you can run commands, and you can think.\\n\\nYou\\'ve been given the following task:\\n\\nbuild a calculator with python\\n\\n## Plan\\nAs you complete this task, you\\'re building a plan and keeping\\ntrack of your progress. Here\\'s a JSON representation of your plan:\\n\\n{\\n \"id\": \"0\",\\n \"goal\": \"build a calculator with python\",\\n \"state\": \"open\",\\n \"subtasks\": []\\n}\\n\\n\\nYou\\'re not currently working on any tasks. Your next action MUST be to mark a task as in_progress.\\n\\nYou\\'re responsible for managing this plan and the status of tasks in\\nit, by using the `add_task` and `modify_task` actions described below.\\n\\nIf the History below contradicts the state of any of these tasks, you\\nMUST modify the task using the `modify_task` action described below.\\n\\nBe sure NOT to duplicate any tasks. Do NOT use the `add_task` action for\\na task that\\'s already represented. Every task must be represented only once.\\n\\nTasks that are sequential MUST be siblings. They must be added in order\\nto their parent task.\\n\\nIf you mark a task as \\'completed\\', \\'verified\\', or \\'abandoned\\',\\nall non-abandoned subtasks will be marked the same way.\\nSo before closing a task this way, you MUST not only be sure that it has\\nbeen completed successfully--you must ALSO be sure that all its subtasks\\nare ready to be marked the same way.\\n\\nIf, and only if, ALL tasks have already been marked verified,\\nyou MUST respond with the `finish` action.\\n\\n## History\\nHere is a recent history of actions you\\'ve taken in service of this plan,\\nas well as observations you\\'ve made. This only includes the MOST RECENT\\nten actions--more happened before that.\\n\\n[]\\n\\n\\nYour most recent action is at the bottom of that history.\\n\\n## Action\\nWhat is your next thought or action? Your response must be in JSON format.\\n\\nIt must be an object, and it must contain two fields:\\n* `action`, which is one of the actions below\\n* `args`, which is a map of key-value pairs, specifying the arguments for that action\\n\\n* `read` - reads the content of a file. Arguments:\\n * `path` - the path of the file to read\\n* `write` - writes the content to a file. Arguments:\\n * `path` - the path of the file to write\\n * `content` - the content to write to the file\\n* `run` - runs a command on the command line in a Linux shell. Arguments:\\n * `command` - the command to run\\n * `background` - if true, run the command in the background, so that other commands can be run concurrently. Useful for e.g. starting a server. You won\\'t be able to see the logs. You don\\'t need to end the command with `&`, just set this to true.\\n* `kill` - kills a background command\\n * `id` - the ID of the background command to kill\\n* `browse` - opens a web page. Arguments:\\n * `url` - the URL to open\\n* `think` - make a plan, set a goal, or record your thoughts. 
Arguments:\\n * `thought` - the thought to record\\n* `add_task` - add a task to your plan. Arguments:\\n * `parent` - the ID of the parent task\\n * `goal` - the goal of the task\\n * `subtasks` - a list of subtasks, each of which is a map with a `goal` key.\\n* `modify_task` - close a task. Arguments:\\n * `id` - the ID of the task to close\\n * `state` - set to \\'in_progress\\' to start the task, \\'completed\\' to finish it, \\'verified\\' to assert that it was successful, \\'abandoned\\' to give up on it permanently, or `open` to stop working on it for now.\\n* `finish` - if ALL of your tasks and subtasks have been verified or abandoned, and you\\'re absolutely certain that you\\'ve completed your task and have tested your work, use the finish action to stop working.\\n\\nYou MUST take time to think in between read, write, run, browse, and recall actions.\\nYou should never act twice in a row without thinking. But if your last several\\nactions are all `think` actions, you should consider taking a different action.\\n\\nWhat is your next thought or action? Again, you must reply with JSON, and only with JSON.\\n\\nYou\\'re not currently working on any tasks. Your next action MUST be to mark a task as in_progress.\\n', 'role': 'user'}]\r\n```\r\n\r\nllm response is empty\r\n\r\n#### Additional Context\r\nI also tried to use gpt-4 and got the same result.\r\n", "pr_html_url": "https://github.com/All-Hands-AI/OpenHands/pull/960", "file_loc": {"base_commit": "707ab7b3f84fb5664ff63da0b52e7b0d2e4df545", "files": [{"path": "opendevin/config.py", "status": "modified", "Loc": {"(None, 'get_all', 78)": {"mod": [78, 82]}}}, {"path": "opendevin/server/agent/manager.py", "status": "modified", "Loc": {"('AgentManager', 'create_controller', 93)": {"mod": [107, 108, 109, 110, 111]}}}, {"path": "opendevin/server/listen.py", "status": "modified", "Loc": {"(None, 'read_default_model', 114)": {"mod": [115]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": ""}, "loctype": {"code": ["opendevin/config.py", "opendevin/server/agent/manager.py", "opendevin/server/listen.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ageitgey", "repo_name": "face_recognition", "base_commit": "59f4d299b6ae3232a1d8fe5d5d9652bffa17a728", "iss_html_url": "https://github.com/ageitgey/face_recognition/issues/809", "iss_label": "", "title": "facerec_from_webcam_multiprocessing.py run Global is not defined", "body": "* face_recognition version: 1.23\r\n* Python version: 3.6.6\r\n* Operating System: windows 10\r\n\r\n### Description\r\n![image](https://user-images.githubusercontent.com/2375460/56864177-e74b0900-69f1-11e9-9cad-d44cc8ca9d3d.png)\r\n\r\n\r\n### What I Did\r\n\r\n```\r\nfacerec_from_webcam_multiprocessing.py run Global is not defined. 
pls fix it, thanks\r\n```\r\n", "code": null, "pr_html_url": "https://github.com/ageitgey/face_recognition/pull/905", "commit_html_url": null, "file_loc": {"base_commit": "59f4d299b6ae3232a1d8fe5d5d9652bffa17a728", "files": [{"path": "examples/facerec_from_webcam_multiprocessing.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [5, 113], "mod": [3, 125, 130, 131, 154, 189]}, "(None, 'next_id', 17)": {"mod": [17]}, "(None, 'prev_id', 25)": {"mod": [25]}, "(None, 'capture', 33)": {"mod": [33, 43, 47]}, "(None, 'process', 56)": {"mod": [56, 62, 72, 109]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["examples/facerec_from_webcam_multiprocessing.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "dc86509b44b3fb0cd9a1a6d6ed564b082dc50848", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/26139", "iss_label": "Docs\nIO HDF5", "title": "Doc for HDFStore compression unclear on what the default value of None does", "body": "The doc for the `HDFStore` class mentions:\r\n\r\n``` \r\ncomplevel : int, 0-9, default None\r\n Specifies a compression level for data.\r\n A value of 0 disables compression.\r\n```\r\n\r\nThat doesn't actually answer the question of what compression level is used when the default (None) is used, though. Is None translated further down to 0? it turns out yes, but you have to dig in the code to actually figure that out. And it could as well have been translated eventually to any other value.\r\n\r\nTwo options:\r\n1. Actually change the default in the `complevel` argument to be \"0\". (It's an immutable object, so it's fine as a default value for a function argument.)\r\n2. Just adjust the doc in some way.\r\n\r\nWhen the right solution is decided, I can do a pull request with it. Thanks!", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/26158", "file_loc": {"base_commit": "dc86509b44b3fb0cd9a1a6d6ed564b082dc50848", "files": [{"path": "pandas/io/pytables.py", "status": "modified", "Loc": {"('HDFStore', None, 401)": {"mod": [425]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": ""}, "loctype": {"code": ["pandas/io/pytables.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "nvbn", "repo_name": "thefuck", "base_commit": "0949d2e77022ad69cc07d4b25a858a7e023503ac", "iss_has_pr": 1, "iss_html_url": "https://github.com/nvbn/thefuck/issues/1207", "iss_label": "", "title": "git push upstream branch does not exist, wrong command recommended first", "body": "\r\n\r\n\r\n\r\nRecently I noticed a change in a `thefuck` behavior that I use very regularly which I wanted to call out as what I think is an unwanted change. This was introduced very recently, I believe with the 3.31 release. When using `git push` on a git repository where the branch does not exist in the upstream repository, `git` responds with a specific command one should run to create the upstream branch. Prior to version 3.31, `thefuck` seemed to recognize this and made the first suggested Corrected Command was the one `git` recommended. 
As of version 3.31, `thefuck` instead puts a generic `git push --no-verify` command first, and the one `git` recommended is instead the second result.\r\n\r\nIn this case where `git` recommends a specific command, `git push --no-verify` doesn't actually help or do what the user wants; you need the `git push --set-upstream origin branch-name` command which `thefuck` now arrives at second. Because of the inconvenience for this particular case, combined with the fact that the first option recommended by `thefuck` isn't functionally valid, the prior behavior is more correct for this particular case.\r\n\r\nBelow is all the debug information requested in the issue template:\r\n\r\nThe output of `thefuck --version` (something like `The Fuck 3.1 using Python\r\n3.5.0 and Bash 4.4.12(1)-release`):\r\n\r\n The Fuck 3.31 using Python 3.9.5 and ZSH 5.8\r\n\r\nYour system (Debian 7, ArchLinux, Windows, etc.):\r\n\r\n Arch Linux\r\n\r\nHow to reproduce the bug:\r\n\r\n - In a git repo, create a branch which does not exist in the upstream repository\r\n - Attempt to push the branch with `git push`\r\n - You should see an error message saying \"fatal: The current branch branch-name has no upstream branch. To push the current branch and set the remote as upstream, use git push --set-upstream origin branch-name\"\r\n - invoke `thefuck`\r\n - Prior to 3.31, `thefuck` would present as the first option the exact command which git tells you to use (git push --set-upstream origin branch-name).\r\n - As of 3.31, `thefuck` instead presents as the first option a more generic `git push --no-verify`, and git's recommended command is the second result.\r\n\r\nThe output of The Fuck with `THEFUCK_DEBUG=true` exported (typically execute `export THEFUCK_DEBUG=true` in your shell before The Fuck):\r\n\r\nhttps://pastebin.com/qpyEcreC\r\n\r\nIf the bug only appears with a specific application, the output of that application and its version:\r\n\r\n git version 2.32.0\r\n\r\nAnything else you think is relevant:\r\n\r\n N/A\r\n\r\n\r\n", "pr_html_url": "https://github.com/nvbn/thefuck/pull/1208", "file_loc": {"base_commit": "0949d2e77022ad69cc07d4b25a858a7e023503ac", "files": [{"path": "thefuck/rules/git_hook_bypass.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [26]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["thefuck/rules/git_hook_bypass.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "b8a43011e75da4353b0d5ef314c96cb1276f12f0", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/3893", "iss_label": "", "title": "[Bug] 1.7.1 not support 1.6.0 script", "body": "Hello All,\r\n\r\nMy spider is created by scrapy 1.6.0.\r\nThese days, the scrapy updated to 1.7.1, and we found that it cannot support the code build by 1.6.0.\r\n\r\nHere is the error:\r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/bin/scrapy\", line 6, in \r\n from scrapy.cmdline import execute\r\n File \"/usr/lib64/python2.7/site-packages/scrapy/cmdline.py\", line 10, in \r\n from scrapy.crawler import CrawlerProcess\r\n File \"/usr/lib64/python2.7/site-packages/scrapy/crawler.py\", line 11, in \r\n from scrapy.core.engine import ExecutionEngine\r\n File \"/usr/lib64/python2.7/site-packages/scrapy/core/engine.py\", line 14, in \r\n from scrapy.core.scraper import 
Scraper\r\n File \"/usr/lib64/python2.7/site-packages/scrapy/core/scraper.py\", line 18, in \r\n from scrapy.core.spidermw import SpiderMiddlewareManager\r\n File \"/usr/lib64/python2.7/site-packages/scrapy/core/spidermw.py\", line 13, in \r\n from scrapy.utils.conf import build_component_list\r\n File \"/usr/lib64/python2.7/site-packages/scrapy/utils/conf.py\", line 4, in \r\n import configparser\r\nImportError: No module named configparser\r\n```\r\n\r\nWould you please take time to check the issue?\r\n\r\nAppreciate for your help in advance.\r\n\r\nThank you.", "pr_html_url": "https://github.com/scrapy/scrapy/pull/3896", "file_loc": {"base_commit": "b8a43011e75da4353b0d5ef314c96cb1276f12f0", "files": [{"path": "scrapy/utils/conf.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [7], "mod": [4]}, "(None, 'get_config', 94)": {"mod": [97]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": ""}, "loctype": {"code": ["scrapy/utils/conf.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "All-Hands-AI", "repo_name": "OpenHands", "base_commit": "454e9613b0b4c7a9dbb2b8273aff0b36c4d8a2bb", "iss_has_pr": 1, "iss_html_url": "https://github.com/All-Hands-AI/OpenHands/issues/1276", "iss_label": "bug", "title": "[Bug]: Browsing is not working", "body": "### Is there an existing issue for the same bug?\r\n\r\n- [X] I have checked the troubleshooting document at https://github.com/OpenDevin/OpenDevin/blob/main/docs/guides/Troubleshooting.md\r\n- [X] I have checked the existing issues.\r\n\r\n### Describe the bug\r\n\r\nWhen I ask a question that requires browsing the web to get the answer, OpenDevin does not use the \"browsing\" tab.\r\n\r\nFor instance, I asked\r\n```\r\nPlease resolve this pull request: https://github.com/OpenDevin/OpenDevin/issues/1275\r\n```\r\n\r\nIn trying to resolve the pull request, OpenDevin tried to install Playwright to browse the web instead of using built-in browsing capability, and the browsing tab said \"no screenshot available\".\r\n\r\n### Current Version\r\n\r\n```bash\r\n`ghcr.io/opendevin/opendevin:0.3.1`\r\n```\r\n\r\n\r\n### Installation and Configuration\r\n\r\n```bash\r\nexport LLM_API_KEY=\"sk-...\"\r\nexport WORKSPACE_DIR=$(pwd)/workspace\r\n```\r\n\r\n\r\n### Model and Agent\r\n\r\n_No response_\r\n\r\n### Reproduction Steps\r\n\r\n1. Ask OpenDevin: `Please resolve this pull request: https://github.com/OpenDevin/OpenDevin/issues/1275`\r\n\r\n### Logs, Errors, Screenshots, and Additional Context\r\n\r\nIt seems that this is a relevant error:\r\n\r\n```\r\nSTEP 2\r\n\r\n23:12:20 - opendevin:INFO: agent_controller.py:89\r\nPLAN\r\nResolve this pull request: https://github.com/OpenDevin/OpenDevin/issues/1275\r\n23:12:26 - opendevin:INFO: agent_controller.py:107\r\nACTION\r\nBrowseURLAction(url='https://github.com/OpenDevin/OpenDevin/pull/1275', action=)\r\nhuggingface/tokenizers: The current process just got forked, after parallelism has already been used. 
Disabling parallelism to avoid deadlocks...\r\nTo disable this warning, you can either:\r\n - Avoid using `tokenizers` before the fork if possible\r\n - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\r\n23:12:27 - opendevin:INFO: agent_controller.py:160\r\nOBSERVATION\r\nBrowserType.launch: Executable doesn't exist at /root/.cache/ms-playwright/chromium-1112/chrome-linux/chrome\r\n\u2554\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2557\r\n\u2551 Looks like Playwright was just installed or updated. \u2551\r\n\u2551 Please run the following command to download new browsers: \u2551\r\n\u2551 \u2551\r\n\u2551 playwright install \u2551\r\n\u2551 \u2551\r\n\u2551 <3 Playwright Team \u2551\r\n\u255a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u255d\r\n```", "pr_html_url": "https://github.com/All-Hands-AI/OpenHands/pull/1184", "file_loc": {"base_commit": "454e9613b0b4c7a9dbb2b8273aff0b36c4d8a2bb", "files": [{"path": "containers/app/Dockerfile", "status": "modified", "Loc": {"(None, None, None)": {"add": [50]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": [], "doc": [], "test": [], "config": ["containers/app/Dockerfile"], "asset": []}}, {"organization": "fastapi", "repo_name": "fastapi", "base_commit": "92c825be6a7362099400c9c3fe8b01ea13add3dc", "iss_has_pr": 1, "iss_html_url": "https://github.com/fastapi/fastapi/issues/8", "iss_label": "feature\nanswered\nreviewed", "title": "Nesting FastAPI instances doesn't work very well", "body": "Do this:\r\n\r\n main_app = FastAPI()\r\n sub_api = FastAPI()\r\n\r\n ...\r\n main_app.router.routes.append(Mount('/subapi', app=sub_api))\r\n\r\n`sub_api` will correctly serve ever `/subapi` -- docs, methods, all that. However, the docs will still look for `/openapi.json` (absolute link) when trying to load the openapi spec. 
Additionally, the spec will not be adjusted to have the correct links, relative to where the module is mounted.\r\n\r\nPerhaps this is a corner use case, but a lot of apps might have different collections of routes mounted in different subpaths.", "pr_html_url": "https://github.com/fastapi/fastapi/pull/26", "file_loc": {"base_commit": "92c825be6a7362099400c9c3fe8b01ea13add3dc", "files": [{"path": "fastapi/applications.py", "status": "modified", "Loc": {"('FastAPI', '__init__', 20)": {"add": [27, 45]}, "('FastAPI', 'openapi', 61)": {"add": [68]}, "('FastAPI', 'setup', 72)": {"mod": [83, 91]}}}, {"path": "fastapi/openapi/utils.py", "status": "modified", "Loc": {"(None, 'get_openapi', 212)": {"mod": [218, 237]}}}, {"path": "mkdocs.yml", "status": "modified", "Loc": {"(None, None, None)": {"add": [59]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["fastapi/openapi/utils.py", "fastapi/applications.py"], "doc": ["mkdocs.yml"], "test": [], "config": [], "asset": []}}, {"organization": "localstack", "repo_name": "localstack", "base_commit": "9f1b9dbf60f406e8d6205402b8ac078195cd0c01", "iss_has_pr": 1, "iss_html_url": "https://github.com/localstack/localstack/issues/4517", "iss_label": "type: bug\nstatus: triage needed\naws:cloudformation\naws:iam", "title": "bug: AWS::NoValue produces error when used in IAM policy template", "body": "### Is there an existing issue for this?\r\n\r\n- [x] I have searched the existing issues\r\n\r\n### Current Behavior\r\n\r\nWhen I try to create a role with S3 resource and I use `!Ref AWS::NoValue` for its resource, it fails with errors. It is supposed to be removed from array entry, but it looks like it evaluates as `__aws_no_value__`, which then fails to validate because the value is not in acceptable format for ARN.\r\n(Message: `Resource __aws_no_value__ must be in ARN format or \"*\".`)\r\n\r\ntemplate file `test.template` :\r\n```\r\nAWSTemplateFormatVersion: 2010-09-09\r\n\r\nConditions:\r\n someCondition: false\r\n\r\nResources:\r\n SomeRole:\r\n Type: AWS::IAM::Role\r\n Properties:\r\n RoleName: SomeRole\r\n AssumeRolePolicyDocument:\r\n Version: 2012-10-17\r\n Statement:\r\n - Effect: Allow\r\n Principal:\r\n Service:\r\n - lambda.amazonaws.com\r\n Action:\r\n - sts:AssumeRole\r\n Policies:\r\n - PolicyName: SomePolicy\r\n PolicyDocument:\r\n Version: 2012-10-17\r\n Statement:\r\n - Effect: Allow\r\n Action:\r\n - s3:GetObject\r\n - s3:GetObjectVersion\r\n Resource:\r\n - arn:aws:s3:::some-prefix-*/*\r\n - !If\r\n - someCondition\r\n - !Ref arn:aws:s3:::another-prefix-*/*\r\n - !Ref AWS::NoValue\r\n```\r\nExecuted command:\r\n```\r\nawslocal cloudformation deploy \\\r\n --no-fail-on-empty-changeset \\\r\n --capabilities CAPABILITY_NAMED_IAM \\\r\n --template-file test.template \\\r\n --stack-name \"test-stack\"\r\n```\r\nError log produced:\r\n```\r\n2021-08-30T07:38:54:DEBUG:localstack.utils.cloudformation.template_deployer: Error applying changes for CloudFormation stack \"test-resources-iam\": An error occurred (MalformedPolicyDocument) when calling the PutRolePolicy operation: Resource __aws_no_value__ must be in ARN format or \"*\". 
Traceback (most recent call last):\r\n File \"/opt/code/localstack/localstack/utils/cloudformation/template_deployer.py\", line 2083, in _run\r\n self.do_apply_changes_in_loop(changes, stack, stack_name)\r\n File \"/opt/code/localstack/localstack/utils/cloudformation/template_deployer.py\", line 2154, in do_apply_changes_in_loop\r\n self.apply_change(change, stack, new_resources, stack_name=stack_name)\r\n File \"/opt/code/localstack/localstack/utils/cloudformation/template_deployer.py\", line 2218, in apply_change\r\n result = deploy_resource(resource_id, new_resources, stack_name)\r\n File \"/opt/code/localstack/localstack/utils/cloudformation/template_deployer.py\", line 1037, in deploy_resource\r\n return execute_resource_action(resource_id, resources, stack_name, ACTION_CREATE)\r\n File \"/opt/code/localstack/localstack/utils/cloudformation/template_deployer.py\", line 1152, in execute_resource_action\r\n resource_id, resources, resource_type, func, stack_name, action_name\r\n File \"/opt/code/localstack/localstack/utils/cloudformation/template_deployer.py\", line 1314, in configure_resource_via_sdk\r\n run_post_create_actions(action_name, resource_id, resources, resource_type, stack_name, result)\r\n File \"/opt/code/localstack/localstack/utils/cloudformation/template_deployer.py\", line 1414, in run_post_create_actions\r\n PolicyDocument=doc,\r\n File \"/opt/code/localstack/.venv/lib/python3.7/site-packages/botocore/client.py\", line 386, in _api_call\r\n return self._make_api_call(operation_name, kwargs)\r\n File \"/opt/code/localstack/.venv/lib/python3.7/site-packages/botocore/client.py\", line 705, in _make_api_call\r\n raise error_class(parsed_response, operation_name)\r\nbotocore.errorfactory.MalformedPolicyDocumentException: An error occurred (MalformedPolicyDocument) when calling the PutRolePolicy operation: Resource __aws_no_value__ must be in ARN format or \"*\".\r\n```\r\n\r\n### Expected Behavior\r\n\r\nCreate stack without failing\r\n\r\n### How are you starting LocalStack?\r\n\r\nWith the `localstack` script\r\n\r\n### Steps To Reproduce\r\n\r\n#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)\r\n\r\n```\r\nFORCE_NONINTERACTIVE=1 \\\r\nSERVICES=iam,s3,lambda,cloudformation \\\r\nlocalstack infra start &\r\n```\r\n\r\n#### Client commands (e.g., AWS SDK code snippet, or sequence of \"awslocal\" commands)\r\n\r\n```\r\nawslocal cloudformation deploy \\\r\n --no-fail-on-empty-changeset \\\r\n --capabilities CAPABILITY_NAMED_IAM \\\r\n --template-file test.template \\\r\n --stack-name \"test-stack\"\r\n```\r\n\r\n\r\n### Environment\r\n\r\n```markdown\r\n- OS: Ubuntu 20.04\r\n- LocalStack: latest\r\n```\r\n\r\n\r\n### Anything else?\r\n\r\n_No response_", "pr_html_url": "https://github.com/localstack/localstack/pull/6760", "file_loc": {"base_commit": "9f1b9dbf60f406e8d6205402b8ac078195cd0c01", "files": [{"path": "localstack/services/cloudformation/models/cloudwatch.py", "status": "modified", "Loc": {"('CloudWatchAlarm', None, 6)": {"add": [11]}}}, {"path": "localstack/services/cloudformation/models/iam.py", "status": "modified", "Loc": {"('IAMRole', '_post_create', 278)": {"add": [314]}, "('IAMManagedPolicy', '_create', 46)": {"mod": [51]}}}, {"path": "tests/integration/cloudformation/test_cloudformation_iam.py", "status": "modified", "Loc": {"(None, 'test_iam_user_access_key', 156)": {"add": [174]}, "(None, None, None)": {"mod": [4, 8, 10, 13, 14, 15, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27]}, "(None, 
'test_delete_role_detaches_role_policy', 18)": {"mod": [29, 30, 31, 32, 33, 34, 36, 37, 38, 39, 40, 42, 43, 45, 46, 47, 48, 50, 51, 52, 53, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 74, 75, 76, 78, 79, 80]}, "(None, 'test_policy_attachments', 83)": {"mod": [110]}}}, {"path": "tests/integration/cloudformation/test_cloudformation_iam.snapshot.json", "status": "modified", "Loc": {"(None, None, None)": {"add": [19]}}}, {"path": "tests/integration/templates/iam_policy_attachments.yaml", "status": "modified", "Loc": {"(None, None, None)": {"add": [4], "mod": [18]}}}, {"path": "tests/integration/templates/iam_role_policy.yaml", "status": "modified", "Loc": {"(None, None, None)": {"add": [0], "mod": [12, 19, 20]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["localstack/services/cloudformation/models/cloudwatch.py", "tests/integration/cloudformation/test_cloudformation_iam.snapshot.json", "localstack/services/cloudformation/models/iam.py"], "doc": [], "test": ["tests/integration/cloudformation/test_cloudformation_iam.py"], "config": ["tests/integration/templates/iam_role_policy.yaml", "tests/integration/templates/iam_policy_attachments.yaml"], "asset": []}}, {"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "c4c1203fc07b2e23c3e5a5e9277266a711ab9466", "iss_has_pr": 1, "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/117", "iss_label": "bug", "title": "GPT Engineer will not save individual files when given specs that result in many files.", "body": "The generated code goes into the logfile however it would be more useful if the tool could make all those files automatically.", "pr_html_url": "https://github.com/AntonOsika/gpt-engineer/pull/120", "file_loc": {"base_commit": "c4c1203fc07b2e23c3e5a5e9277266a711ab9466", "files": [{"path": "gpt_engineer/chat_to_files.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [4]}, "(None, 'parse_chat', 6)": {"add": [11], "mod": [6, 7, 8, 10, 13, 14, 15, 16, 17, 18, 19]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["gpt_engineer/chat_to_files.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "40e623b2768598e36c4f367bd166b36fffceb3f6", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/6177", "iss_label": "enhancement\ndocs", "title": "Switch to the latest sphinx", "body": "The docs fail to build with the current Sphinx (7.2.6):\r\n\r\n```\r\nreading sources... 
[ 48%] topics/downloader-middleware\r\nExtension error (scrapydocs):\r\nHandler for event 'doctree-read' threw an exception (exception: Next node is not a target)\r\n```\r\n\r\nSo we should update deps in docs/requirements.txt, fix this (and maybe others) problem and make sure the docs are built correctly.", "pr_html_url": "https://github.com/scrapy/scrapy/pull/6200", "file_loc": {"base_commit": "40e623b2768598e36c4f367bd166b36fffceb3f6", "files": [{"path": "docs/requirements.txt", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1, 2, 3, 4]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": [], "doc": ["docs/requirements.txt"], "test": [], "config": [], "asset": []}}, {"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "0958cc333ee13d1ce5216ae0bdeaa53b5eacc6ea", "iss_has_pr": 1, "iss_html_url": "https://github.com/geekan/MetaGPT/issues/1059", "iss_label": "bug", "title": "ReadTimeout when using local LLM", "body": "**Bug description**\r\nWhen hosting the following model; https://huggingface.co/oobabooga/CodeBooga-34B-v0.1 locally using LMStudio 0.2.14 on Linux Mint 21.3 Cinnamon I am sometimes (usually after several iterations when the context gets large) confronted with a ReadTimeout.\r\n\r\nMetaGPT main branch, commit id: adb42f4, it reports version: 0.7.4 with pip show metagpt. Used Python 3.9.18.\r\n\r\nI used the following code to try out MetaGPT\r\n```\r\nimport asyncio\r\nfrom metagpt.roles.di.data_interpreter import DataInterpreter\r\n\r\nasync def main(requirement: str = \"\"):\r\n di = DataInterpreter()\r\n await di.run(requirement)\r\n\r\nif __name__ == \"__main__\":\r\n requirement = \"Create a dnd 5th edition graph displaying xp per level based on information from a reputable source determined by Googling. First write results in a CSV and validate the CSV contains multiple records. If the file does not contain records, determine if you can fix the code or whether you need to look at another source. 
After the CSV files is filled with records, create the graph based on this.\"\r\n\r\n asyncio.run(main(requirement))\r\n```\r\nI got the below exception\r\n```\r\nTraceback (most recent call last):\r\n File \"metagpt/lib/python3.9/site-packages/httpx/_transports/default.py\", line 69, in map_httpcore_exceptions\r\n yield\r\n File \"metagpt/lib/python3.9/site-packages/httpx/_transports/default.py\", line 254, in __aiter__\r\n async for part in self._httpcore_stream:\r\n File \"metagpt/lib/python3.9/site-packages/httpcore/_async/connection_pool.py\", line 367, in __aiter__\r\n raise exc from None\r\n File \"metagpt/lib/python3.9/site-packages/httpcore/_async/connection_pool.py\", line 363, in __aiter__\r\n async for part in self._stream:\r\n File \"metagpt/lib/python3.9/site-packages/httpcore/_async/http11.py\", line 349, in __aiter__\r\n raise exc\r\n File \"metagpt/lib/python3.9/site-packages/httpcore/_async/http11.py\", line 341, in __aiter__\r\n async for chunk in self._connection._receive_response_body(**kwargs):\r\n File \"metagpt/lib/python3.9/site-packages/httpcore/_async/http11.py\", line 210, in _receive_response_body\r\n event = await self._receive_event(timeout=timeout)\r\n File \"metagpt/lib/python3.9/site-packages/httpcore/_async/http11.py\", line 224, in _receive_event\r\n data = await self._network_stream.read(\r\n File \"metagpt/lib/python3.9/site-packages/httpcore/_backends/anyio.py\", line 36, in read\r\n return b\"\"\r\n File \"3.9.18/lib/python3.9/contextlib.py\", line 137, in __exit__\r\n self.gen.throw(typ, value, traceback)\r\n File \"metagpt/lib/python3.9/site-packages/httpcore/_exceptions.py\", line 14, in map_exceptions\r\n raise to_exc(exc) from exc\r\nhttpcore.ReadTimeout\r\n```\r\n**Bug solved method**\r\n\r\nIt would be nice if the timeout and retries are configurable to avoid this issue (for example like AutoGen does this in the LLM API configuration). N.b. I've tried larger local models in the past (for which disk swapping was required due to memory constraints). Those models can sometimes take more than an hour to respond. 
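(Editor's note: a minimal sketch of the configurable read timeout being requested here, assuming an httpx-backed provider; the endpoint URL and the 600-second value are placeholders, not MetaGPT's actual configuration.)

```python
import httpx

# Placeholder values: fail fast on connect, but give a slow local model
# ten minutes to finish producing a long completion.
timeout = httpx.Timeout(600.0, connect=10.0)

with httpx.Client(timeout=timeout) as client:
    resp = client.post(
        "http://localhost:1234/v1/chat/completions",  # hypothetical LM Studio endpoint
        json={"model": "local", "messages": [{"role": "user", "content": "hi"}]},
    )
    print(resp.status_code)
```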
The model for which this bug is registered can fit in my CPU RAM (64Gb).", "pr_html_url": "https://github.com/geekan/MetaGPT/pull/1060", "file_loc": {"base_commit": "0958cc333ee13d1ce5216ae0bdeaa53b5eacc6ea", "files": [{"path": "config/config2.example.yaml", "status": "modified", "Loc": {"(None, None, None)": {"add": [6]}}}, {"path": "metagpt/actions/action_node.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [19]}, "('ActionNode', '_aask_v1', 411)": {"mod": [419]}, "('ActionNode', None, 122)": {"mod": [451]}, "('ActionNode', 'fill', 468)": {"mod": [476]}}}, {"path": "metagpt/configs/llm_config.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [12]}, "('LLMConfig', 'check_llm_key', 87)": {"add": [90]}, "('LLMConfig', None, 38)": {"mod": [77]}}}, {"path": "metagpt/const.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [134], "mod": [126]}}}, {"path": "metagpt/provider/anthropic_api.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [7]}, "('AnthropicLLM', None, 14)": {"mod": [44, 49, 50, 52]}}}, {"path": "metagpt/provider/base_llm.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [25]}, "('BaseLLM', 'with_model', 257)": {"add": [260]}, "('BaseLLM', 'aask', 127)": {"mod": [133, 149]}, "('BaseLLM', None, 32)": {"mod": [155, 165, 169, 173, 184, 194]}, "('BaseLLM', 'aask_batch', 155)": {"mod": [161]}, "('BaseLLM', 'acompletion_text', 194)": {"mod": [197, 198]}}}, {"path": "metagpt/provider/dashscope_api.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [27]}, "('DashScopeLLM', None, 152)": {"mod": [205, 211, 212, 214]}}}, {"path": "metagpt/provider/general_api_base.py", "status": "modified", "Loc": {"('APIRequestor', 'arequest_raw', 556)": {"mod": [576]}}}, {"path": "metagpt/provider/google_gemini_api.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [18]}, "('GeminiLLM', None, 41)": {"mod": [126, 132, 133, 135]}}}, {"path": "metagpt/provider/human_provider.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [8]}, "('HumanProvider', None, 13)": {"mod": [21, 38, 41, 45, 48]}, "('HumanProvider', 'aask', 28)": {"mod": [34, 36]}}}, {"path": "metagpt/provider/ollama_api.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [8]}, "('OllamaLLM', None, 17)": {"mod": [53, 65, 66, 68]}, "('OllamaLLM', '_achat_completion', 53)": {"mod": [58]}, "('OllamaLLM', '_achat_completion_stream', 68)": {"mod": [74]}}}, {"path": "metagpt/provider/openai_api.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [27]}, "('OpenAILLM', None, 43)": {"mod": [77, 107, 121, 122, 127, 128, 137, 154]}, "('OpenAILLM', '_achat_completion_stream', 77)": {"mod": [79]}, "('OpenAILLM', '_cons_kwargs', 107)": {"mod": [115]}, "('OpenAILLM', 'acompletion_text', 137)": {"mod": [142]}, "('OpenAILLM', '_achat_completion_function', 145)": {"mod": [146, 149]}}}, {"path": "metagpt/provider/qianfan_api.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [11]}, "('QianFanLLM', None, 23)": {"mod": [110, 115, 116, 118]}}}, {"path": "metagpt/provider/spark_api.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [19]}, "('SparkLLM', None, 26)": {"mod": [34, 37, 43, 46]}}}, {"path": "metagpt/provider/zhipuai_api.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [10]}, "('ZhiPuAILLM', None, 26)": {"mod": [48, 54, 60, 61, 63]}}}, {"path": "requirements.txt", "status": "modified", "Loc": {"(None, None, None)": {"mod": [37]}}}]}, "own_code_loc": [], "ass_file_loc": [], 
"other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["metagpt/configs/llm_config.py", "metagpt/provider/zhipuai_api.py", "metagpt/provider/ollama_api.py", "metagpt/provider/spark_api.py", "metagpt/provider/anthropic_api.py", "metagpt/provider/qianfan_api.py", "metagpt/actions/action_node.py", "metagpt/provider/openai_api.py", "metagpt/provider/google_gemini_api.py", "metagpt/provider/dashscope_api.py", "metagpt/provider/base_llm.py", "metagpt/provider/general_api_base.py", "metagpt/const.py", "metagpt/provider/human_provider.py"], "doc": [], "test": [], "config": ["config/config2.example.yaml", "requirements.txt"], "asset": []}}, {"organization": "Textualize", "repo_name": "rich", "base_commit": "b5f0b743a7f50c72199eb792cd6e70730b60651f", "iss_has_pr": 1, "iss_html_url": "https://github.com/Textualize/rich/issues/2047", "iss_label": "Needs triage", "title": "[BUG] printing -\\n- in rich.progress context manager will kill the jupyter.", "body": "try this code in the jupyter notebook:\r\n\r\n```python\r\nfrom rich.progress import Progress\r\nwith Progress() as progress:\r\n print(\"-\\n-\")\r\nprint(\"finished\")\r\n```\r\nand it will show a popup message displaying that the kernel has died.\r\nI have tested it on google colab and mint.\r\n\r\nalso, I have installed rich using\r\n```\r\npip install rich[jupyter]\r\n```", "pr_html_url": "https://github.com/Textualize/rich/pull/2209", "file_loc": {"base_commit": "b5f0b743a7f50c72199eb792cd6e70730b60651f", "files": [{"path": "CHANGELOG.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [18]}}}, {"path": "rich/file_proxy.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [2]}, "('FileProxy', 'flush', 50)": {"mod": [51, 52, 53, 54]}}}, {"path": "tests/test_file_proxy.py", "status": "modified", "Loc": {"(None, 'test_flush', 20)": {"add": [27]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["rich/file_proxy.py"], "doc": ["CHANGELOG.md"], "test": ["tests/test_file_proxy.py"], "config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "439c19596a248a31cd1aa8220f54a622a0322160", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/3689", "iss_label": "", "title": "using sparse matrix in fit_params", "body": "When the value of a fit_params is sparse matrix, it will raise error from the following code.\nsklearn/cross_validation.py\n\n```\n1224 if hasattr(v, '__len__') and len(v) == n_samples else v)\n1225 for k, v in fit_params.items()])\n```\n\nIt is because the `__len__` of sparse matrix is defined as\nscipy/sparse/base.py\n\n```\n190 def __len__(self):\n191 # return self.getnnz()\n192 raise TypeError(\"sparse matrix length is ambiguous; use getnnz()\"\n193 \" or shape[0]\")\n```\n\nIs there anyway to circumpass this issue. 
I do not want to convert the sparse matrix into a dense one, since it will consume a big memory.\n", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/4049", "file_loc": {"base_commit": "439c19596a248a31cd1aa8220f54a622a0322160", "files": [{"path": "sklearn/cross_validation.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [1073]}, "(None, '_fit_and_predict', 1150)": {"mod": [1186, 1188, 1189, 1190]}, "(None, '_fit_and_score', 1305)": {"mod": [1379, 1381, 1382, 1383, 1384, 1385]}}}, {"path": "sklearn/tests/test_cross_validation.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [1108]}, "(None, 'assert_fit_params', 595)": {"mod": [596]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "3", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["sklearn/cross_validation.py"], "doc": [], "test": ["sklearn/tests/test_cross_validation.py"], "config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "adc1e590d4dc1e230b49a4c10b4cd7b672bb3d69", "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/9174", "iss_label": "Bug\nhelp wanted", "title": "SVC and OneVsOneClassifier decision_function inconsistent on sub-sample", "body": "Hi,\r\n\r\nI'm seeing inconsistent numerical results with SVC's decision_function.\r\nWhen estimated over an entire batch of samples ( (n_samples, n_features) matrix ) compared to analyzing sample-by-sample, the results are not the same.\r\nThis is true for both the individual numerical values per sample and the overall distribution of the results.\r\n\r\n**The model is SVC with RBF kernel, for a 3-class classification:**\r\n```\r\nSVC(C=1.0, gamma=0.007, class_weight = new_class_weight, probability = True, random_state = 30, \r\ndecision_function_shape = 'ovr')\r\n```\r\n\r\n**The models are loaded from file:**\r\n\r\n`ML = joblib.load(\"model.pkl\")`\r\n\r\n**Option A, analyze a matrix:**\r\n\r\n`distances = ML.decision_function(X)`\r\n\r\n**Option B, analyze individual samples:** \r\n```\r\ndistances = numpy.zeros([X.shape[0], 3])\r\nfor i in range(X.shape[0]): \r\n distances[i,:]` = ML.decision_function(X[i,:].reshape(1,-1))\r\n```\r\n\r\n**Output for first two samples:**\r\n**Option A:**\r\nsample 1: [ 0.90835588, -0.17305875, 2.26470288]\r\nsample 2: [ 1.10437313, -0.2371539 , 2.13278077]\r\n\r\n**Option B:**\r\nsample 1: [ 0.82689247, -0.32689247, 2.5 ]\r\nsample 2: [ 1.22005359, -0.5 , 2.27994641]\r\n\r\nI couldn't find any indication for this behavior in the documentation.\r\n\r\nWindows-10-10.0.15063-SP0\r\nPython 3.5.2 |Anaconda 4.2.0 (64-bit)| (default, Jul 5 2016, 11:41:13) [MSC v.1900 64 bit (AMD64)]\r\nNumPy 1.12.1\r\nSciPy 0.18.1\r\nScikit-Learn 0.18.1\r\n\r\nThanks!\r\n\r\n\r\n", "code": null, "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/10440", "commit_html_url": null, "file_loc": {"base_commit": "adc1e590d4dc1e230b49a4c10b4cd7b672bb3d69", "files": [{"path": "doc/modules/multiclass.rst", "status": "modified", "Loc": {"(None, None, 230)": {"mod": [230]}}}, {"path": "doc/modules/svm.rst", "status": "modified", "Loc": {"(None, None, 116)": {"mod": [116]}, "(None, None, 118)": {"mod": [118]}}}, {"path": "doc/whats_new/v0.21.rst", "status": "modified", "Loc": {"(None, None, 26)": {"add": [26]}, "(None, None, 353)": {"add": [353]}}}, {"path": "sklearn/svm/base.py", "status": "modified", "Loc": {"('BaseSVC', 'decision_function', 
527)": {"add": [549]}}}, {"path": "sklearn/utils/estimator_checks.py", "status": "modified", "Loc": {"(None, 'check_methods_subset_invariance', 815)": {"mod": [839, 840]}}}, {"path": "sklearn/utils/multiclass.py", "status": "modified", "Loc": {"(None, '_ovr_decision_function', 402)": {"mod": [434, 435, 437, 438, 440, 444, 445, 446, 447]}}}, {"path": "sklearn/utils/tests/test_multiclass.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [18, 25]}, "(None, 'test_safe_split_with_precomputed_kernel', 361)": {"add": [380]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["sklearn/utils/multiclass.py", "sklearn/utils/estimator_checks.py", "sklearn/svm/base.py"], "doc": ["doc/whats_new/v0.21.rst", "doc/modules/multiclass.rst", "doc/modules/svm.rst"], "test": ["sklearn/utils/tests/test_multiclass.py"], "config": [], "asset": []}}, {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "da90449edfa13b5be1550b3acc212dbf3a8c6e69", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/1064", "iss_label": "", "title": "allow spiders to return dicts instead of Items", "body": "In many cases the requirement to define and yield Items from a spider is an unnecessary complication. \n\nAn example from Scrapy tutorial:\n\n```\nimport scrapy\n\nclass DmozItem(scrapy.Item):\n title = scrapy.Field()\n link = scrapy.Field()\n desc = scrapy.Field()\n\nclass DmozSpider(scrapy.Spider):\n name = \"dmoz\"\n allowed_domains = [\"dmoz.org\"]\n start_urls = [\n \"http://www.dmoz.org/Computers/Programming/Languages/Python/Books/\",\n \"http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/\"\n ]\n\n def parse(self, response):\n for sel in response.xpath('//ul/li'):\n item = DmozItem()\n item['title'] = sel.xpath('a/text()').extract()\n item['link'] = sel.xpath('a/@href').extract()\n item['desc'] = sel.xpath('text()').extract()\n yield item\n```\n\nIt can be made simpler with dicts instead of Items:\n\n```\nimport scrapy\n\nclass DmozSpider(scrapy.Spider):\n name = \"dmoz\"\n allowed_domains = [\"dmoz.org\"]\n start_urls = [\n \"http://www.dmoz.org/Computers/Programming/Languages/Python/Books/\",\n \"http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/\"\n ]\n\n def parse(self, response):\n for sel in response.xpath('//ul/li'):\n yield {\n 'title': sel.xpath('a/text()').extract(),\n 'link': sel.xpath('a/@href').extract(),\n 'desc': sel.xpath('text()').extract(),\n }\n```\n\nThe version with dicts gives a developer less concepts to learn, and it is easier to explain.\n\nWhen field metadata is not used and data is exported to JSON/XML yielding Python dicts should be enough. 
Even when you export to CSV dicts could be enough - columns can be set explicitly by an user.\n\nThis should also prevent tickets like https://github.com/scrapy/scrapy/issues/968.\n", "pr_html_url": "https://github.com/scrapy/scrapy/pull/1081", "file_loc": {"base_commit": "da90449edfa13b5be1550b3acc212dbf3a8c6e69", "files": [{"path": "docs/index.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [61, 86], "mod": [59, 75, 76]}}}, {"path": "docs/topics/architecture.rst", "status": "modified", "Loc": {"(None, None, None)": {"mod": [105, 108]}}}, {"path": "docs/topics/exporters.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [199, 205], "mod": [10, 93, 94, 95, 170, 171]}}}, {"path": "docs/topics/images.rst", "status": "modified", "Loc": {"(None, None, None)": {"mod": [66, 67, 68]}}}, {"path": "docs/topics/item-pipeline.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [137], "mod": [11, 12, 31, 32, 36, 158, 159]}}}, {"path": "docs/topics/items.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [13], "mod": [11, 12, 16, 67]}}}, {"path": "docs/topics/practices.rst", "status": "modified", "Loc": {"(None, None, None)": {"mod": [187, 189, 190, 192, 193, 194, 196, 199, 201, 202, 204]}}}, {"path": "docs/topics/signals.rst", "status": "modified", "Loc": {"(None, None, None)": {"mod": [74, 94]}}}, {"path": "docs/topics/spider-middleware.rst", "status": "modified", "Loc": {"(None, None, None)": {"mod": [93, 100, 101, 113]}}}, {"path": "docs/topics/spiders.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [284, 290], "mod": [27, 28, 44, 46, 47, 49, 50, 51, 52, 54, 55, 57, 59, 61, 63, 64, 66, 67, 68, 69, 71, 72, 74, 76, 77, 79, 80, 81, 82, 84, 85, 87, 89, 90, 91, 92, 98, 99, 106, 107, 201, 202, 203, 204, 234, 251, 252, 271, 274]}}}, {"path": "scrapy/commands/parse.py", "status": "modified", "Loc": {"('Command', 'run_callback', 106)": {"mod": [110]}}}, {"path": "scrapy/contracts/default.py", "status": "modified", "Loc": {"('ReturnsContract', None, 21)": {"mod": [38, 39]}, "('ScrapesContract', 'post_process', 84)": {"mod": [86]}}}, {"path": "scrapy/contrib/exporter/__init__.py", "status": "modified", "Loc": {"('BaseItemExporter', '_get_serialized_fields', 52)": {"mod": [53, 54, 59, 67, 68, 69, 72]}, "('CsvItemExporter', '_write_headers_and_set_fields_to_export', 191)": {"mod": [194]}}}, {"path": "scrapy/contrib/pipeline/files.py", "status": "modified", "Loc": {"('FilesPipeline', 'item_completed', 269)": {"mod": [270]}}}, {"path": "scrapy/contrib/pipeline/images.py", "status": "modified", "Loc": {"('ImagesPipeline', 'item_completed', 111)": {"mod": [112]}}}, {"path": "scrapy/core/scraper.py", "status": "modified", "Loc": {"('Scraper', '_process_spidermw_output', 171)": {"mod": [177, 186]}}}, {"path": "tests/spiders.py", "status": "modified", "Loc": {"('ItemSpider', 'parse', 84)": {"add": [87]}}}, {"path": "tests/test_commands.py", "status": "modified", "Loc": {"('RunSpiderCommandTest', 'test_runspider', 132)": {"add": [137], "mod": [139, 141]}, "(None, None, None)": {"add": [241]}, "('ParseCommandTest', 'setUp', 188)": {"mod": [195, 196, 198, 204]}}}, {"path": "tests/test_contracts.py", "status": "modified", "Loc": {"('TestSpider', None, 25)": {"add": [41, 48, 56, 64]}, "('ContractsManagerTest', 'test_returns', 104)": {"add": [112]}, "('ContractsManagerTest', None, 72)": {"add": [122]}, "('ContractsManagerTest', 'test_scrapes', 123)": {"add": [131, 136]}}}, {"path": "tests/test_contrib_exporter.py", "status": "modified", "Loc": 
{"('BaseItemExporterTest', None, 18)": {"add": [45], "mod": [36]}, "('XmlItemExporterTest', None, 196)": {"add": [213]}, "(None, None, None)": {"add": [327], "mod": [1, 5, 9, 10, 11]}, "('BaseItemExporterTest', 'test_export_item', 36)": {"mod": [39]}, "('BaseItemExporterTest', 'test_serialize_field', 46)": {"mod": [47, 48, 49, 50]}, "('PythonItemExporterTest', 'test_nested_item', 79)": {"mod": [81]}, "('CsvItemExporterTest', None, 140)": {"mod": [153, 154, 155]}, "('CsvItemExporterTest', 'test_header', 153)": {"mod": [157, 158, 159, 161, 162, 163, 164, 165, 166, 168, 169, 170, 171, 172, 173, 174, 176, 177, 178, 179, 181]}, "('CsvItemExporterTest', 'test_join_multivalue', 183)": {"mod": [188, 189, 190, 191, 192, 193, 194]}, "('XmlItemExporterTest', 'test_multivalued_fields', 218)": {"mod": [219, 220, 221, 222, 223, 224, 225, 226]}, "('XmlItemExporterTest', 'test_nested_item', 228)": {"mod": [229, 231, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248]}, "('XmlItemExporterTest', 'test_nested_list_item', 250)": {"mod": [251, 253, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267]}, "('JsonLinesItemExporterTest', 'test_nested_item', 281)": {"mod": [283]}, "('JsonItemExporterTest', None, 298)": {"mod": [309]}, "('JsonItemExporterTest', 'test_two_items', 309)": {"mod": [311, 312, 315]}, "('CustomItemExporter', 'serialize_field', 332)": {"mod": [336, 337]}, "('CustomItemExporterTest', 'test_exporter_custom_serializer', 330)": {"mod": [342, 343, 344, 345]}}}, {"path": "tests/test_engine.py", "status": "modified", "Loc": {"('TestSpider', None, 36)": {"add": [43]}, "(None, None, None)": {"add": [67]}, "('TestSpider', 'parse_item', 51)": {"mod": [52]}, "('CrawlerRun', None, 81)": {"mod": [84]}, "('CrawlerRun', '__init__', 84)": {"mod": [91, 92]}, "('EngineTest', 'test_crawler', 154)": {"mod": [155, 156, 157, 158, 159, 160, 161, 162]}}}, {"path": "tests/test_pipeline_files.py", "status": "modified", "Loc": {"('FilesPipelineTestCaseFields', 'test_item_fields_default', 144)": {"mod": [145, 150, 151, 152, 153, 154, 155, 156, 157]}, "('FilesPipelineTestCaseFields', 'test_item_fields_override_settings', 159)": {"mod": [160, 165, 166, 167, 168, 169, 170, 171, 172, 173]}}}, {"path": "tests/test_pipeline_images.py", "status": "modified", "Loc": {"('ImagesPipelineTestCaseFields', 'test_item_fields_default', 170)": {"mod": [171, 176, 177, 178, 179, 180, 181, 182, 183]}, "('ImagesPipelineTestCaseFields', 'test_item_fields_override_settings', 185)": {"mod": [186, 191, 192, 193, 194, 195, 196, 197, 198, 199]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["scrapy/core/scraper.py", "scrapy/commands/parse.py", "scrapy/contracts/default.py", "scrapy/contrib/exporter/__init__.py", "tests/spiders.py", "scrapy/contrib/pipeline/images.py", "scrapy/contrib/pipeline/files.py"], "doc": ["docs/topics/practices.rst", "docs/topics/signals.rst", "docs/topics/spiders.rst", "docs/topics/architecture.rst", "docs/topics/items.rst", "docs/index.rst", "docs/topics/exporters.rst", "docs/topics/item-pipeline.rst", "docs/topics/images.rst", "docs/topics/spider-middleware.rst"], "test": ["tests/test_contracts.py", "tests/test_engine.py", "tests/test_commands.py", "tests/test_pipeline_files.py", "tests/test_pipeline_images.py", "tests/test_contrib_exporter.py"], "config": [], "asset": []}}, {"organization": "scikit-learn", 
"repo_name": "scikit-learn", "base_commit": "effd75dda5f4afa61f988035ff8fe4b3a447464e", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/10059", "iss_label": "", "title": "Duplicated input points silently create duplicated clusters in KMeans", "body": "#### Description\r\nWhen there are duplicated input points to Kmeans resulting to number of unique points < number of requested clusters, there is no error thrown. Instead, clustering continues to (seemingly) produce the number of clusters requested, but some of them are exactly the same, so the cluster labels produced for the input points do not go all the way to number of requested clusters.\r\n\r\n#### Steps/Code to Reproduce\r\n```python\r\nfrom sklearn.cluster import KMeans\r\nimport numpy as np\r\n\r\n# some input points here are identical, so that n_total=17, n_unique=9\r\nx2d = np.array([(1086, 348), (1087, 347), (1190, 244), (1190, 244), (1086, 348), (1185, 249), (1193, 241), (1185, 249), (1087, 347), (1188, 247), (1187, 233), (26, 111), (26, 111), (26, 110), (26, 110), (26, 110), (26, 110)])\r\nkmeans = KMeans(n_clusters=10) # n_clusters > n_unique\r\nc_labels = kmeans.fit_predict(x2d)\r\nc_centers = kmeans.cluster_centers_\r\n```\r\n#### Expected Results\r\nEither an error thrown, or the cluster labels produced should match the unique clusters only (i.e. no identical cluster centres)\r\n\r\n#### Actual Results\r\n```python\r\n>>> c_labels # note there's no entry for cluster 9\r\narray([7, 2, 6, 6, 7, 5, 4, 5, 2, 1, 3, 8, 8, 0, 0, 0, 0], dtype=int32)\r\n>>> c_centers # two of these 10 clusters have identical centers, so only 9 of them are unique\r\narray([[ 26., 110.],\r\n [ 1188., 247.],\r\n [ 1087., 347.],\r\n [ 1187., 233.],\r\n [ 1193., 241.],\r\n [ 1185., 249.],\r\n [ 1190., 244.],\r\n [ 1086., 348.],\r\n [ 26., 111.],\r\n [ 26., 110.]]) \r\n```\r\n\r\n#### Versions\r\n```python\r\nDarwin-16.7.0-x86_64-i386-64bit\r\nPython 3.6.1 |Continuum Analytics, Inc.| (default, May 11 2017, 13:04:09)\r\n[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.57)]\r\nNumPy 1.13.1\r\nSciPy 0.19.1\r\nScikit-Learn 0.18.2\r\n```", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/10099", "file_loc": {"base_commit": "effd75dda5f4afa61f988035ff8fe4b3a447464e", "files": [{"path": "doc/whats_new/v0.20.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [136]}}}, {"path": "sklearn/cluster/k_means_.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [34]}, "(None, 'k_means', 167)": {"add": [376]}}}, {"path": "sklearn/cluster/tests/test_k_means.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [17, 20]}, "(None, 'test_sparse_validate_centers', 855)": {"add": [869]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": "code"}, "loctype": {"code": ["sklearn/cluster/k_means_.py"], "doc": ["doc/whats_new/v0.20.rst"], "test": ["sklearn/cluster/tests/test_k_means.py"], "config": [], "asset": []}}, {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "0dad0fce72266aa7b38b536f87bab26e7f233c74", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/4477", "iss_label": "bug", "title": "is_generator_with_return_value raises IndentationError with a flush left doc string", "body": "### Description\r\n\r\nCode that is accepted by the python interpreter raises when fed through `textwrap.dedent`\r\n\r\n### Steps to 
Reproduce\r\n\r\n1. Create `is_generator_bug.py` with the content below (which I simplified from [the `is_generator_with_return_value` method body](https://github.com/scrapy/scrapy/blob/2.0.1/scrapy/utils/misc.py#L186-L187)\r\n2. Run `python is_generator_bug.py`\r\n3. Observe the kaboom\r\n\r\n```python\r\nimport ast\r\nimport inspect\r\nfrom textwrap import dedent\r\nclass Bob:\r\n def doit(self):\r\n \"\"\"\r\nthis line is flush left\r\n \"\"\"\r\n if True:\r\n yield 1234\r\n\r\nif __name__ == '__main__':\r\n b = Bob()\r\n c = b.doit\r\n if inspect.isgeneratorfunction(c):\r\n tree = ast.parse(dedent(inspect.getsource(c)))\r\n```\r\n\r\n**Expected behavior:** [What you expect to happen]\r\n\r\nNo Error\r\n\r\n**Actual behavior:** [What actually happens]\r\n\r\n```console\r\n$ python3.7 is_generator_bug.py\r\nTraceback (most recent call last):\r\n File \"is_generator_bug.py\", line 16, in \r\n tree = ast.parse(dedent(inspect.getsource(c)))\r\n File \"/usr/local/Cellar/python/3.7.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/ast.py\", line 35, in parse\r\n return compile(source, filename, mode, PyCF_ONLY_AST)\r\n File \"\", line 1\r\n def doit(self):\r\n ^\r\nIndentationError: unexpected indent\r\n```\r\n\r\n**Reproduces how often:** [What percentage of the time does it reproduce?]\r\n\r\n100%\r\n\r\n### Versions\r\n\r\n```\r\nScrapy : 2.0.1\r\nlxml : 4.5.0.0\r\nlibxml2 : 2.9.10\r\ncssselect : 1.1.0\r\nparsel : 1.5.2\r\nw3lib : 1.21.0\r\nTwisted : 20.3.0\r\nPython : 3.7.7 (default, Mar 11 2020, 23:30:22) - [Clang 10.0.0 (clang-1000.11.45.5)]\r\npyOpenSSL : 19.1.0 (OpenSSL 1.1.1d 10 Sep 2019)\r\ncryptography : 2.8\r\nPlatform : Darwin-17.7.0-x86_64-i386-64bit\r\n```\r\n\r\n### Additional context\r\n", "pr_html_url": "https://github.com/scrapy/scrapy/pull/4935", "file_loc": {"base_commit": "0dad0fce72266aa7b38b536f87bab26e7f233c74", "files": [{"path": "scrapy/utils/misc.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [12]}, "(None, 'is_generator_with_return_value', 217)": {"mod": [230]}, "(None, 'warn_on_generator_with_return_value', 240)": {"mod": [245, 247, 248, 249, 250, 251]}}}, {"path": "tests/test_utils_misc/test_return_with_argument_inside_generator.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [1], "mod": [3]}, "('UtilsMiscPy3TestCase', None, 6)": {"mod": [8, 9]}, "('UtilsMiscPy3TestCase', 'test_generators_with_return_statements', 8)": {"mod": [13, 17, 21, 25, 28, 32, 40, 41, 43, 44, 49, 50, 51, 52, 53, 54, 55, 56]}, "('UtilsMiscPy3TestCase', 'g', 13)": {"mod": [15]}, "('UtilsMiscPy3TestCase', 'k', 28)": {"mod": [30]}, "('UtilsMiscPy3TestCase', 'n', 40)": {"mod": [46, 47]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["scrapy/utils/misc.py"], "doc": [], "test": ["tests/test_utils_misc/test_return_with_argument_inside_generator.py"], "config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "e217b68fd00bb7c54b81a492ee6f9db6498517fa", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/18146", "iss_label": "Bug", "title": "Something goes wrong with KernelPCA with 32 bits input data", "body": "When given 32 bits input, KernelPCA succeed to transform the data into a 17-dimensional feature space while the original space was 3 features. 
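(Editor's note: the symptom is consistent with an eigenvalue cutoff tuned for float64 being applied to a float32 decomposition, so numerical-noise eigenvalues survive and inflate the component count. A plausible dtype-aware cleanup is sketched below; the threshold rule is illustrative, not scikit-learn's exact one.)

```python
import numpy as np

def zero_noise_eigenvalues(lambdas):
    """Zero out eigenvalues below the noise floor of their own dtype."""
    lambdas = np.asarray(lambdas)
    eps = np.finfo(lambdas.dtype).eps  # ~1e-7 for float32, ~2e-16 for float64
    cutoff = np.abs(lambdas).max() * eps * lambdas.size
    return np.where(np.abs(lambdas) < cutoff, 0.0, lambdas)
```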
I did not debug yet but this seems really unlikely.\r\n\r\n```python\r\n# %%\r\nfrom sklearn.datasets import make_blobs\r\nfrom sklearn.preprocessing import StandardScaler\r\n\r\nX, y = make_blobs(\r\n n_samples=30,\r\n centers=[[0, 0, 0], [1, 1, 1]],\r\n random_state=0,\r\n cluster_std=0.1\r\n)\r\nX = StandardScaler().fit_transform(X)\r\nX -= X.min()\r\n\r\n# %%\r\nimport numpy as np\r\nfrom sklearn.decomposition import KernelPCA\r\n\r\nkpca = KernelPCA()\r\nprint(kpca.fit_transform(X).shape)\r\nprint(kpca.fit_transform(X.astype(np.float32)).shape)\r\n```", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/18149", "file_loc": {"base_commit": "e217b68fd00bb7c54b81a492ee6f9db6498517fa", "files": [{"path": "doc/whats_new/v0.24.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [118], "mod": [25, 26]}}}, {"path": "sklearn/decomposition/tests/test_kernel_pca.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [12]}, "(None, 'test_kernel_pca_inverse_transform', 290)": {"add": [297]}}}, {"path": "sklearn/utils/validation.py", "status": "modified", "Loc": {"(None, '_check_psd_eigenvalues', 1093)": {"mod": [1186]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["sklearn/utils/validation.py"], "doc": ["doc/whats_new/v0.24.rst"], "test": ["sklearn/decomposition/tests/test_kernel_pca.py"], "config": [], "asset": []}}, {"organization": "localstack", "repo_name": "localstack", "base_commit": "6c8f52d42563c1207a8cb3fbbfccb6d4af2a0670", "iss_has_pr": 1, "iss_html_url": "https://github.com/localstack/localstack/issues/544", "iss_label": "priority: high", "title": "S3 object metadata not saved when uploaded with presigned url", "body": "Use case:\r\nI'm enabling users to directly upload to s3 using presigned url. S3 is configured to add event to SQS on Put. 
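(Editor's note: a boto3 equivalent of the presign-with-metadata flow described here, for comparison with the Node.js reproduction script further down; the bucket, key, and metadata mirror the issue's values, and the dummy credentials are placeholders.)

```python
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:4572",  # the LocalStack S3 endpoint used in the issue
    aws_access_key_id="test",
    aws_secret_access_key="test",
    region_name="us-west-1",
)

url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "bucketest", "Key": "test.txt", "Metadata": {"venue": "123"}},
    ExpiresIn=3600,
)
# After PUTting a body to `url`, head_object(Bucket="bucketest", Key="test.txt")
# should report Metadata: {'venue': '123'} once x-amz-meta-* params are honoured.
print(url)
```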
Queue consumer, reads the queue and makes HEAD requests with object keys to get the metadata and save information to database (generic image upload, so I know where to add file).\r\n\r\nTest script in node js - some ugly code here (to install deps run `npm install aws-sdk request`):\r\n```js\r\nconst AWS = require(\"aws-sdk\");\r\nconst request = require(\"request\");\r\n\r\nlet s3 = new AWS.S3({\r\n endpoint: \"http://localhost:4572\",\r\n s3ForcePathStyle: true,\r\n accessKeyId: \"\",\r\n secretAccessKey: \"\",\r\n region: \"us-west-1\"\r\n});\r\n\r\nvar bucket = \"bucketest\";\r\nvar key = \"test.txt\";\r\n\r\ns3.createBucket({Bucket: bucket}, function (err, data) {\r\n if (err) {\r\n console.error(err.message);\r\n // ignore, probably there is bucket already\r\n }\r\n\r\n var params = {\r\n Bucket: bucket,\r\n Key: key,\r\n Metadata: {\r\n venue: \"123\"\r\n }\r\n };\r\n\r\n s3.getSignedUrl('putObject', params, function (err, url) {\r\n if (err) {\r\n console.error('Presigning post data encountered an error', err);\r\n } else {\r\n console.log('==== URL: ', url);\r\n\r\n var body = new Buffer('Test data.');\r\n request.put({ url, body, method: \"PUT\" }, function(err, resp, body) {\r\n if (err) {\r\n console.log('======= error:', error); \r\n return;\r\n }\r\n\r\n console.log(body);\r\n\r\n s3.headObject({Bucket: bucket, Key: key}, function (err, data) {\r\n if (err) console.log(\"====== error1:\", err, err.stack); \r\n else console.log(\"==== HEAD RESPONSE\", data); \r\n });\r\n })\r\n }\r\n });\r\n});\r\n```\r\n\r\nOutput:\r\n```\r\n==== URL: http://localhost:4572/heaps-test/test.txt?AWSAccessKeyId=somekey&Expires=1515503310&Signature=TgK3B33p2kwCWs5F5KtaZ3fxgXA%3D&x-amz-meta-venue=123\r\n"56dd8a439abf97fda051f88f09f00d65"2018-01-09T12:53:30.637Z\r\n==== HEAD RESPONSE { LastModified: 2018-01-09T12:53:30.000Z,\r\n ContentLength: 10,\r\n ETag: '\"56dd8a439abf97fda051f88f09f00d65\"',\r\n ContentType: 'text/html; charset=utf-8',\r\n Metadata: {} }\r\n\r\n```\r\n\r\nExpected Output (tested with live AWS): \r\n```\r\n==== URL: https://heaps-test.s3.eu-west-1.amazonaws.com/test.txt?AWSAccessKeyId=somekey&Expires=1515503234&Signature=enc17C6glTsVtOiGobugz5NELIc%3D&x-amz-meta-venue=123\r\n\r\n==== HEAD RESPONSE { AcceptRanges: 'bytes',\r\n LastModified: 2018-01-09T12:52:15.000Z,\r\n ContentLength: 10,\r\n ETag: '\"56dd8a439abf97fda051f88f09f00d65\"',\r\n ContentType: 'binary/octet-stream',\r\n Metadata: { venue: '123' } }\r\n```\r\n\r\nAs you can see Metadata is empty when using localstack\r\n ", "pr_html_url": "https://github.com/localstack/localstack/pull/1745", "file_loc": {"base_commit": "6c8f52d42563c1207a8cb3fbbfccb6d4af2a0670", "files": [{"path": "localstack/services/s3/s3_listener.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [44, 308]}, "('ProxyListenerS3', 'forward_request', 514)": {"add": [563]}, "('ProxyListenerS3', 'return_response', 595)": {"mod": [665]}}}, {"path": "tests/integration/test_s3.py", "status": "modified", "Loc": {"('S3ListenerTest', None, 30)": {"add": [187]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["localstack/services/s3/s3_listener.py"], "doc": [], "test": ["tests/integration/test_s3.py"], "config": [], "asset": []}}, {"organization": "Significant-Gravitas", "repo_name": "AutoGPT", "base_commit": "a2723f16f2d5c748c382359c6ce5fdd1e53728d3", "iss_has_pr": 1, "iss_html_url": 
"https://github.com/Significant-Gravitas/AutoGPT/issues/1639", "iss_label": "function: process text", "title": "This model's maximum context length is 8191 tokens, however you requested 89686 tokens (89686 in your prompt)", "body": "### Duplicates\n\n- [X] I have searched the existing issues\n\n### Steps to reproduce \ud83d\udd79\n\nThe program is trying to process an absurd amount of information at once. It happens over and over again.\r\n\r\nAdding chunk 17 / 20 to memory\r\nSYSTEM: Command browse_website returned: Error: This model's maximum context length is 8191 tokens, however you requested 89686 tokens (89686 in your prompt;\r\n 0 for the completion). Please reduce your prompt; or completion length.\n\n### Current behavior \ud83d\ude2f\n\n_No response_\n\n### Expected behavior \ud83e\udd14\n\n_No response_\n\n### Your prompt \ud83d\udcdd\n\n```yaml\r\n# Paste your prompt here\r\n```\r\n", "pr_html_url": "https://github.com/Significant-Gravitas/AutoGPT/pull/2542", "file_loc": {"base_commit": "a2723f16f2d5c748c382359c6ce5fdd1e53728d3", "files": [{"path": ".env.template", "status": "modified", "Loc": {"(None, None, None)": {"add": [154], "mod": [10, 11]}}}, {"path": "autogpt/config/config.py", "status": "modified", "Loc": {"('Config', '__init__', 19)": {"mod": [34]}}}, {"path": "autogpt/processing/text.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [3, 5]}, "(None, 'summarize_text', 44)": {"add": [60, 78], "mod": [65, 77, 81, 85, 97]}, "(None, 'split_text', 14)": {"mod": [14, 27, 28, 31, 32, 33, 34, 36, 37, 38, 41]}}}, {"path": "requirements.txt", "status": "modified", "Loc": {"(None, None, None)": {"add": [22]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["autogpt/processing/text.py", "autogpt/config/config.py"], "doc": [], "test": [], "config": ["requirements.txt", ".env.template"], "asset": []}}, {"organization": "ansible", "repo_name": "ansible", "base_commit": "141d638e590897d4ec5371c4868f027dad95a38e", "iss_has_pr": 1, "iss_html_url": "https://github.com/ansible/ansible/issues/36691", "iss_label": "module\naffects_2.4\nsupport:core\ndocs", "title": "stat documentation: mime_type vs mimetype, mime output vs descriptive output", "body": "\r\n\r\n##### ISSUE TYPE\r\n - Documentation Report\r\n\r\n##### COMPONENT NAME\r\nstat\r\n\r\n##### ANSIBLE VERSION\r\n\r\n```\r\nansible 2.4.3.0\r\n config file = /etc/ansible/ansible.cfg\r\n configured module search path = [u'/home/oliver/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']\r\n ansible python module location = /usr/lib/python2.7/dist-packages/ansible\r\n executable location = /usr/bin/ansible\r\n python version = 2.7.12 (default, Dec 4 2017, 14:50:18) [GCC 5.4.0 20160609]\r\n```\r\n\r\n##### CONFIGURATION\r\nansible.cfg:\r\n```\r\n[defaults]\r\ninventory = ./hosts\r\nroles_path = ./roles\r\nremote_user = oliver\r\nnocows = 1\r\nvault_password_file = ./get_vault_password_from_keyring.py\r\n\r\ngathering = smart\r\nfact_caching = jsonfile\r\nfact_caching_timeout = 21600\r\nfact_caching_connection = ./cache\r\n\r\n[privilege_escalation]\r\nbecome = True\r\nbecome_method = sudo\r\n\r\n[ssh_connection]\r\npipelining = True\r\n```\r\n\r\n##### OS / ENVIRONMENT\r\nRunning Ansible on Ubuntu 16.04.4 x86_64; target is the same machine.\r\n\r\nTried with and without python-magic module installed, but didn't observe any different 
behaviour.\r\n\r\n##### SUMMARY\r\n\r\nDocumentation for stat module (http://docs.ansible.com/ansible/latest/stat_module.html) mentions that a \"mime_type\" entry will be set if get_mime is set to true, with example content being \"PDF document, version 1.2\". I couldn't get this result; rather:\r\n- a \"mimetype\" entry is set (ie. no underscore)\r\n- the mimetype entry contains the actual mime type (eg. \"application/pdf\") rather than a description (eg. \"PDF document, version 1.2\")\r\n\r\n##### STEPS TO REPRODUCE\r\n```yaml\r\n- name: Get mime type of test file\r\n stat: path=\"/home/oliver/mozilla.pdf\"\r\n register: my_stat_check\r\n\r\n- debug:\r\n msg: \"{{ my_stat_check.stat.mimetype }}\"\r\n\r\n- debug:\r\n msg: \"{{ my_stat_check.stat.mime_type }}\"\r\n```\r\n\r\n\r\n##### EXPECTED RESULTS\r\nI expected the first debug message to fail, and expected the second one to succeed and print \"PDF document, version 1.2\".\r\n\r\nAlternatively, the documentation should state that \"mimetype\" will be set, and will contain the technical mime type rather than a description.\r\nThough admittedly I'd prefer to also get the descriptive type output, since eg. for swap files the mime type is always \"application/octet-stream\" (so a swap file is indistinguishable from any other binary file); while the descriptive type is something like \"Linux/i386 swap file (new style)\" which is more useful.\r\n\r\n##### ACTUAL RESULTS\r\nAt the moment, the first debug message will work and will print \"application/pdf\". The second debug message will fail with \"The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'mime_type'\".\r\n\r\n```\r\nTASK [tools : Get mime type of test file] *****************************************************************************************************************************************************************************************************\r\nok: [myhost] => {\"changed\": false, \"stat\": {\"atime\": 1519571973.3959274, \"attr_flags\": \"e\", \"attributes\": [\"extents\"], \"block_size\": 4096, \"blocks\": 352, \"charset\": \"binary\", \"checksum\": \"2d9eb9f17601726c56bd0c4fbc770430d0ac2277\", \"ctime\": 1519484291.234419, \"dev\": 2098, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 1000, \"gr_name\": \"oliver\", \"inode\": 12592144, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"md5\": \"760bf09b20afc699ad5cb4cabf3a151a\", \"mimetype\": \"application/pdf\", \"mode\": \"0664\", \"mtime\": 1519484291.234419, \"nlink\": 1, \"path\": \"/home/oliver/mozilla.pdf\", \"pw_name\": \"oliver\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 180168, \"uid\": 1000, \"version\": \"18446744071694658390\", \"wgrp\": true, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}\r\n\r\nTASK [tools : debug] **************************************************************************************************************************************************************************************************************************\r\nok: [myhost] => {\r\n \"msg\": \"application/pdf\"\r\n}\r\n\r\nTASK [tools : debug] 
**************************************************************************************************************************************************************************************************************************\r\nfatal: [myhost]: FAILED! => {\"msg\": \"The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'mime_type'\\n\\nThe error appears to have been in '/home/oliver/devel/myhost/ansible/roles/tools/tasks/main.yml': line 40, column 3, but may\\nbe elsewhere in the file depending on the exact syntax problem.\\n\\nThe offending line appears to be:\\n\\n\\n- debug:\\n ^ here\\n\\nexception type: \\nexception: 'dict object' has no attribute 'mime_type'\"}\r\n\tto retry, use: --limit @/home/oliver/devel/myhost/ansible/playbooks/desktop.retry\r\n```\r\n", "pr_html_url": "https://github.com/ansible/ansible/pull/36693", "file_loc": {"base_commit": "141d638e590897d4ec5371c4868f027dad95a38e", "files": [{"path": "lib/ansible/modules/files/stat.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [318, 324]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["lib/ansible/modules/files/stat.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "huggingface", "repo_name": "transformers", "base_commit": "ba1b3db70907b975b5ca52b9957c5ed7a186a0fa", "iss_has_pr": 1, "iss_html_url": "https://github.com/huggingface/transformers/issues/12990", "iss_label": "", "title": "kindly adding some documentations on t5-v1_1-base\"\"", "body": "## Environment info\r\n\r\n\r\n- `transformers` version:\r\n- Platform:\r\n- Python version:\r\n- PyTorch version (GPU?):\r\n- Tensorflow version (GPU?):\r\n- Using GPU in script?:\r\n- Using distributed or parallel set-up in script?:\r\n\r\n### Who can help\r\n\r\n\r\nDocumentation: @sgugger\r\nHi\r\nCould you kindly add some documentations on \"t5-v1_1-base\"? 
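(Editor's note: for context, the v1.1 checkpoints are published under the `google/` namespace on the Hub; a minimal loading sketch follows. The gated-GELU detail is a documented architectural difference; the snippet itself is illustrative.)

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tok = AutoTokenizer.from_pretrained("google/t5-v1_1-base")
model = T5ForConditionalGeneration.from_pretrained("google/t5-v1_1-base")

# v1.1 swaps ReLU for gated-GELU feed-forward blocks, so its parameter count
# differs from the original t5-base despite the shared "base" label.
print(sum(p.numel() for p in model.parameters()))
```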
I tested one code with t5-base and t5-v1 version, for t5-v1 I got memory issue, this seems to me the model size is different and larger, also fast tokenizer for this model does not work, could you kindly add a documentation on these differences?\r\n\r\nthanks a lot.\r\n\r\n\r\n", "pr_html_url": "https://github.com/huggingface/transformers/pull/13240", "file_loc": {"base_commit": "ba1b3db70907b975b5ca52b9957c5ed7a186a0fa", "files": [{"path": "README.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [274]}}}, {"path": "docs/source/index.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [610], "mod": [285, 288, 291, 295, 298, 301, 303, 306, 310, 313]}}}, {"path": "docs/source/model_doc/byt5.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [41], "mod": [43]}}}, {"path": "docs/source/model_doc/mt5.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [30]}}}, {"path": "docs/source/model_doc/t5.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [53, 102], "mod": [16, 17, 45, 46, 47, 48, 49, 58, 59, 60, 61, 62, 66, 75, 77, 78, 79, 81, 82, 83, 84, 88, 89, 90, 94, 95, 96, 98, 99, 100, 101]}}}, {"path": "src/transformers/models/t5/modeling_flax_t5.py", "status": "modified", "Loc": {"('FlaxT5PreTrainedModel', 'encode', 1044)": {"add": [1063], "mod": [1062, 1066]}, "('FlaxT5PreTrainedModel', 'decode', 1101)": {"add": [1120, 1123], "mod": [1122, 1126, 1133]}, "(None, None, None)": {"add": [1333, 1621], "mod": [1332, 1620, 1624, 1628]}, "('FlaxT5ForConditionalGeneration', 'decode', 1452)": {"add": [1471, 1474], "mod": [1473, 1477, 1484]}}}, {"path": "src/transformers/models/t5/modeling_t5.py", "status": "modified", "Loc": {"('T5Model', 'forward', 1317)": {"add": [1348], "mod": [1347]}, "('T5ForConditionalGeneration', 'forward', 1506)": {"add": [1539, 1547], "mod": [1541, 1546]}, "(None, None, None)": {"mod": [1237]}}}, {"path": "src/transformers/models/t5/modeling_tf_t5.py", "status": "modified", "Loc": {"('TFT5Model', 'call', 1105)": {"add": [1137], "mod": [1136]}, "('TFT5ForConditionalGeneration', 'call', 1290)": {"add": [1323], "mod": [1325, 1330, 1332]}, "('TFT5EncoderModel', 'call', 1557)": {"mod": [1574]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["src/transformers/models/t5/modeling_flax_t5.py", "src/transformers/models/t5/modeling_t5.py", "src/transformers/models/t5/modeling_tf_t5.py"], "doc": ["docs/source/model_doc/t5.rst", "docs/source/index.rst", "docs/source/model_doc/mt5.rst", "README.md", "docs/source/model_doc/byt5.rst"], "test": [], "config": [], "asset": []}}, {"organization": "nvbn", "repo_name": "thefuck", "base_commit": "8fa10b1049ddf21f188b9605bcd5afbe33bf33db", "iss_has_pr": 1, "iss_html_url": "https://github.com/nvbn/thefuck/issues/975", "iss_label": "enhancement\nhacktoberfest", "title": "Correcting `app-install` to `apt-get install` rather than `install`", "body": "\r\n\r\n\r\n\r\nThe output of `thefuck --version` (something like `The Fuck 3.1 using Python\r\n3.5.0 and Bash 4.4.12(1)-release`):\r\n\r\n The Fuck 3.29 using Python 3.6.8 and Bash 4.4.20(1)-release\r\n\r\nYour system (Debian 7, ArchLinux, Windows, etc.):\r\n\r\n Ubuntu 18.04.3 LTS\r\n\r\nHow to reproduce the bug:\r\n\r\n ~$ sudo apt-install python\r\n fuck\r\n\r\nThe output of The Fuck with `THEFUCK_DEBUG=true` exported (typically execute `export THEFUCK_DEBUG=true` in your 
shell before The Fuck):\r\n\r\n```\r\nDEBUG: Run with settings: {'alter_history': True,\r\n 'debug': True,\r\n 'env': {'GIT_TRACE': '1', 'LANG': 'C', 'LC_ALL': 'C'},\r\n 'exclude_rules': [],\r\n 'history_limit': None,\r\n 'instant_mode': False,\r\n 'no_colors': False,\r\n 'num_close_matches': 3,\r\n 'priority': {},\r\n 'repeat': False,\r\n 'require_confirmation': True,\r\n 'rules': [],\r\n 'slow_commands': ['lein', 'react-native', 'gradle', './gradlew', 'vagrant'],\r\n 'user_dir': PosixPath('/home/user/.config/thefuck'),\r\n 'wait_command': 3,\r\n 'wait_slow_command': 15}\r\nDEBUG: Received output: sudo: apt-install: command not found\r\n\r\nDEBUG: Call: sudo apt-install python; with env: {'CLUTTER_IM_MODULE': 'xim', 'LS_COLORS': 'rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:', 'LESSCLOSE': '/usr/bin/lesspipe %s %s', 'XDG_MENU_PREFIX': 'gnome-', 'LANG': 'C', 'DISPLAY': ':0', 'GNOME_SHELL_SESSION_MODE': 'ubuntu', 'COLORTERM': 'truecolor', 'TF_SHELL_ALIASES': 'alias alert=\\'notify-send --urgency=low -i \"$([ $? 
= 0 ] && echo terminal || echo error)\" \"$(history|tail -n1|sed -e \\'\\\\\\'\\'s/^\\\\s*[0-9]\\\\+\\\\s*//;s/[;&|]\\\\s*alert$//\\'\\\\\\'\\')\"\\'\\nalias egrep=\\'egrep --color=auto\\'\\nalias fgrep=\\'fgrep --color=auto\\'\\nalias grep=\\'grep --color=auto\\'\\nalias l=\\'ls -CF\\'\\nalias la=\\'ls -A\\'\\nalias ll=\\'ls -alF\\'\\nalias ls=\\'ls --color=auto\\'', 'DESKTOP_AUTOSTART_ID': '10e5aedf3552f69d7a157076635571365600000035090007', 'USERNAME': 'user', 'XDG_VTNR': '2', 'PYTHONIOENCODING': 'utf-8', 'SSH_AUTH_SOCK': '/run/user/1000/keyring/ssh', 'MANDATORY_PATH': '/usr/share/gconf/ubuntu.mandatory.path', 'XDG_SESSION_ID': '2', 'USER': 'user', 'DESKTOP_SESSION': 'ubuntu', 'QT4_IM_MODULE': 'xim', 'TEXTDOMAINDIR': '/usr/share/locale/', 'GNOME_TERMINAL_SCREEN': '/org/gnome/Terminal/screen/988562f2_716d_4bc1_9825_43d1608e1ccb', 'TF_SHELL': 'bash', 'DEFAULTS_PATH': '/usr/share/gconf/ubuntu.default.path', 'PWD': '/home/user', 'HOME': '/home/user', 'TEXTDOMAIN': 'im-config', 'SSH_AGENT_PID': '3588', 'QT_ACCESSIBILITY': '1', 'XDG_SESSION_TYPE': 'x11', 'XDG_DATA_DIRS': '/usr/share/ubuntu:/usr/local/share:/usr/share:/var/lib/snapd/desktop', 'XDG_SESSION_DESKTOP': 'ubuntu', 'GTK_MODULES': 'gail:atk-bridge', 'WINDOWPATH': '2', 'TERM': 'xterm-256color', 'SHELL': '/bin/bash', 'VTE_VERSION': '5202', 'QT_IM_MODULE': 'ibus', 'XMODIFIERS': '@im=ibus', 'IM_CONFIG_PHASE': '2', 'XDG_CURRENT_DESKTOP': 'ubuntu:GNOME', 'GPG_AGENT_INFO': '/run/user/1000/gnupg/S.gpg-agent:0:1', 'TF_ALIAS': 'fuck', 'GNOME_TERMINAL_SERVICE': ':1.82', 'XDG_SEAT': 'seat0', 'SHLVL': '1', 'LANGUAGE': 'en_IL:en', 'GDMSESSION': 'ubuntu', 'GNOME_DESKTOP_SESSION_ID': 'this-is-deprecated', 'LOGNAME': 'user', 'DBUS_SESSION_BUS_ADDRESS': 'unix:path=/run/user/1000/bus', 'XDG_RUNTIME_DIR': '/run/user/1000', 'XAUTHORITY': '/run/user/1000/gdm/Xauthority', 'TF_HISTORY': '\\t apt-install brew\\n\\t apt-get install brew\\n\\t fuck\\n\\t sudo apt-install python\\n\\t sudo install python\\n\\t thefuck --version\\n\\t adb_release -a\\n\\t lsb_release -a\\n\\t export THEFUCK_DEBUG=true\\n\\t sudo apt-install python', 'XDG_CONFIG_DIRS': '/etc/xdg/xdg-ubuntu:/etc/xdg', 'PATH': '/home/user/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin', 'THEFUCK_DEBUG': 'true', 'SESSION_MANAGER': 'local/virt-lnx:@/tmp/.ICE-unix/3509,unix/virt-lnx:/tmp/.ICE-unix/3509', 'LESSOPEN': '| /usr/bin/lesspipe %s', 'GTK_IM_MODULE': 'ibus', '_': '/usr/local/bin/thefuck', 'LC_ALL': 'C', 'GIT_TRACE': '1'}; is slow: took: 0:00:00.008240\r\nDEBUG: Importing rule: adb_unknown_command; took: 0:00:00.000389\r\nDEBUG: Importing rule: ag_literal; took: 0:00:00.000628\r\nDEBUG: Importing rule: apt_get; took: 0:00:00.014902\r\nDEBUG: Importing rule: apt_get_search; took: 0:00:00.000415\r\nDEBUG: Importing rule: apt_invalid_operation; took: 0:00:00.000915\r\nDEBUG: Importing rule: apt_list_upgradable; took: 0:00:00.000459\r\nDEBUG: Importing rule: apt_upgrade; took: 0:00:00.000436\r\nDEBUG: Importing rule: aws_cli; took: 0:00:00.000384\r\nDEBUG: Importing rule: az_cli; took: 0:00:00.000309\r\nDEBUG: Importing rule: brew_cask_dependency; took: 0:00:00.000625\r\nDEBUG: Importing rule: brew_install; took: 0:00:00.000120\r\nDEBUG: Importing rule: brew_link; took: 0:00:00.000283\r\nDEBUG: Importing rule: brew_reinstall; took: 0:00:00.000605\r\nDEBUG: Importing rule: brew_uninstall; took: 0:00:00.000291\r\nDEBUG: Importing rule: brew_unknown_command; took: 0:00:00.000142\r\nDEBUG: Importing rule: brew_update_formula; took: 
0:00:00.000292\r\nDEBUG: Importing rule: brew_upgrade; took: 0:00:00.000112\r\nDEBUG: Importing rule: cargo; took: 0:00:00.000098\r\nDEBUG: Importing rule: cargo_no_command; took: 0:00:00.000292\r\nDEBUG: Importing rule: cat_dir; took: 0:00:00.000322\r\nDEBUG: Importing rule: cd_correction; took: 0:00:00.001288\r\nDEBUG: Importing rule: cd_mkdir; took: 0:00:00.000479\r\nDEBUG: Importing rule: cd_parent; took: 0:00:00.000114\r\nDEBUG: Importing rule: chmod_x; took: 0:00:00.000108\r\nDEBUG: Importing rule: composer_not_command; took: 0:00:00.000309\r\nDEBUG: Importing rule: cp_omitting_directory; took: 0:00:00.000570\r\nDEBUG: Importing rule: cpp11; took: 0:00:00.000311\r\nDEBUG: Importing rule: dirty_untar; took: 0:00:00.001544\r\nDEBUG: Importing rule: dirty_unzip; took: 0:00:00.001127\r\nDEBUG: Importing rule: django_south_ghost; took: 0:00:00.000117\r\nDEBUG: Importing rule: django_south_merge; took: 0:00:00.000106\r\nDEBUG: Importing rule: dnf_no_such_command; took: 0:00:00.000930\r\nDEBUG: Importing rule: docker_login; took: 0:00:00.000350\r\nDEBUG: Importing rule: docker_not_command; took: 0:00:00.000597\r\nDEBUG: Importing rule: dry; took: 0:00:00.000107\r\nDEBUG: Importing rule: fab_command_not_found; took: 0:00:00.000416\r\nDEBUG: Importing rule: fix_alt_space; took: 0:00:00.000284\r\nDEBUG: Importing rule: fix_file; took: 0:00:00.003212\r\nDEBUG: Importing rule: gem_unknown_command; took: 0:00:00.000493\r\nDEBUG: Importing rule: git_add; took: 0:00:00.000547\r\nDEBUG: Importing rule: git_add_force; took: 0:00:00.000371\r\nDEBUG: Importing rule: git_bisect_usage; took: 0:00:00.000382\r\nDEBUG: Importing rule: git_branch_delete; took: 0:00:00.000293\r\nDEBUG: Importing rule: git_branch_exists; took: 0:00:00.000364\r\nDEBUG: Importing rule: git_branch_list; took: 0:00:00.000304\r\nDEBUG: Importing rule: git_checkout; took: 0:00:00.000835\r\nDEBUG: Importing rule: git_commit_amend; took: 0:00:00.000306\r\nDEBUG: Importing rule: git_commit_reset; took: 0:00:00.000281\r\nDEBUG: Importing rule: git_diff_no_index; took: 0:00:00.000288\r\nDEBUG: Importing rule: git_diff_staged; took: 0:00:00.000275\r\nDEBUG: Importing rule: git_fix_stash; took: 0:00:00.000285\r\nDEBUG: Importing rule: git_flag_after_filename; took: 0:00:00.000278\r\nDEBUG: Importing rule: git_help_aliased; took: 0:00:00.000278\r\nDEBUG: Importing rule: git_merge; took: 0:00:00.000272\r\nDEBUG: Importing rule: git_merge_unrelated; took: 0:00:00.000264\r\nDEBUG: Importing rule: git_not_command; took: 0:00:00.000326\r\nDEBUG: Importing rule: git_pull; took: 0:00:00.000277\r\nDEBUG: Importing rule: git_pull_clone; took: 0:00:00.000272\r\nDEBUG: Importing rule: git_pull_uncommitted_changes; took: 0:00:00.000270\r\nDEBUG: Importing rule: git_push; took: 0:00:00.000273\r\nDEBUG: Importing rule: git_push_different_branch_names; took: 0:00:00.000268\r\nDEBUG: Importing rule: git_push_force; took: 0:00:00.000270\r\nDEBUG: Importing rule: git_push_pull; took: 0:00:00.000273\r\nDEBUG: Importing rule: git_push_without_commits; took: 0:00:00.000352\r\nDEBUG: Importing rule: git_rebase_merge_dir; took: 0:00:00.000279\r\nDEBUG: Importing rule: git_rebase_no_changes; took: 0:00:00.000187\r\nDEBUG: Importing rule: git_remote_delete; took: 0:00:00.000264\r\nDEBUG: Importing rule: git_remote_seturl_add; took: 0:00:00.000192\r\nDEBUG: Importing rule: git_rm_local_modifications; took: 0:00:00.000297\r\nDEBUG: Importing rule: git_rm_recursive; took: 0:00:00.000324\r\nDEBUG: Importing rule: git_rm_staged; took: 0:00:00.000276\r\nDEBUG: Importing 
rule: git_stash; took: 0:00:00.000271\r\nDEBUG: Importing rule: git_stash_pop; took: 0:00:00.000273\r\nDEBUG: Importing rule: git_tag_force; took: 0:00:00.000265\r\nDEBUG: Importing rule: git_two_dashes; took: 0:00:00.000273\r\nDEBUG: Importing rule: go_run; took: 0:00:00.000287\r\nDEBUG: Importing rule: gradle_no_task; took: 0:00:00.000598\r\nDEBUG: Importing rule: gradle_wrapper; took: 0:00:00.000341\r\nDEBUG: Importing rule: grep_arguments_order; took: 0:00:00.000293\r\nDEBUG: Importing rule: grep_recursive; took: 0:00:00.000296\r\nDEBUG: Importing rule: grunt_task_not_found; took: 0:00:00.000514\r\nDEBUG: Importing rule: gulp_not_task; took: 0:00:00.000295\r\nDEBUG: Importing rule: has_exists_script; took: 0:00:00.000275\r\nDEBUG: Importing rule: heroku_multiple_apps; took: 0:00:00.000287\r\nDEBUG: Importing rule: heroku_not_command; took: 0:00:00.000285\r\nDEBUG: Importing rule: history; took: 0:00:00.000116\r\nDEBUG: Importing rule: hostscli; took: 0:00:00.000655\r\nDEBUG: Importing rule: ifconfig_device_not_found; took: 0:00:00.000415\r\nDEBUG: Importing rule: java; took: 0:00:00.000295\r\nDEBUG: Importing rule: javac; took: 0:00:00.000282\r\nDEBUG: Importing rule: lein_not_task; took: 0:00:00.000615\r\nDEBUG: Importing rule: ln_no_hard_link; took: 0:00:00.000452\r\nDEBUG: Importing rule: ln_s_order; took: 0:00:00.000435\r\nDEBUG: Importing rule: long_form_help; took: 0:00:00.000128\r\nDEBUG: Importing rule: ls_all; took: 0:00:00.000304\r\nDEBUG: Importing rule: ls_lah; took: 0:00:00.000298\r\nDEBUG: Importing rule: man; took: 0:00:00.000308\r\nDEBUG: Importing rule: man_no_space; took: 0:00:00.000105\r\nDEBUG: Importing rule: mercurial; took: 0:00:00.000275\r\nDEBUG: Importing rule: missing_space_before_subcommand; took: 0:00:00.000117\r\nDEBUG: Importing rule: mkdir_p; took: 0:00:00.000281\r\nDEBUG: Importing rule: mvn_no_command; took: 0:00:00.000317\r\nDEBUG: Importing rule: mvn_unknown_lifecycle_phase; took: 0:00:00.000287\r\nDEBUG: Importing rule: no_command; took: 0:00:00.000282\r\nDEBUG: Importing rule: no_such_file; took: 0:00:00.000114\r\nDEBUG: Importing rule: npm_missing_script; took: 0:00:00.000624\r\nDEBUG: Importing rule: npm_run_script; took: 0:00:00.000367\r\nDEBUG: Importing rule: npm_wrong_command; took: 0:00:00.000446\r\nDEBUG: Importing rule: open; took: 0:00:00.000362\r\nDEBUG: Importing rule: pacman; took: 0:00:00.000502\r\nDEBUG: Importing rule: pacman_not_found; took: 0:00:00.000116\r\nDEBUG: Importing rule: path_from_history; took: 0:00:00.000126\r\nDEBUG: Importing rule: php_s; took: 0:00:00.000314\r\nDEBUG: Importing rule: pip_install; took: 0:00:00.000375\r\nDEBUG: Importing rule: pip_unknown_command; took: 0:00:00.000362\r\nDEBUG: Importing rule: port_already_in_use; took: 0:00:00.000196\r\nDEBUG: Importing rule: prove_recursively; took: 0:00:00.000283\r\nDEBUG: Importing rule: pyenv_no_such_command; took: 0:00:00.000610\r\nDEBUG: Importing rule: python_command; took: 0:00:00.000301\r\nDEBUG: Importing rule: python_execute; took: 0:00:00.000275\r\nDEBUG: Importing rule: quotation_marks; took: 0:00:00.000099\r\nDEBUG: Importing rule: react_native_command_unrecognized; took: 0:00:00.000368\r\nDEBUG: Importing rule: remove_trailing_cedilla; took: 0:00:00.000103\r\nDEBUG: Importing rule: rm_dir; took: 0:00:00.000283\r\nDEBUG: Importing rule: rm_root; took: 0:00:00.000373\r\nDEBUG: Importing rule: scm_correction; took: 0:00:00.000289\r\nDEBUG: Importing rule: sed_unterminated_s; took: 0:00:00.000282\r\nDEBUG: Importing rule: sl_ls; took: 
0:00:00.000098\r\nDEBUG: Importing rule: ssh_known_hosts; took: 0:00:00.000281\r\nDEBUG: Importing rule: sudo; took: 0:00:00.000105\r\nDEBUG: Importing rule: sudo_command_from_user_path; took: 0:00:00.000280\r\nDEBUG: Importing rule: switch_lang; took: 0:00:00.000145\r\nDEBUG: Importing rule: systemctl; took: 0:00:00.000448\r\nDEBUG: Importing rule: test.py; took: 0:00:00.000098\r\nDEBUG: Importing rule: tmux; took: 0:00:00.000332\r\nDEBUG: Importing rule: touch; took: 0:00:00.000405\r\nDEBUG: Importing rule: tsuru_login; took: 0:00:00.000333\r\nDEBUG: Importing rule: tsuru_not_command; took: 0:00:00.000317\r\nDEBUG: Importing rule: unknown_command; took: 0:00:00.000107\r\nDEBUG: Importing rule: unsudo; took: 0:00:00.000098\r\nDEBUG: Importing rule: vagrant_up; took: 0:00:00.000282\r\nDEBUG: Importing rule: whois; took: 0:00:00.000441\r\nDEBUG: Importing rule: workon_doesnt_exists; took: 0:00:00.000371\r\nDEBUG: Importing rule: yarn_alias; took: 0:00:00.000272\r\nDEBUG: Importing rule: yarn_command_not_found; took: 0:00:00.000719\r\nDEBUG: Importing rule: yarn_command_replaced; took: 0:00:00.000386\r\nDEBUG: Importing rule: yarn_help; took: 0:00:00.000285\r\nDEBUG: Trying rule: path_from_history; took: 0:00:00.000518\r\nDEBUG: Trying rule: dry; took: 0:00:00.000080\r\nDEBUG: Trying rule: git_stash_pop; took: 0:00:00.000024\r\nDEBUG: Trying rule: test.py; took: 0:00:00.000002\r\nDEBUG: Trying rule: adb_unknown_command; took: 0:00:00.000015\r\nDEBUG: Trying rule: ag_literal; took: 0:00:00.000015\r\nDEBUG: Trying rule: apt_get; took: 0:00:00.000358\r\nDEBUG: Trying rule: apt_get_search; took: 0:00:00.000020\r\nDEBUG: Trying rule: apt_invalid_operation; took: 0:00:00.000068\r\nDEBUG: Trying rule: apt_list_upgradable; took: 0:00:00.000059\r\nDEBUG: Trying rule: apt_upgrade; took: 0:00:00.000019\r\nDEBUG: Trying rule: aws_cli; took: 0:00:00.000015\r\nDEBUG: Trying rule: az_cli; took: 0:00:00.000014\r\nDEBUG: Trying rule: brew_link; took: 0:00:00.000016\r\nDEBUG: Trying rule: brew_reinstall; took: 0:00:00.000013\r\nDEBUG: Trying rule: brew_uninstall; took: 0:00:00.000012\r\nDEBUG: Trying rule: brew_update_formula; took: 0:00:00.000013\r\nDEBUG: Trying rule: cargo; took: 0:00:00.000002\r\nDEBUG: Trying rule: cargo_no_command; took: 0:00:00.000015\r\nDEBUG: Trying rule: cat_dir; took: 0:00:00.000015\r\nDEBUG: Trying rule: cd_correction; took: 0:00:00.000055\r\nDEBUG: Trying rule: cd_mkdir; took: 0:00:00.000017\r\nDEBUG: Trying rule: cd_parent; took: 0:00:00.000002\r\nDEBUG: Trying rule: chmod_x; took: 0:00:00.000003\r\nDEBUG: Trying rule: composer_not_command; took: 0:00:00.000014\r\nDEBUG: Trying rule: cp_omitting_directory; took: 0:00:00.000053\r\nDEBUG: Trying rule: cpp11; took: 0:00:00.000015\r\nDEBUG: Trying rule: dirty_untar; took: 0:00:00.000014\r\nDEBUG: Trying rule: dirty_unzip; took: 0:00:00.000013\r\nDEBUG: Trying rule: django_south_ghost; took: 0:00:00.000003\r\nDEBUG: Trying rule: django_south_merge; took: 0:00:00.000002\r\nDEBUG: Trying rule: docker_login; took: 0:00:00.000014\r\nDEBUG: Trying rule: docker_not_command; took: 0:00:00.000053\r\nDEBUG: Trying rule: fab_command_not_found; took: 0:00:00.000014\r\nDEBUG: Trying rule: fix_alt_space; took: 0:00:00.000008\r\nDEBUG: Trying rule: fix_file; took: 0:00:00.000009\r\nDEBUG: Trying rule: gem_unknown_command; took: 0:00:00.000018\r\nDEBUG: Trying rule: git_add; took: 0:00:00.000013\r\nDEBUG: Trying rule: git_add_force; took: 0:00:00.000011\r\nDEBUG: Trying rule: git_bisect_usage; took: 0:00:00.000012\r\nDEBUG: Trying rule: 
git_branch_delete; took: 0:00:00.000011\r\nDEBUG: Trying rule: git_branch_exists; took: 0:00:00.000011\r\nDEBUG: Trying rule: git_branch_list; took: 0:00:00.000011\r\nDEBUG: Trying rule: git_checkout; took: 0:00:00.000011\r\nDEBUG: Trying rule: git_commit_amend; took: 0:00:00.000010\r\nDEBUG: Trying rule: git_commit_reset; took: 0:00:00.000014\r\nDEBUG: Trying rule: git_diff_no_index; took: 0:00:00.000011\r\nDEBUG: Trying rule: git_diff_staged; took: 0:00:00.000011\r\nDEBUG: Trying rule: git_fix_stash; took: 0:00:00.000011\r\nDEBUG: Trying rule: git_flag_after_filename; took: 0:00:00.000011\r\nDEBUG: Trying rule: git_help_aliased; took: 0:00:00.000011\r\nDEBUG: Trying rule: git_merge; took: 0:00:00.000011\r\nDEBUG: Trying rule: git_merge_unrelated; took: 0:00:00.000011\r\nDEBUG: Trying rule: git_not_command; took: 0:00:00.000011\r\nDEBUG: Trying rule: git_pull; took: 0:00:00.000014\r\nDEBUG: Trying rule: git_pull_clone; took: 0:00:00.000011\r\nDEBUG: Trying rule: git_pull_uncommitted_changes; took: 0:00:00.000011\r\nDEBUG: Trying rule: git_push; took: 0:00:00.000011\r\nDEBUG: Trying rule: git_push_different_branch_names; took: 0:00:00.000011\r\nDEBUG: Trying rule: git_push_pull; took: 0:00:00.000011\r\nDEBUG: Trying rule: git_push_without_commits; took: 0:00:00.000011\r\nDEBUG: Trying rule: git_rebase_merge_dir; took: 0:00:00.000011\r\nDEBUG: Trying rule: git_rebase_no_changes; took: 0:00:00.000011\r\nDEBUG: Trying rule: git_remote_delete; took: 0:00:00.000014\r\nDEBUG: Trying rule: git_remote_seturl_add; took: 0:00:00.000011\r\nDEBUG: Trying rule: git_rm_local_modifications; took: 0:00:00.000011\r\nDEBUG: Trying rule: git_rm_recursive; took: 0:00:00.000011\r\nDEBUG: Trying rule: git_rm_staged; took: 0:00:00.000011\r\nDEBUG: Trying rule: git_stash; took: 0:00:00.000011\r\nDEBUG: Trying rule: git_tag_force; took: 0:00:00.000012\r\nDEBUG: Trying rule: git_two_dashes; took: 0:00:00.000011\r\nDEBUG: Trying rule: go_run; took: 0:00:00.000015\r\nDEBUG: Trying rule: gradle_no_task; took: 0:00:00.000018\r\nDEBUG: Trying rule: gradle_wrapper; took: 0:00:00.000014\r\nDEBUG: Trying rule: grep_arguments_order; took: 0:00:00.000014\r\nDEBUG: Trying rule: grep_recursive; took: 0:00:00.000013\r\nDEBUG: Trying rule: grunt_task_not_found; took: 0:00:00.000014\r\nDEBUG: Trying rule: gulp_not_task; took: 0:00:00.000013\r\nDEBUG: Trying rule: has_exists_script; took: 0:00:00.000053\r\nDEBUG: Trying rule: heroku_multiple_apps; took: 0:00:00.000015\r\nDEBUG: Trying rule: heroku_not_command; took: 0:00:00.000015\r\nDEBUG: Trying rule: hostscli; took: 0:00:00.000053\r\nDEBUG: Trying rule: ifconfig_device_not_found; took: 0:00:00.000014\r\nDEBUG: Trying rule: java; took: 0:00:00.000013\r\nDEBUG: Trying rule: javac; took: 0:00:00.000014\r\nDEBUG: Trying rule: lein_not_task; took: 0:00:00.000052\r\nDEBUG: Trying rule: ln_no_hard_link; took: 0:00:00.000007\r\nDEBUG: Trying rule: ln_s_order; took: 0:00:00.000044\r\nDEBUG: Trying rule: ls_all; took: 0:00:00.000015\r\nDEBUG: Trying rule: ls_lah; took: 0:00:00.000012\r\nDEBUG: Trying rule: man; took: 0:00:00.000014\r\nDEBUG: Trying rule: mercurial; took: 0:00:00.000017\r\nDEBUG: Trying rule: mkdir_p; took: 0:00:00.000007\r\nDEBUG: Trying rule: mvn_no_command; took: 0:00:00.000020\r\nDEBUG: Trying rule: mvn_unknown_lifecycle_phase; took: 0:00:00.000012\r\nDEBUG: Trying rule: no_such_file; took: 0:00:00.000608\r\nDEBUG: Trying rule: npm_missing_script; took: 0:00:00.000017\r\nDEBUG: Trying rule: npm_run_script; took: 0:00:00.000012\r\nDEBUG: Trying rule: npm_wrong_command; 
took: 0:00:00.000059\r\nDEBUG: Trying rule: open; took: 0:00:00.000019\r\nDEBUG: Trying rule: php_s; took: 0:00:00.000019\r\nDEBUG: Trying rule: pip_install; took: 0:00:00.000058\r\nDEBUG: Trying rule: pip_unknown_command; took: 0:00:00.000055\r\nDEBUG: Trying rule: port_already_in_use; took: 0:00:00.000489\r\nDEBUG: Trying rule: prove_recursively; took: 0:00:00.000022\r\nDEBUG: Trying rule: pyenv_no_such_command; took: 0:00:00.000015\r\nDEBUG: Trying rule: python_command; took: 0:00:00.000047\r\nDEBUG: Trying rule: python_execute; took: 0:00:00.000015\r\nDEBUG: Trying rule: quotation_marks; took: 0:00:00.000003\r\nDEBUG: Trying rule: react_native_command_unrecognized; took: 0:00:00.000013\r\nDEBUG: Trying rule: remove_trailing_cedilla; took: 0:00:00.000003\r\nDEBUG: Trying rule: rm_dir; took: 0:00:00.000009\r\nDEBUG: Trying rule: scm_correction; took: 0:00:00.000017\r\nDEBUG: Trying rule: sed_unterminated_s; took: 0:00:00.000013\r\nDEBUG: Trying rule: sl_ls; took: 0:00:00.000002\r\nDEBUG: Trying rule: ssh_known_hosts; took: 0:00:00.000014\r\nDEBUG: Trying rule: sudo; took: 0:00:00.000004\r\nDEBUG: Trying rule: sudo_command_from_user_path; took: 0:00:00.000125\r\nDEBUG: Trying rule: switch_lang; took: 0:00:00.000022\r\nDEBUG: Trying rule: systemctl; took: 0:00:00.000057\r\nDEBUG: Trying rule: tmux; took: 0:00:00.000014\r\nDEBUG: Trying rule: touch; took: 0:00:00.000016\r\nDEBUG: Trying rule: tsuru_login; took: 0:00:00.000013\r\nDEBUG: Trying rule: tsuru_not_command; took: 0:00:00.000012\r\nDEBUG: Trying rule: unknown_command; took: 0:00:00.000114\r\nDEBUG: Trying rule: unsudo; took: 0:00:00.000004\r\nDEBUG: Trying rule: vagrant_up; took: 0:00:00.000015\r\nDEBUG: Trying rule: whois; took: 0:00:00.000014\r\nDEBUG: Trying rule: workon_doesnt_exists; took: 0:00:00.000014\r\nDEBUG: Trying rule: yarn_alias; took: 0:00:00.000013\r\nDEBUG: Trying rule: yarn_command_not_found; took: 0:00:00.000014\r\nDEBUG: Trying rule: yarn_command_replaced; took: 0:00:00.000021\r\nDEBUG: Trying rule: yarn_help; took: 0:00:00.000014\r\nDEBUG: Trying rule: man_no_space; took: 0:00:00.000002\r\nDEBUG: Trying rule: no_command; took: 0:00:00.011747\r\nsudo install python [enter/\u2191/\u2193/ctrl+c]\r\nAborted\r\nDEBUG: Total took: 0:00:04.897380\r\n\r\n```\r\n\r\nWhile this is an opinionated suggestion rather than objectively a bug, I imagine the correction I suggested is a little more intuitive than the current one.\r\n\r\nIf my suggestion is well received, I would be happy to take this issue and make the correction. 
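For concreteness, here is a minimal sketch of what such a rule could look like, assuming thefuck's standard `match`/`get_new_command` rule interface. The trigger condition and the rewrite are my own illustration, not the actual fix:

```python
# Hypothetical rule sketch: rewrite `apt-install <pkg>` to
# `apt-get install <pkg>` instead of the generic `sudo install <pkg>`
# correction seen in the debug log above.

def match(command):
    # Fire only when the misspelled `apt-install` was rejected by the shell.
    return ('apt-install' in command.script
            and 'not found' in command.output.lower())

def get_new_command(command):
    # Keep any `sudo` prefix and the arguments; swap just the broken token.
    return command.script.replace('apt-install', 'apt-get install')
```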
(Happy Hacktoberfest!)\r\n", "pr_html_url": "https://github.com/nvbn/thefuck/pull/977", "file_loc": {"base_commit": "8fa10b1049ddf21f188b9605bcd5afbe33bf33db", "files": [{"path": "README.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [338]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "ad24759871ea43131711cfce1e5fc69c06d82956", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/16668", "iss_label": "Clean", "title": "CLN: private impl of OrderedDefaultDict can be removed", "body": "https://github.com/pandas-dev/pandas/blob/master/pandas/compat/__init__.py#L376\r\n\r\nI think this was leftover from 2.6 compat.", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/16939", "file_loc": {"base_commit": "ad24759871ea43131711cfce1e5fc69c06d82956", "files": [{"path": "pandas/compat/__init__.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [24]}, "('OrderedDefaultdict', None, 376)": {"mod": [376, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 389, 390, 391, 392, 393, 395, 396, 397]}}}, {"path": "pandas/core/panel.py", "status": "modified", "Loc": {"('Panel', 'from_dict', 240)": {"add": [262], "mod": [265]}, "(None, None, None)": {"mod": [22]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["pandas/core/panel.py", "pandas/compat/__init__.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ansible", "repo_name": "ansible", "base_commit": "3694711a7e975324d52c258ab73a8f5e766a3f1c", "iss_has_pr": 1, "iss_html_url": "https://github.com/ansible/ansible/issues/54746", "iss_label": "module\nsupport:community\nbug\ntraceback\naffects_2.7\ncrypto", "title": "acme_certificate - dest must include path info or fails", "body": "\r\n\r\n\r\n\r\n##### SUMMARY\r\n\r\nCalling acme_certificate with dest set to a pure filename (no path) will cause the module to fail.\r\n\r\n##### ISSUE TYPE\r\n- Bug Report\r\n\r\n##### COMPONENT NAME\r\n\r\nacme_certificate\r\n\r\n##### ANSIBLE VERSION\r\n\r\n```paste below\r\nansible 2.7.9\r\n config file = /etc/ansible/ansible.cfg\r\n configured module search path = [u'/home/dhagan/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']\r\n ansible python module location = /home/dhagan/.local/lib/python2.7/site-packages/ansible\r\n executable location = /home/dhagan/.local/bin/ansible\r\n python version = 2.7.15rc1 (default, Nov 12 2018, 14:31:15) [GCC 7.3.0]\r\n```\r\n\r\n##### CONFIGURATION\r\n\r\n```paste below\r\nDEFAULT_CALLBACK_WHITELIST(/etc/ansible/ansible.cfg) = [u'timer']\r\n```\r\n\r\n##### OS / ENVIRONMENT\r\n\r\nUbuntu 18.04 on Windows 10\r\n\r\n##### STEPS TO REPRODUCE\r\n\r\nExample below uses dns-01 challenge against route53. 
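(Aside: the failing check, quoted under ACTUAL RESULTS further down, is easy to demonstrate in isolation. Below is a minimal sketch with `dest` as a placeholder filename; the fallback shown is one plausible fix, not necessarily the patch that was merged.)

```python
import os

# A bare filename has no directory component, so os.path.dirname()
# returns '' and os.access('', os.W_OK) is False, which trips the
# "Destination dir ... not writable" failure.
dest = "test.cert"
print(os.path.dirname(dest))                      # '' (empty string)
print(os.access(os.path.dirname(dest), os.W_OK))  # False

# One plausible fix: treat an empty dirname as the current directory.
dest_dir = os.path.dirname(dest) or "."
print(os.access(dest_dir, os.W_OK))               # True if the cwd is writable
```

Back to the repro setup: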
Either fill in w/ appropriate domain info, or modify as you see fit to allow challenge to pass.\r\n\r\n\r\n```yaml\r\n- name: letsencrypt\r\n hosts: localhost\r\n connection: local\r\n gather_facts: false\r\n vars:\r\n target: \"myhost.mytld.com\"\r\n zone: \"mytld.com\"\r\n contact: \"mailto:name@example.com\"\r\n\r\n\r\n tasks:\r\n - name: acme account\r\n acme_account:\r\n account_key_src: test_key.pem\r\n acme_directory: \"https://acme-staging-v02.api.letsencrypt.org/directory\"\r\n acme_version: 2\r\n allow_creation: true\r\n contact: \"{{ contact }}\"\r\n state: present\r\n terms_agreed: yes\r\n validate_certs: yes\r\n register: account\r\n\r\n - name: create private key\r\n openssl_privatekey:\r\n path: test.key\r\n size: 2048\r\n type: RSA\r\n\r\n - name: create CSR if not present\r\n openssl_csr:\r\n common_name: \"{{ target }}\"\r\n path: test.csr\r\n privatekey_path: test.key\r\n subject_alt_name: \"DNS:{{target}}\"\r\n\r\n - name: acme request\r\n acme_certificate:\r\n account_key_src: test_key.pem\r\n modify_account: no\r\n account_uri: \"{{ account.account_uri }}\"\r\n challenge: \"dns-01\"\r\n csr: test.csr\r\n dest: test.cert\r\n terms_agreed: yes\r\n validate_certs: yes\r\n register: acme_request\r\n\r\n - name: meet challenge requirements\r\n route53:\r\n zone: \"{{ zone }}\"\r\n record: \"{{ acme_request.challenge_data[target]['dns-01'].record }}\"\r\n type: TXT\r\n ttl: 60\r\n state: present\r\n overwrite: yes\r\n wait: yes\r\n value: \"{{ acme_request.challenge_data[target]['dns-01'].resource_value | regex_replace('^(.*)$', '\\\"\\\\1\\\"') }}\"\r\n when: acme_request is changed\r\n\r\n - name: acme certificate\r\n acme_certificate:\r\n account_key_src: test_key.pem\r\n modify_account: no\r\n account_uri: \"{{ account.account_uri }}\"\r\n challenge: \"dns-01\"\r\n src: test.csr\r\n dest: test.cert\r\n data: \"{{ acme_request }}\"\r\n\r\n```\r\n\r\n\r\n\r\n##### EXPECTED RESULTS\r\n\r\nCertificate gets written to test.cert in current directory.\r\n\r\n##### ACTUAL RESULTS\r\n\r\nThis section of ansible/lib/ansible/module_utils/acme.py, starting at line 121, causes the module to fail because os.path.dirname(dest) for a bare filename is empty.\r\n\r\n```\r\nelse:\r\n if not os.access(os.path.dirname(dest), os.W_OK):\r\n os.remove(tmpsrc)\r\n raise ModuleFailException(\"Destination dir %s not writable\" % (os.path.dirname(dest)))\r\n```\r\n\r\n```paste below\r\ndhagan@onmyoji-shi:~/cloud-ansible$ ansible-playbook -vvv test.yml\r\nansible-playbook 2.7.9\r\n config file = /etc/ansible/ansible.cfg\r\n configured module search path = [u'/home/dhagan/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']\r\n ansible python module location = /home/dhagan/.local/lib/python2.7/site-packages/ansible\r\n executable location = /home/dhagan/.local/bin/ansible-playbook\r\n python version = 2.7.15rc1 (default, Nov 12 2018, 14:31:15) [GCC 7.3.0]\r\nUsing /etc/ansible/ansible.cfg as config file\r\n/etc/ansible/hosts did not meet host_list requirements, check plugin documentation if this is unexpected\r\n/etc/ansible/hosts did not meet script requirements, check plugin documentation if this is unexpected\r\nParsed /etc/ansible/hosts inventory source with ini plugin\r\n [WARNING]: provided hosts list is empty, only localhost is available. 
Note that the implicit localhost does not match 'all'\r\n\r\n\r\nPLAYBOOK: test.yml **************************************************************************************************************************************************************************************************************************\r\n1 plays in test.yml\r\n\r\nPLAY [letsencrypt] **************************************************************************************************************************************************************************************************************************\r\nMETA: ran handlers\r\n\r\nTASK [acme account] *************************************************************************************************************************************************************************************************************************\r\ntask path: /mnt/xxxx/test.yml:12\r\n<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: dhagan\r\n<127.0.0.1> EXEC /bin/sh -c 'echo ~dhagan && sleep 0'\r\n<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p \"` echo /home/dhagan/.ansible/tmp/ansible-tmp-1554239227.37-238894588553222 `\" && echo ansible-tmp-1554239227.37-238894588553222=\"` echo /home/dhagan/.ansible/tmp/ansible-tmp-1554239227.37-238894588553222 `\" ) && sleep 0'\r\nUsing module file /home/dhagan/.local/lib/python2.7/site-packages/ansible/modules/crypto/acme/acme_account.py\r\n<127.0.0.1> PUT /home/dhagan/.ansible/tmp/ansible-local-12205ml_9CV/tmpmJ9dFJ TO /home/dhagan/.ansible/tmp/ansible-tmp-1554239227.37-238894588553222/AnsiballZ_acme_account.py\r\n<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/dhagan/.ansible/tmp/ansible-tmp-1554239227.37-238894588553222/ /home/dhagan/.ansible/tmp/ansible-tmp-1554239227.37-238894588553222/AnsiballZ_acme_account.py && sleep 0'\r\n<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /home/dhagan/.ansible/tmp/ansible-tmp-1554239227.37-238894588553222/AnsiballZ_acme_account.py && sleep 0'\r\n<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/dhagan/.ansible/tmp/ansible-tmp-1554239227.37-238894588553222/ > /dev/null 2>&1 && sleep 0'\r\nok: [localhost] => {\r\n \"account_uri\": \"https://acme-staging-v02.api.letsencrypt.org/acme/acct/xxxx\",\r\n \"changed\": false,\r\n \"invocation\": {\r\n \"module_args\": {\r\n \"account_key_content\": null,\r\n \"account_key_src\": \"test_key.pem\",\r\n \"account_uri\": null,\r\n \"acme_directory\": \"https://acme-staging-v02.api.letsencrypt.org/directory\",\r\n \"acme_version\": 2,\r\n \"allow_creation\": true,\r\n \"contact\": [\r\n \"mailto:xxxxxx\"\r\n ],\r\n \"new_account_key_content\": null,\r\n \"new_account_key_src\": null,\r\n \"select_crypto_backend\": \"auto\",\r\n \"state\": \"present\",\r\n \"terms_agreed\": true,\r\n \"validate_certs\": true\r\n }\r\n }\r\n}\r\n\r\nTASK [create private key] *******************************************************************************************************************************************************************************************************************\r\ntask path: /xxxx/test.yml:24\r\n<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: dhagan\r\n<127.0.0.1> EXEC /bin/sh -c 'echo ~dhagan && sleep 0'\r\n<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p \"` echo /home/dhagan/.ansible/tmp/ansible-tmp-1554239231.41-236536755185355 `\" && echo ansible-tmp-1554239231.41-236536755185355=\"` echo /home/dhagan/.ansible/tmp/ansible-tmp-1554239231.41-236536755185355 `\" ) && sleep 0'\r\nUsing module file 
/home/dhagan/.local/lib/python2.7/site-packages/ansible/modules/crypto/openssl_privatekey.py\r\n<127.0.0.1> PUT /home/dhagan/.ansible/tmp/ansible-local-12205ml_9CV/tmpD_PkzS TO /home/dhagan/.ansible/tmp/ansible-tmp-1554239231.41-236536755185355/AnsiballZ_openssl_privatekey.py\r\n<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/dhagan/.ansible/tmp/ansible-tmp-1554239231.41-236536755185355/ /home/dhagan/.ansible/tmp/ansible-tmp-1554239231.41-236536755185355/AnsiballZ_openssl_privatekey.py && sleep 0'\r\n<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /home/dhagan/.ansible/tmp/ansible-tmp-1554239231.41-236536755185355/AnsiballZ_openssl_privatekey.py && sleep 0'\r\n<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/dhagan/.ansible/tmp/ansible-tmp-1554239231.41-236536755185355/ > /dev/null 2>&1 && sleep 0'\r\nok: [localhost] => {\r\n \"changed\": false,\r\n \"filename\": \"test.key\",\r\n \"fingerprint\": {\r\n \"md5\": \"xxxx\",\r\n \"sha1\": \"xxxx\",\r\n \"sha224\": \"xxxx\",\r\n \"sha256\": \"xxxx\",\r\n \"sha384\": \"xxxx\",\r\n \"sha512\": \"xxxx\"\r\n },\r\n \"invocation\": {\r\n \"module_args\": {\r\n \"attributes\": null,\r\n \"backup\": null,\r\n \"cipher\": null,\r\n \"content\": null,\r\n \"delimiter\": null,\r\n \"directory_mode\": null,\r\n \"follow\": false,\r\n \"force\": false,\r\n \"group\": null,\r\n \"mode\": null,\r\n \"owner\": null,\r\n \"passphrase\": null,\r\n \"path\": \"test.key\",\r\n \"regexp\": null,\r\n \"remote_src\": null,\r\n \"selevel\": null,\r\n \"serole\": null,\r\n \"setype\": null,\r\n \"seuser\": null,\r\n \"size\": 2048,\r\n \"src\": null,\r\n \"state\": \"present\",\r\n \"type\": \"RSA\",\r\n \"unsafe_writes\": null\r\n }\r\n },\r\n \"size\": 2048,\r\n \"type\": \"RSA\"\r\n}\r\n\r\nTASK [create CSR if not present] ************************************************************************************************************************************************************************************************************\r\ntask path: /xxxx/test.yml:30\r\n<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: dhagan\r\n<127.0.0.1> EXEC /bin/sh -c 'echo ~dhagan && sleep 0'\r\n<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p \"` echo /home/dhagan/.ansible/tmp/ansible-tmp-1554239232.48-242780450892665 `\" && echo ansible-tmp-1554239232.48-242780450892665=\"` echo /home/dhagan/.ansible/tmp/ansible-tmp-1554239232.48-242780450892665 `\" ) && sleep 0'\r\nUsing module file /home/dhagan/.local/lib/python2.7/site-packages/ansible/modules/crypto/openssl_csr.py\r\n<127.0.0.1> PUT /home/dhagan/.ansible/tmp/ansible-local-12205ml_9CV/tmpOlkhLU TO /home/dhagan/.ansible/tmp/ansible-tmp-1554239232.48-242780450892665/AnsiballZ_openssl_csr.py\r\n<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/dhagan/.ansible/tmp/ansible-tmp-1554239232.48-242780450892665/ /home/dhagan/.ansible/tmp/ansible-tmp-1554239232.48-242780450892665/AnsiballZ_openssl_csr.py && sleep 0'\r\n<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /home/dhagan/.ansible/tmp/ansible-tmp-1554239232.48-242780450892665/AnsiballZ_openssl_csr.py && sleep 0'\r\n<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/dhagan/.ansible/tmp/ansible-tmp-1554239232.48-242780450892665/ > /dev/null 2>&1 && sleep 0'\r\nok: [localhost] => {\r\n \"basicConstraints\": null,\r\n \"changed\": false,\r\n \"extendedKeyUsage\": null,\r\n \"filename\": \"test.csr\",\r\n \"invocation\": {\r\n \"module_args\": {\r\n \"attributes\": null,\r\n \"backup\": null,\r\n \"basicConstraints\": null,\r\n \"basicConstraints_critical\": false,\r\n \"commonName\": \"xxxx\",\r\n 
\"common_name\": \"xxxx\",\r\n \"content\": null,\r\n \"countryName\": null,\r\n \"delimiter\": null,\r\n \"digest\": \"sha256\",\r\n \"directory_mode\": null,\r\n \"emailAddress\": null,\r\n \"extendedKeyUsage\": null,\r\n \"extendedKeyUsage_critical\": false,\r\n \"follow\": false,\r\n \"force\": false,\r\n \"group\": null,\r\n \"keyUsage\": null,\r\n \"keyUsage_critical\": false,\r\n \"localityName\": null,\r\n \"mode\": null,\r\n \"ocspMustStaple\": false,\r\n \"ocspMustStaple_critical\": false,\r\n \"organizationName\": null,\r\n \"organizationalUnitName\": null,\r\n \"owner\": null,\r\n \"path\": \"test.csr\",\r\n \"privatekey_passphrase\": null,\r\n \"privatekey_path\": \"test.key\",\r\n \"regexp\": null,\r\n \"remote_src\": null,\r\n \"selevel\": null,\r\n \"serole\": null,\r\n \"setype\": null,\r\n \"seuser\": null,\r\n \"src\": null,\r\n \"state\": \"present\",\r\n \"stateOrProvinceName\": null,\r\n \"subject\": null,\r\n \"subjectAltName\": [\r\n \"DNS:xxxx\"\r\n ],\r\n \"subjectAltName_critical\": false,\r\n \"subject_alt_name\": \"DNS:xxxx\",\r\n \"unsafe_writes\": null,\r\n \"version\": 1\r\n }\r\n },\r\n \"keyUsage\": null,\r\n \"ocspMustStaple\": false,\r\n \"privatekey\": \"test.key\",\r\n \"subject\": [\r\n [\r\n \"CN\",\r\n \"xxxx\"\r\n ]\r\n ],\r\n \"subjectAltName\": [\r\n \"DNS:xxxx\"\r\n ]\r\n}\r\n\r\nTASK [acme request] *************************************************************************************************************************************************************************************************************************\r\ntask path: /xxx/test.yml:37\r\n<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: dhagan\r\n<127.0.0.1> EXEC /bin/sh -c 'echo ~dhagan && sleep 0'\r\n<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p \"` echo /home/dhagan/.ansible/tmp/ansible-tmp-1554239233.51-137769391892495 `\" && echo ansible-tmp-1554239233.51-137769391892495=\"` echo /home/dhagan/.ansible/tmp/ansible-tmp-1554239233.51-137769391892495 `\" ) && sleep 0'\r\nUsing module file /home/dhagan/.local/lib/python2.7/site-packages/ansible/modules/crypto/acme/acme_certificate.py\r\n<127.0.0.1> PUT /home/dhagan/.ansible/tmp/ansible-local-12205ml_9CV/tmpgY1bDo TO /home/dhagan/.ansible/tmp/ansible-tmp-1554239233.51-137769391892495/AnsiballZ_acme_certificate.py\r\n<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/dhagan/.ansible/tmp/ansible-tmp-1554239233.51-137769391892495/ /home/dhagan/.ansible/tmp/ansible-tmp-1554239233.51-137769391892495/AnsiballZ_acme_certificate.py && sleep 0'\r\n<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /home/dhagan/.ansible/tmp/ansible-tmp-1554239233.51-137769391892495/AnsiballZ_acme_certificate.py && sleep 0'\r\n<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/dhagan/.ansible/tmp/ansible-tmp-1554239233.51-137769391892495/ > /dev/null 2>&1 && sleep 0'\r\nchanged: [localhost] => {\r\n \"account_uri\": \"https://acme-staging-v02.api.letsencrypt.org/acme/acct/xxxx\",\r\n \"authorizations\": {\r\n \"xxxx\": {\r\n \"challenges\": [\r\n {\r\n \"status\": \"pending\",\r\n \"token\": \"xxxx\",\r\n \"type\": \"dns-01\",\r\n \"uri\": \"https://acme-staging.api.letsencrypt.org/acme/challenge/xxxx/xxxx\"\r\n },\r\n {\r\n \"status\": \"pending\",\r\n \"token\": \"xxxx\",\r\n \"type\": \"tls-alpn-01\",\r\n \"uri\": \"https://acme-staging.api.letsencrypt.org/acme/challenge/xxxx/xxxx\"\r\n },\r\n {\r\n \"status\": \"pending\",\r\n \"token\": \"xxxx\",\r\n \"type\": \"http-01\",\r\n \"uri\": \"https://acme-staging.api.letsencrypt.org/acme/challenge/xxxx/xxxx\"\r\n }\r\n ],\r\n 
\"combinations\": [\r\n [\r\n 1\r\n ],\r\n [\r\n 2\r\n ],\r\n [\r\n 0\r\n ]\r\n ],\r\n \"expires\": \"2019-04-09T21:07:15Z\",\r\n \"identifier\": {\r\n \"type\": \"dns\",\r\n \"value\": \"xxxx\"\r\n },\r\n \"status\": \"pending\",\r\n \"uri\": \"https://acme-staging.api.letsencrypt.org/acme/authz/xxxx\"\r\n }\r\n },\r\n \"cert_days\": -1,\r\n \"challenge_data\": {\r\n \"test-name.dhagan.dev.nsoc.state911.us\": {\r\n \"dns-01\": {\r\n \"record\": \"_acme-challenge.xxxx\",\r\n \"resource\": \"_acme-challenge\",\r\n \"resource_value\": \"xxxx\"\r\n },\r\n \"http-01\": {\r\n \"resource\": \".well-known/acme-challenge/xxxx\",\r\n \"resource_value\": \"xxxx\"\r\n },\r\n \"tls-alpn-01\": {\r\n \"resource\": \"xxxx\",\r\n \"resource_value\": \"xxxx\"\r\n }\r\n }\r\n },\r\n \"challenge_data_dns\": {\r\n \"_acme-challenge.xxxx\": [\r\n \"xxxx\"\r\n ]\r\n },\r\n \"changed\": true,\r\n \"finalize_uri\": null,\r\n \"invocation\": {\r\n \"module_args\": {\r\n \"account_email\": null,\r\n \"account_key_content\": null,\r\n \"account_key_src\": \"test_key.pem\",\r\n \"account_uri\": \"https://acme-staging-v02.api.letsencrypt.org/acme/acct/xxxx\",\r\n \"acme_directory\": \"https://acme-staging.api.letsencrypt.org/directory\",\r\n \"acme_version\": 1,\r\n \"agreement\": null,\r\n \"chain_dest\": null,\r\n \"challenge\": \"dns-01\",\r\n \"csr\": \"test.csr\",\r\n \"data\": null,\r\n \"deactivate_authzs\": false,\r\n \"dest\": \"test.cert\",\r\n \"force\": false,\r\n \"fullchain_dest\": null,\r\n \"modify_account\": false,\r\n \"remaining_days\": 10,\r\n \"select_crypto_backend\": \"auto\",\r\n \"terms_agreed\": true,\r\n \"validate_certs\": true\r\n }\r\n },\r\n \"order_uri\": null\r\n}\r\n\r\nTASK [meet challenge requirements] **********************************************************************************************************************************************************************************************************\r\ntask path: /xxxx/test.yml:49\r\n<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: dhagan\r\n<127.0.0.1> EXEC /bin/sh -c 'echo ~dhagan && sleep 0'\r\n<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p \"` echo /home/dhagan/.ansible/tmp/ansible-tmp-1554239236.67-178715692795110 `\" && echo ansible-tmp-1554239236.67-178715692795110=\"` echo /home/dhagan/.ansible/tmp/ansible-tmp-1554239236.67-178715692795110 `\" ) && sleep 0'\r\nUsing module file /home/dhagan/.local/lib/python2.7/site-packages/ansible/modules/cloud/amazon/route53.py\r\n<127.0.0.1> PUT /home/dhagan/.ansible/tmp/ansible-local-12205ml_9CV/tmpCRBtC3 TO /home/dhagan/.ansible/tmp/ansible-tmp-1554239236.67-178715692795110/AnsiballZ_route53.py\r\n<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/dhagan/.ansible/tmp/ansible-tmp-1554239236.67-178715692795110/ /home/dhagan/.ansible/tmp/ansible-tmp-1554239236.67-178715692795110/AnsiballZ_route53.py && sleep 0'\r\n<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /home/dhagan/.ansible/tmp/ansible-tmp-1554239236.67-178715692795110/AnsiballZ_route53.py && sleep 0'\r\n<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/dhagan/.ansible/tmp/ansible-tmp-1554239236.67-178715692795110/ > /dev/null 2>&1 && sleep 0'\r\nchanged: [localhost] => {\r\n \"changed\": true,\r\n \"invocation\": {\r\n \"module_args\": {\r\n \"alias\": null,\r\n \"alias_evaluate_target_health\": false,\r\n \"alias_hosted_zone_id\": null,\r\n \"aws_access_key\": null,\r\n \"aws_secret_key\": null,\r\n \"ec2_url\": null,\r\n \"failover\": null,\r\n \"health_check\": null,\r\n \"hosted_zone_id\": null,\r\n \"identifier\": null,\r\n 
\"overwrite\": true,\r\n \"private_zone\": false,\r\n \"profile\": null,\r\n \"record\": \"_acme-challenge.xxxx\",\r\n \"region\": null,\r\n \"retry_interval\": \"500\",\r\n \"security_token\": null,\r\n \"state\": \"present\",\r\n \"ttl\": 60,\r\n \"type\": \"TXT\",\r\n \"validate_certs\": true,\r\n \"value\": [\r\n \"\\\"xxxx\\\"\"\r\n ],\r\n \"vpc_id\": null,\r\n \"wait\": true,\r\n \"wait_timeout\": 600,\r\n \"weight\": null,\r\n \"zone\": \"xxxx\"\r\n }\r\n }\r\n}\r\n\r\nTASK [acme certificate] *********************************************************************************************************************************************************************************************************************\r\ntask path: /xxxx/test.yml:62\r\n<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: dhagan\r\n<127.0.0.1> EXEC /bin/sh -c 'echo ~dhagan && sleep 0'\r\n<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p \"` echo /home/dhagan/.ansible/tmp/ansible-tmp-1554239274.52-105161436966429 `\" && echo ansible-tmp-1554239274.52-105161436966429=\"` echo /home/dhagan/.ansible/tmp/ansible-tmp-1554239274.52-105161436966429 `\" ) && sleep 0'\r\nUsing module file /home/dhagan/.local/lib/python2.7/site-packages/ansible/modules/crypto/acme/acme_certificate.py\r\n<127.0.0.1> PUT /home/dhagan/.ansible/tmp/ansible-local-12205ml_9CV/tmpvcA5GB TO /home/dhagan/.ansible/tmp/ansible-tmp-1554239274.52-105161436966429/AnsiballZ_acme_certificate.py\r\n<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/dhagan/.ansible/tmp/ansible-tmp-1554239274.52-105161436966429/ /home/dhagan/.ansible/tmp/ansible-tmp-1554239274.52-105161436966429/AnsiballZ_acme_certificate.py && sleep 0'\r\n<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /home/dhagan/.ansible/tmp/ansible-tmp-1554239274.52-105161436966429/AnsiballZ_acme_certificate.py && sleep 0'\r\n<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/dhagan/.ansible/tmp/ansible-tmp-1554239274.52-105161436966429/ > /dev/null 2>&1 && sleep 0'\r\nThe full traceback is:\r\nWARNING: The below traceback may *not* be related to the actual failure.\r\n File \"/tmp/ansible_acme_certificate_payload_B3NU6E/__main__.py\", line 931, in main\r\n client.get_certificate()\r\n File \"/tmp/ansible_acme_certificate_payload_B3NU6E/__main__.py\", line 824, in get_certificate\r\n if self.dest and write_file(self.module, self.dest, pem_cert.encode('utf8')):\r\n File \"/tmp/ansible_acme_certificate_payload_B3NU6E/ansible_acme_certificate_payload.zip/ansible/module_utils/acme.py\", line 138, in write_file\r\n raise ModuleFailException(\"Destination dir %s not writable\" % (os.path.dirname(dest)))\r\n\r\nfatal: [localhost]: FAILED! 
=> {\r\n \"changed\": false,\r\n \"invocation\": {\r\n \"module_args\": {\r\n \"account_email\": null,\r\n \"account_key_content\": null,\r\n \"account_key_src\": \"test_key.pem\",\r\n \"account_uri\": \"https://acme-staging-v02.api.letsencrypt.org/acme/acct/xxxx\",\r\n \"acme_directory\": \"https://acme-staging.api.letsencrypt.org/directory\",\r\n \"acme_version\": 1,\r\n \"agreement\": null,\r\n \"chain_dest\": null,\r\n \"challenge\": \"dns-01\",\r\n \"csr\": \"test.csr\",\r\n \"data\": {\r\n \"account_uri\": \"https://acme-staging-v02.api.letsencrypt.org/acme/acct/xxxx\",\r\n \"authorizations\": {\r\n \"test-name.dhagan.dev.nsoc.state911.us\": {\r\n \"challenges\": [\r\n {\r\n \"status\": \"pending\",\r\n \"token\": \"xxxx\",\r\n \"type\": \"dns-01\",\r\n \"uri\": \"https://acme-staging.api.letsencrypt.org/acme/challenge/xxxx/xxxx\"\r\n },\r\n {\r\n \"status\": \"pending\",\r\n \"token\": \"xxxx\",\r\n \"type\": \"tls-alpn-01\",\r\n \"uri\": \"https://acme-staging.api.letsencrypt.org/acme/challenge/xxxxx/xxxx\"\r\n },\r\n {\r\n \"status\": \"pending\",\r\n \"token\": \"xxxx\",\r\n \"type\": \"http-01\",\r\n \"uri\": \"https://acme-staging.api.letsencrypt.org/acme/challenge/xxxx/xxxx\"\r\n }\r\n ],\r\n \"combinations\": [\r\n [\r\n 1\r\n ],\r\n [\r\n 2\r\n ],\r\n [\r\n 0\r\n ]\r\n ],\r\n \"expires\": \"2019-04-09T21:07:15Z\",\r\n \"identifier\": {\r\n \"type\": \"dns\",\r\n \"value\": \"xxxx\"\r\n },\r\n \"status\": \"pending\",\r\n \"uri\": \"https://acme-staging.api.letsencrypt.org/acme/authz/xxxx\"\r\n }\r\n },\r\n \"cert_days\": -1,\r\n \"challenge_data\": {\r\n \"xxxx\": {\r\n \"dns-01\": {\r\n \"record\": \"_acme-challenge.xxxx\",\r\n \"resource\": \"_acme-challenge\",\r\n \"resource_value\": \"xxxx\"\r\n },\r\n \"http-01\": {\r\n \"resource\": \".well-known/acme-challenge/xxxx\",\r\n \"resource_value\": \"xxxx\"\r\n },\r\n \"tls-alpn-01\": {\r\n \"resource\": \"xxxx\",\r\n \"resource_value\": \"xxxx\"\r\n }\r\n }\r\n },\r\n \"challenge_data_dns\": {\r\n \"_acme-challenge.xxxx\": [\r\n \"xxxx\"\r\n ]\r\n },\r\n \"changed\": true,\r\n \"failed\": false,\r\n \"finalize_uri\": null,\r\n \"order_uri\": null\r\n },\r\n \"deactivate_authzs\": false,\r\n \"dest\": \"test.cert\",\r\n \"force\": false,\r\n \"fullchain_dest\": null,\r\n \"modify_account\": false,\r\n \"remaining_days\": 10,\r\n \"select_crypto_backend\": \"auto\",\r\n \"src\": \"test.csr\",\r\n \"terms_agreed\": false,\r\n \"validate_certs\": true\r\n }\r\n },\r\n \"msg\": \"Destination dir not writable\",\r\n \"other\": {}\r\n}\r\n to retry, use: --limit @/xxxx/test.retry\r\n\r\nPLAY RECAP **********************************************************************************************************************************************************************************************************************************\r\nlocalhost : ok=5 changed=2 unreachable=0 failed=1\r\n\r\nPlaybook run took 0 days, 0 hours, 1 minutes, 0 seconds\r\n```\r\n", "pr_html_url": "https://github.com/ansible/ansible/pull/54754", "file_loc": {"base_commit": "3694711a7e975324d52c258ab73a8f5e766a3f1c", "files": [{"path": "lib/ansible/module_utils/acme.py", "status": "modified", "Loc": {"(None, 'write_file', 79)": {"mod": [122, 124]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["lib/ansible/module_utils/acme.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": 
"huggingface", "repo_name": "transformers", "base_commit": "41750a6cff55e401364568868d619747de3db037", "iss_has_pr": 1, "iss_html_url": "https://github.com/huggingface/transformers/issues/3785", "iss_label": "wontfix\nCore: Encoder-Decoder", "title": "How to fine tune EncoderDecoder model for training a new corpus of data ?", "body": "is there any documentation available for the same?", "pr_html_url": "https://github.com/huggingface/transformers/pull/3383", "file_loc": {"base_commit": "41750a6cff55e401364568868d619747de3db037", "files": [{"path": "docs/source/index.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [91]}}}, {"path": "src/transformers/__init__.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [43], "mod": [270]}}}, {"path": "src/transformers/configuration_auto.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [27, 84]}}}, {"path": "src/transformers/modeling_auto.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [29, 88, 221]}}}, {"path": "src/transformers/modeling_bert.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [961]}}}, {"path": "src/transformers/modeling_encoder_decoder.py", "status": "modified", "Loc": {"('PreTrainedEncoderDecoder', None, 29)": {"add": [36], "mod": [29, 31, 33, 35, 38, 39, 44, 158, 159, 160]}, "('PreTrainedEncoderDecoder', '__init__', 38)": {"add": [41]}, "('PreTrainedEncoderDecoder', 'from_pretrained', 44)": {"add": [145, 150], "mod": [46, 47, 50, 54, 55, 58, 65, 75, 76, 78, 79, 80, 82, 83, 84, 85, 87, 88, 89, 91, 92, 94, 95, 96, 98, 99, 104, 105, 107, 111, 112, 115, 116, 117, 118, 119, 120, 121, 122, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 154]}, "(None, None, None)": {"mod": [19, 21, 23]}, "('PreTrainedEncoderDecoder', 'save_pretrained', 158)": {"mod": [162, 165, 166, 167, 169, 170, 171, 172, 173, 174, 176, 177, 178, 179, 180, 181, 183, 184, 185, 186, 187, 188, 189, 190, 192, 194, 195, 196, 197, 199, 200, 201, 202, 204, 205, 207, 208, 209, 210, 211, 213, 214]}, "('PreTrainedEncoderDecoder', 'forward', 204)": {"mod": [216, 217, 218, 219, 220, 221, 222, 223, 225, 226, 227, 228, 229, 231, 233, 234, 236]}}}, {"path": "src/transformers/modeling_utils.py", "status": "modified", "Loc": {"('PreTrainedModel', 'generate', 764)": {"mod": [1014]}}}, {"path": "src/transformers/utils_encoder_decoder.py", "status": "removed", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "3", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["src/transformers/configuration_auto.py", "src/transformers/__init__.py", "src/transformers/modeling_auto.py", "src/transformers/utils_encoder_decoder.py", "src/transformers/modeling_utils.py", "src/transformers/modeling_bert.py", "src/transformers/modeling_encoder_decoder.py"], "doc": ["docs/source/index.rst"], "test": [], "config": [], "asset": []}}, {"organization": "ansible", "repo_name": "ansible", "base_commit": "2908a2c32a81fca78277a22f15fa8e3abe75e092", "iss_has_pr": 1, "iss_html_url": "https://github.com/ansible/ansible/issues/71517", "iss_label": "easyfix\npython3\nmodule\nsupport:core\nbug\nhas_pr\nP3\nsystem\naffects_2.9", "title": "Reboot module doesn't work with async", "body": "\r\n\r\n\r\n\r\n##### SUMMARY\r\nThe `reboot` module does not work with `async`, `poll`, and `async_status`. Suppose I have 10 nodes to reboot, but I can only set `fork` to 2. 
The `reboot` module will reboot 2 nodes at a time. I tried using `async`, `poll`, and `async_status` to kick off the reboots on the 10 nodes, 2 at a time, and then poll for the results. `async` and `poll` seem to do nothing on the `reboot` module as the behavior remains the same as without them.\r\n\r\n##### ISSUE TYPE\r\n- Bug Report\r\n\r\n##### COMPONENT NAME\r\n`reboot` module\r\n\r\n##### ANSIBLE VERSION\r\n\r\n```paste below\r\nansible 2.9.12\r\n config file = /etc/ansible/ansible.cfg\r\n configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']\r\n ansible python module location = /usr/lib64/python3.6/site-packages/ansible\r\n executable location = /usr/bin/ansible\r\n python version = 3.6.8 (default, May 2 2019, 19:37:42) [GCC 4.4.7 20120313 (Red Hat 4.4.7-23)]\r\n```\r\n\r\n##### CONFIGURATION\r\n\r\n```paste below\r\nANSIBLE_PIPELINING(/etc/ansible/ansible.cfg) = True\r\nANSIBLE_SSH_ARGS(/etc/ansible/ansible.cfg) = -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null\r\nANSIBLE_SSH_RETRIES(/etc/ansible/ansible.cfg) = 2\r\nCOMMAND_WARNINGS(/etc/ansible/ansible.cfg) = False\r\nDEFAULT_FORKS(/etc/ansible/ansible.cfg) = 2\r\nDEFAULT_GATHERING(/etc/ansible/ansible.cfg) = explicit\r\nDEFAULT_TIMEOUT(/etc/ansible/ansible.cfg) = 40\r\nHOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False\r\nRETRY_FILES_ENABLED(/etc/ansible/ansible.cfg) = False\r\n```\r\n\r\n##### OS / ENVIRONMENT\r\n\r\n\r\n\r\n##### STEPS TO REPRODUCE\r\n\r\nDescribed in the summary\r\n\r\n\r\n```yaml\r\n\r\n```\r\n\r\n\r\n\r\n##### EXPECTED RESULTS\r\n\r\nI expect the `reboot` module to start the reboot on 2 nodes, then move on to something else (like start the reboot on another 2 nodes), then come back to check on the results of the reboots by using `async`, `poll`, and `async_status`.\r\n\r\n##### ACTUAL RESULTS\r\n\r\nThe `reboot` module ignores `async` and `poll`.\r\n\r\n\r\n```paste below\r\n\r\n```\r\n", "pr_html_url": "https://github.com/ansible/ansible/pull/80017", "file_loc": {"base_commit": "2908a2c32a81fca78277a22f15fa8e3abe75e092", "files": [{"path": "lib/ansible/plugins/action/reboot.py", "status": "modified", "Loc": {"('ActionModule', 'run', 409)": {"mod": [411]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["lib/ansible/plugins/action/reboot.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "c482b5727e3bd98b6f9780e51615791e413d542d", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/29916", "iss_label": "Enhancement\nIO HDF5", "title": "HDF5: empty groups and keys", "body": "Hi,\r\n\r\nWith some of the hdf5 files I have, `pandas.HDFStore.groups()` returns an empty list. (as does `.keys()` which iterates over the groups). 
However, the data are accessible via `.get()` or `.get_node()`.\r\n\r\nThis is related to #21543 and #21372 where the `.groups()` logic was changed, in particular using `self._handle.walk_groups()` instead of `self._handle.walk_nodes()`, now to be found here:\r\nhttps://github.com/pandas-dev/pandas/blob/ea2e26ae7d700d7fd363ea5bfc05d2fe3fb8a5ee/pandas/io/pytables.py#L1212\r\n\r\n\r\n#### Current Output\r\n\r\n```python\r\n>>> hdf.groups()\r\n[]\r\n```\r\n```python\r\n>>> hdf.keys()\r\n[]\r\n```\r\n\r\n#### Expected Output\r\n\r\nList of groups and keys as visible with e.g. `h5dump`.\r\n**Note:** Changing the aforementioned line back to use `.walk_nodes()` fixes the issue and lists the groups and keys properly:\r\n\r\n```python\r\n>>> hdf.groups()\r\n[/Data/Table Layout (Table(69462,), zlib(4)) ''\r\n description := {\r\n...\r\n/Data/Array Layout/2D Parameters/Data Parameters (Table(15,)) ''\r\n description := {\r\n \"mnemonic\": StringCol(itemsize=8, shape=(), dflt=b'', pos=0),\r\n \"description\": StringCol(itemsize=48, shape=(), dflt=b'', pos=1),\r\n \"isError\": Int64Col(shape=(), dflt=0, pos=2),\r\n \"units\": StringCol(itemsize=7, shape=(), dflt=b'', pos=3),\r\n \"category\": StringCol(itemsize=31, shape=(), dflt=b'', pos=4)}\r\n byteorder := 'little'\r\n chunkshape := (642,)]]\r\n```\r\n```python\r\n>>> hdf.keys()\r\n['/Data/Table Layout',\r\n '/Metadata/Data Parameters',\r\n '/Metadata/Experiment Notes',\r\n '/Metadata/Experiment Parameters',\r\n '/Metadata/Independent Spatial Parameters',\r\n '/Metadata/_record_layout',\r\n '/Data/Array Layout/Layout Description',\r\n '/Data/Array Layout/1D Parameters/Data Parameters',\r\n '/Data/Array Layout/2D Parameters/Data Parameters']\r\n```\r\n\r\n#### Fix\r\n\r\nOne solution would be (I guess) to revert #21543, another to fix at least `.keys()` to use `._handle.walk_nodes()` instead of `.groups()` in\r\nhttps://github.com/pandas-dev/pandas/blob/ea2e26ae7d700d7fd363ea5bfc05d2fe3fb8a5ee/pandas/io/pytables.py#L562\r\n\r\nIt could also be a bug in `pytables`. (A minimal sketch of the `walk_nodes` workaround is included below, after the versions output.)\r\n\r\n#### Problem background\r\n\r\nI was trying to figure out why some hdf5 files open fine with `pandas` but fail with `dask`.\r\nThe reason is that `dask` allows wildcards and iterates over the keys to find valid ones. If `.keys()` is empty, reading the files with `dask` fails.\r\n\r\n#### Output of ``pd.show_versions()``\r\n\r\n
    \r\n\r\nINSTALLED VERSIONS\r\n------------------\r\ncommit : None\r\npython : 3.7.3.final.0\r\npython-bits : 64\r\nOS : Linux\r\nOS-release : 3.10.0-957.27.2.el7.x86_64\r\nmachine : x86_64\r\nprocessor : x86_64\r\nbyteorder : little\r\nLC_ALL : None\r\nLANG : C\r\nLOCALE : en_US.UTF-8\r\n\r\npandas : 0.25.3\r\nnumpy : 1.17.3\r\npytz : 2019.3\r\ndateutil : 2.8.1\r\npip : 19.3.1\r\nsetuptools : 42.0.1.post20191125\r\nCython : None\r\npytest : 5.0.1\r\nhypothesis : None\r\nsphinx : None\r\nblosc : None\r\nfeather : None\r\nxlsxwriter : None\r\nlxml.etree : 4.4.2\r\nhtml5lib : None\r\npymysql : None\r\npsycopg2 : None\r\njinja2 : 2.10.3\r\nIPython : 7.10.0\r\npandas_datareader: None\r\nbs4 : None\r\nbottleneck : None\r\nfastparquet : None\r\ngcsfs : None\r\nlxml.etree : 4.4.2\r\nmatplotlib : 3.1.2\r\nnumexpr : 2.7.0\r\nodfpy : None\r\nopenpyxl : None\r\npandas_gbq : None\r\npyarrow : None\r\npytables : None\r\ns3fs : None\r\nscipy : 1.3.2\r\nsqlalchemy : None\r\ntables : 3.6.1\r\nxarray : 0.14.1\r\nxlrd : None\r\nxlwt : None\r\nxlsxwriter : None\r\n\r\n
    \r\n", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/32723", "file_loc": {"base_commit": "c482b5727e3bd98b6f9780e51615791e413d542d", "files": [{"path": "doc/source/whatsnew/v1.1.0.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [964]}}}, {"path": "pandas/io/pytables.py", "status": "modified", "Loc": {"('HDFStore', 'keys', 583)": {"add": [586, 590], "mod": [592]}, "('HDFStore', None, 442)": {"mod": [583]}}}, {"path": "pandas/tests/io/pytables/test_store.py", "status": "modified", "Loc": {"('TestHDFStore', None, 66)": {"add": [343]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["pandas/io/pytables.py"], "doc": ["doc/source/whatsnew/v1.1.0.rst"], "test": ["pandas/tests/io/pytables/test_store.py"], "config": [], "asset": []}}, {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "f3372a3753643fea601564c01fcf65cc25a2db62", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/4667", "iss_label": "bug", "title": "List of fields incorrectly accessed for dataclass items", "body": "### Description\r\n\r\nIf I make a `dataclass` item and want to export to csv, I get this error:\r\n\r\n```\r\n...\r\n File \"/home/tadej/miniconda3/envs/main/lib/python3.7/site-packages/scrapy/exporters.py\", line 251, in _write_headers_and_set_fields_to_export\r\n self.fields_to_export = list(item.fields.keys())\r\nAttributeError: 'CompanyItem' object has no attribute 'fields'\r\n```\r\nThe problem stems from here\r\n\r\nhttps://github.com/scrapy/scrapy/blob/master/scrapy/exporters.py#L243-L253\r\n\r\nThere should be an additional if case checking if the item is of type dataclass, and then accessing the fields differently, perhaps as\r\n```python\r\n[field.name for field in fields(item)]\r\n```\r\n", "pr_html_url": "https://github.com/scrapy/scrapy/pull/4668", "file_loc": {"base_commit": "f3372a3753643fea601564c01fcf65cc25a2db62", "files": [{"path": "scrapy/exporters.py", "status": "modified", "Loc": {"('CsvItemExporter', '_write_headers_and_set_fields_to_export', 243)": {"mod": [246, 247, 248, 249, 250, 251]}}}, {"path": "tests/test_exporters.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [10, 25, 159, 168, 199, 221, 311, 414, 451, 529]}, "('BaseItemExporterTest', None, 26)": {"add": [27]}, "('BaseItemExporterTest', 'setUp', 28)": {"mod": [29]}, "('BaseItemExporterTest', '_assert_expected_item', 39)": {"mod": [42]}, "('BaseItemExporterTest', 'test_export_dict_item', 65)": {"mod": [66]}, "('BaseItemExporterTest', 'test_serialize_field', 68)": {"mod": [69, 72]}, "('BaseItemExporterTest', 'test_field_custom_serializer', 84)": {"mod": [85, 86, 88, 89, 90, 92, 94, 95, 96]}, "('PythonItemExporterTest', 'test_nested_item', 107)": {"mod": [108, 110]}, "('PythonItemExporterTest', 'test_export_list', 121)": {"mod": [122, 123, 124]}, "('PythonItemExporterTest', 'test_export_item_dict_list', 134)": {"mod": [135, 137]}, "('PythonItemExporterTest', 'test_export_binary', 147)": {"mod": [149]}, "('PickleItemExporterTest', 'test_export_multiple_items', 177)": {"mod": [178, 179, 187, 188]}, "('CsvItemExporterTest', 'test_header_export_all', 245)": {"mod": [248]}, "('CsvItemExporterTest', 'test_header_export_all_dict', 252)": {"mod": [254]}, "('CsvItemExporterTest', 'test_header_export_single_field', 258)": {"mod": [259]}, "('CsvItemExporterTest', 'test_header_export_two_items', 266)": 
{"mod": [267]}, "('CsvItemExporterTest', 'test_header_no_header_line', 277)": {"mod": [278]}, "('XmlItemExporterTest', 'xmltuple', 318)": {"mod": [321, 322]}, "('XmlItemExporterTest', 'test_multivalued_fields', 346)": {"mod": [348, 349, 350, 351, 352]}, "('XmlItemExporterTest', 'test_nested_item', 355)": {"mod": [356, 358]}, "('XmlItemExporterTest', 'test_nested_list_item', 378)": {"mod": [379, 381]}, "('JsonLinesItemExporterTest', '_check_output', 422)": {"mod": [424]}, "('JsonLinesItemExporterTest', 'test_nested_item', 426)": {"mod": [427, 429]}, "('JsonItemExporterTest', '_check_output', 459)": {"mod": [461]}, "('JsonItemExporterTest', 'assertTwoItemsExported', 463)": {"mod": [469]}, "('JsonItemExporterTest', 'test_two_dict_items', 474)": {"mod": [475]}, "('JsonItemExporterTest', 'test_nested_item', 477)": {"mod": [478, 479, 480, 485]}, "('JsonItemExporterTest', 'test_nested_dict_item', 488)": {"mod": [490]}, "('CustomItemExporterTest', None, 509)": {"mod": [509]}, "('CustomItemExporterTest', 'test_exporter_custom_serializer', 511)": {"mod": [519, 522, 523]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["scrapy/exporters.py"], "doc": [], "test": ["tests/test_exporters.py"], "config": [], "asset": []}}, {"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "2b8e056d5d4d14665b88a01c41356253c94b9259", "iss_has_pr": 1, "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/322", "iss_label": "enhancement\ngood first issue", "title": "Print and store how many tokens were used in memory/logs", "body": "In this way, we can also store this to benchmark results.\r\n\r\nA huge increase in tokens will not be worth a minor improvement in benchmark resultss.", "pr_html_url": "https://github.com/AntonOsika/gpt-engineer/pull/438", "file_loc": {"base_commit": "2b8e056d5d4d14665b88a01c41356253c94b9259", "files": [{"path": "gpt_engineer/ai.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [4, 7, 11, 56]}, "('AI', 'next', 34)": {"add": [54]}, "('AI', None, 12)": {"mod": [17, 34]}, "('AI', 'start', 17)": {"mod": [23]}}}, {"path": "gpt_engineer/main.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [63]}}}, {"path": "gpt_engineer/steps.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0, 37]}, "(None, 'clarify', 48)": {"add": [73], "mod": [55]}, "(None, 'respec', 107)": {"add": [121], "mod": [111]}, "(None, 'gen_entrypoint', 212)": {"add": [226]}, "(None, 'simple_gen', 41)": {"mod": [43]}, "(None, 'gen_spec', 90)": {"mod": [100]}, "(None, 'gen_unit_tests', 128)": {"mod": [138]}, "(None, 'gen_clarified_code', 146)": {"mod": [154]}, "(None, 'gen_code', 160)": {"mod": [169]}, "(None, 'use_feedback', 236)": {"mod": [243]}, "(None, 'fix_code', 248)": {"mod": [256]}}}, {"path": "pyproject.toml", "status": "modified", "Loc": {"(None, None, None)": {"add": [21]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["gpt_engineer/ai.py", "gpt_engineer/main.py", "gpt_engineer/steps.py"], "doc": [], "test": [], "config": ["pyproject.toml"], "asset": []}}, {"organization": "huggingface", "repo_name": "transformers", "base_commit": "6dda14dc47d82f0e32df05fea8ba6444ba52b90a", "iss_has_pr": 1, "iss_html_url": 
"https://github.com/huggingface/transformers/issues/20058", "iss_label": "", "title": "Push to Hub fails with `model_name`", "body": "### System Info\r\n\r\n- `transformers` version: 4.25.0.dev0\r\n- Platform: Linux-5.15.0-48-generic-x86_64-with-glibc2.31\r\n- Python version: 3.9.13\r\n- Huggingface_hub version: 0.10.1\r\n- PyTorch version (GPU?): 1.13.0+cu117 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: yes\r\n- Using distributed or parallel set-up in script?: no\r\n\r\n\r\n### Who can help?\r\n\r\n@sanchit-gandhi \r\n\r\n### Information\r\n\r\n- [ ] The official example scripts\r\n- [X] My own modified scripts\r\n\r\n### Tasks\r\n\r\n- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\r\n- [ ] My own task or dataset (give details below)\r\n\r\n### Reproduction\r\n\r\n```python\r\nfrom datasets import load_dataset, DatasetDict\r\n\r\ncommon_voice = DatasetDict()\r\n\r\n#common_voice[\"train\"] = load_dataset(\"mozilla-foundation/common_voice_11_0\", \"sv-SE\", split=\"train+validation\", use_auth_token=True)\r\n#common_voice[\"test\"] = load_dataset(\"mozilla-foundation/common_voice_11_0\", \"sv-SE\", split=\"test\", use_auth_token=True)\r\n\r\ncommon_voice[\"train\"] = load_dataset(\"mozilla-foundation/common_voice_11_0\", \"sv-SE\", split=\"train[:1%]+validation[:1%]\", use_auth_token=True)\r\ncommon_voice[\"test\"] = load_dataset(\"mozilla-foundation/common_voice_11_0\", \"sv-SE\", split=\"test[:1%]\", use_auth_token=True)\r\n\r\nprint(common_voice)\r\n\r\ncommon_voice = common_voice.remove_columns([\"accent\", \"age\", \"client_id\", \"down_votes\", \"gender\", \"locale\", \"path\", \"segment\", \"up_votes\"])\r\n\r\nprint(common_voice)\r\n\r\nfrom transformers import WhisperFeatureExtractor\r\n\r\nfeature_extractor = WhisperFeatureExtractor.from_pretrained(\"openai/whisper-small\")\r\n\r\nfrom transformers import WhisperTokenizer\r\n\r\ntokenizer = WhisperTokenizer.from_pretrained(\"openai/whisper-small\", language=\"swedish\", task=\"transcribe\")\r\n\r\nfrom transformers import WhisperProcessor\r\n\r\nprocessor = WhisperProcessor.from_pretrained(\"openai/whisper-small\", language=\"swedish\", task=\"transcribe\")\r\n\r\nprint(common_voice[\"train\"][0])\r\n\r\nfrom datasets import Audio\r\n\r\ncommon_voice = common_voice.cast_column(\"audio\", Audio(sampling_rate=16000))\r\n\r\n\r\nprint(common_voice[\"train\"][0])\r\n\r\ndef prepare_dataset(batch):\r\n # load and resample audio data from 48 to 16kHz\r\n audio = batch[\"audio\"]\r\n\r\n # compute log-Mel input features from input audio array \r\n batch[\"input_features\"] = feature_extractor(audio[\"array\"], sampling_rate=audio[\"sampling_rate\"]).input_features[0]\r\n\r\n # encode target text to label ids \r\n batch[\"labels\"] = tokenizer(batch[\"sentence\"]).input_ids\r\n return batch\r\n\r\ncommon_voice = common_voice.map(prepare_dataset, remove_columns=common_voice.column_names[\"train\"], num_proc=1)\r\n\r\nimport torch\r\n\r\nfrom dataclasses import dataclass\r\nfrom typing import Any, Dict, List, Union\r\n\r\n@dataclass\r\nclass DataCollatorSpeechSeq2SeqWithPadding:\r\n processor: Any\r\n\r\n def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:\r\n # split inputs and labels since they have to be of different lengths and need different padding methods\r\n # first treat the audio 
inputs by simply returning torch tensors\r\n input_features = [{\"input_features\": feature[\"input_features\"]} for feature in features]\r\n batch = self.processor.feature_extractor.pad(input_features, return_tensors=\"pt\")\r\n\r\n # get the tokenized label sequences\r\n label_features = [{\"input_ids\": feature[\"labels\"]} for feature in features]\r\n # pad the labels to max length\r\n labels_batch = self.processor.tokenizer.pad(label_features, return_tensors=\"pt\")\r\n\r\n # replace padding with -100 to ignore loss correctly\r\n labels = labels_batch[\"input_ids\"].masked_fill(labels_batch.attention_mask.ne(1), -100)\r\n\r\n # if bos token is appended in previous tokenization step,\r\n # cut bos token here as it's append later anyways\r\n if (labels[:, 0] == self.processor.tokenizer.bos_token_id).all().cpu().item():\r\n labels = labels[:, 1:]\r\n\r\n batch[\"labels\"] = labels\r\n\r\n return batch\r\n\r\n\"\"\"Let's initialise the data collator we've just defined:\"\"\"\r\n\r\ndata_collator = DataCollatorSpeechSeq2SeqWithPadding(processor=processor)\r\n\r\nimport evaluate\r\n\r\nmetric = evaluate.load(\"wer\")\r\n\r\ndef compute_metrics(pred):\r\n pred_ids = pred.predictions\r\n label_ids = pred.label_ids\r\n\r\n # replace -100 with the pad_token_id\r\n label_ids[label_ids == -100] = tokenizer.pad_token_id\r\n\r\n # we do not want to group tokens when computing the metrics\r\n pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True)\r\n label_str = tokenizer.batch_decode(label_ids, skip_special_tokens=True)\r\n\r\n wer = 100 * metric.compute(predictions=pred_str, references=label_str)\r\n\r\n return {\"wer\": wer}\r\n\r\nfrom transformers import WhisperForConditionalGeneration\r\n\r\nmodel = WhisperForConditionalGeneration.from_pretrained(\"openai/whisper-small\")\r\n\r\nmodel.config.forced_decoder_ids = None\r\nmodel.config.suppress_tokens = []\r\n\r\nfrom transformers import Seq2SeqTrainingArguments\r\n\r\ntraining_args = Seq2SeqTrainingArguments(\r\n output_dir=\"./whisper-small-sv-test2\", # change to a repo name of your choice\r\n per_device_train_batch_size=16,\r\n gradient_accumulation_steps=1, # increase by 2x for every 2x decrease in batch size\r\n learning_rate=1e-5,\r\n warmup_steps=500,\r\n max_steps=10,\r\n gradient_checkpointing=True,\r\n fp16=True,\r\n group_by_length=True,\r\n evaluation_strategy=\"steps\",\r\n per_device_eval_batch_size=8,\r\n predict_with_generate=True,\r\n generation_max_length=225,\r\n save_steps=1000,\r\n eval_steps=1000,\r\n logging_steps=25,\r\n report_to=[\"tensorboard\"],\r\n load_best_model_at_end=True,\r\n metric_for_best_model=\"wer\",\r\n greater_is_better=False,\r\n push_to_hub=True,\r\n)\r\n\r\nfrom transformers import Seq2SeqTrainer\r\n\r\ntrainer = Seq2SeqTrainer(\r\n args=training_args,\r\n model=model,\r\n train_dataset=common_voice[\"train\"],\r\n eval_dataset=common_voice[\"test\"],\r\n data_collator=data_collator,\r\n compute_metrics=compute_metrics,\r\n tokenizer=processor.feature_extractor,\r\n)\r\n\r\ntrainer.train()\r\n\r\n\"\"\"Our best WER is 32.0% - not bad for 8h of training data! 
We can submit our checkpoint to the [`hf-speech-bench`](https://huggingface.co/spaces/huggingface/hf-speech-bench) on push by setting the appropriate key-word arguments (kwargs):\"\"\"\r\n\r\nkwargs = {\r\n \"dataset_tags\": \"mozilla-foundation/common_voice_11_0\",\r\n \"dataset\": \"Common Voice 11.0\", # a 'pretty' name for the training dataset\r\n \"language\": \"sv\",\r\n #\"model_name\": \"WhisperSmallSwedishBirgerMoell\", # a 'pretty' name for our model\r\n \"finetuned_from\": \"openai/whisper-small\",\r\n \"tasks\": \"automatic-speech-recognition\",\r\n \"tags\": \"hf-asr-leaderboard\",\r\n}\r\n\r\ntrainer.push_to_hub(**kwargs)\r\n\r\nfrom transformers import pipeline\r\nimport gradio as gr\r\n\r\npipe = pipeline(model=\"birgermoell/whisper-small-sv-test2\") # change to \"your-username/the-name-you-picked\"\r\n\r\ndef transcribe(audio):\r\n text = pipe(audio)[\"text\"]\r\n return text\r\n\r\niface = gr.Interface(\r\n fn=transcribe, \r\n inputs=gr.Audio(source=\"microphone\", type=\"filepath\"), \r\n outputs=\"text\",\r\n title=\"Whisper Small SV\",\r\n description=\"Realtime demo for Swedish speech recognition using a fine-tuned Whisper small model.\",\r\n)\r\n\r\niface.launch()\r\n\r\n```\r\n\r\n\r\n### Expected behavior\r\n\r\nThe following script is a downloaded version of the colab notebook that follows the whisper fine-tuning tutorial.\r\nhttps://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/fine_tune_whisper.ipynb\r\n\r\nOne edit was that I removed the model name since I had an issue that it was complaining about two model names that made it impossible to upload. The script just runs on 1% of the dataset on 10 epochs.\r\n\r\nkwargs = {\r\n \"dataset_tags\": \"mozilla-foundation/common_voice_11_0\",\r\n \"dataset\": \"Common Voice 11.0\", # a 'pretty' name for the training dataset\r\n \"language\": \"sv\",\r\n #\"model_name\": \"WhisperSmallSwedishBirgerMoell\", # a 'pretty' name for our model\r\n \"finetuned_from\": \"openai/whisper-small\",\r\n \"tasks\": \"automatic-speech-recognition\",\r\n \"tags\": \"hf-asr-leaderboard\",\r\n}\r\n\r\nhttps://huggingface.co/birgermoell/whisper-small-sv-test2\r\n\r\nI also ran into similar issues when I trained a model on the whole dataset.\r\n\r\nhttps://huggingface.co/birgermoell/whisper-small-sv\r\n", "pr_html_url": "https://github.com/huggingface/transformers/pull/20117", "file_loc": {"base_commit": "6dda14dc47d82f0e32df05fea8ba6444ba52b90a", "files": [{"path": "src/transformers/models/clip/processing_clip.py", "status": "modified", "Loc": {"('CLIPProcessor', 'decode', 102)": {"add": [107]}}}, {"path": "src/transformers/models/flava/processing_flava.py", "status": "modified", "Loc": {"('FlavaProcessor', 'decode', 119)": {"add": [124]}}}, {"path": "src/transformers/models/layoutlmv2/processing_layoutlmv2.py", "status": "modified", "Loc": {"('LayoutLMv2Processor', 'decode', 155)": {"add": [160]}}}, {"path": "src/transformers/models/layoutlmv3/processing_layoutlmv3.py", "status": "modified", "Loc": {"('LayoutLMv3Processor', 'decode', 153)": {"add": [158]}}}, {"path": "src/transformers/models/layoutxlm/processing_layoutxlm.py", "status": "modified", "Loc": {"('LayoutXLMProcessor', 'decode', 155)": {"add": [160]}}}, {"path": "src/transformers/models/markuplm/processing_markuplm.py", "status": "modified", "Loc": {"('MarkupLMProcessor', 'decode', 135)": {"add": [140]}}}, {"path": "src/transformers/models/owlvit/processing_owlvit.py", "status": "modified", "Loc": {"('OwlViTProcessor', 'decode', 156)": {"add": [161]}}}, 
{"path": "src/transformers/models/vilt/processing_vilt.py", "status": "modified", "Loc": {"('ViltProcessor', 'decode', 103)": {"add": [108]}}}, {"path": "src/transformers/models/vision_text_dual_encoder/processing_vision_text_dual_encoder.py", "status": "modified", "Loc": {"('VisionTextDualEncoderProcessor', None, 25)": {"add": [129]}}}, {"path": "src/transformers/models/x_clip/processing_x_clip.py", "status": "modified", "Loc": {"('XCLIPProcessor', 'decode', 104)": {"add": [109]}}}, {"path": "src/transformers/processing_utils.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [229]}}}, {"path": "tests/models/clip/test_processor_clip.py", "status": "modified", "Loc": {"('CLIPProcessorTest', 'test_tokenizer_decode', 178)": {"add": [189]}}}, {"path": "tests/models/flava/test_processor_flava.py", "status": "modified", "Loc": {"('FlavaProcessorTest', 'test_tokenizer_decode', 222)": {"add": [233]}}}, {"path": "tests/models/layoutlmv2/test_processor_layoutlmv2.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [21]}, "('LayoutLMv2ProcessorTest', None, 37)": {"add": [88, 135]}}}, {"path": "tests/models/layoutlmv3/test_processor_layoutlmv3.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [21, 148]}, "('LayoutLMv3ProcessorTest', None, 37)": {"add": [101]}}}, {"path": "tests/models/layoutxlm/test_processor_layoutxlm.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [21]}, "('LayoutXLMProcessorTest', None, 43)": {"add": [76, 128]}}}, {"path": "tests/models/markuplm/test_processor_markuplm.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [135]}}}, {"path": "tests/models/mctct/test_processor_mctct.py", "status": "modified", "Loc": {"('MCTCTProcessorTest', 'test_tokenizer_decode', 135)": {"add": [146]}}}, {"path": "tests/models/owlvit/test_processor_owlvit.py", "status": "modified", "Loc": {"('OwlViTProcessorTest', 'test_tokenizer_decode', 230)": {"add": [241]}}}, {"path": "tests/models/speech_to_text/test_processor_speech_to_text.py", "status": "modified", "Loc": {"('Speech2TextProcessorTest', 'test_tokenizer_decode', 135)": {"add": [146]}}}, {"path": "tests/models/vision_text_dual_encoder/test_processor_vision_text_dual_encoder.py", "status": "modified", "Loc": {"('VisionTextDualEncoderProcessorTest', 'test_tokenizer_decode', 159)": {"add": [170]}}}, {"path": "tests/models/wav2vec2/test_processor_wav2vec2.py", "status": "modified", "Loc": {"('Wav2Vec2ProcessorTest', 'test_tokenizer_decode', 128)": {"add": [139]}}}, {"path": "tests/models/wav2vec2_with_lm/test_processor_wav2vec2_with_lm.py", "status": "modified", "Loc": {"('Wav2Vec2ProcessorWithLMTest', None, 49)": {"add": [369]}}}, {"path": "tests/models/whisper/test_processor_whisper.py", "status": "modified", "Loc": {"('WhisperProcessorTest', 'test_tokenizer_decode', 107)": {"add": [118]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["src/transformers/models/clip/processing_clip.py", "src/transformers/models/layoutlmv2/processing_layoutlmv2.py", "src/transformers/models/vilt/processing_vilt.py", "src/transformers/models/x_clip/processing_x_clip.py", "src/transformers/models/markuplm/processing_markuplm.py", "src/transformers/models/flava/processing_flava.py", "src/transformers/processing_utils.py", "src/transformers/models/owlvit/processing_owlvit.py", 
"src/transformers/models/layoutlmv3/processing_layoutlmv3.py", "src/transformers/models/vision_text_dual_encoder/processing_vision_text_dual_encoder.py", "src/transformers/models/layoutxlm/processing_layoutxlm.py"], "doc": [], "test": ["tests/models/mctct/test_processor_mctct.py", "tests/models/layoutlmv2/test_processor_layoutlmv2.py", "tests/models/layoutlmv3/test_processor_layoutlmv3.py", "tests/models/owlvit/test_processor_owlvit.py", "tests/models/markuplm/test_processor_markuplm.py", "tests/models/layoutxlm/test_processor_layoutxlm.py", "tests/models/whisper/test_processor_whisper.py", "tests/models/speech_to_text/test_processor_speech_to_text.py", "tests/models/clip/test_processor_clip.py", "tests/models/flava/test_processor_flava.py", "tests/models/wav2vec2/test_processor_wav2vec2.py", "tests/models/vision_text_dual_encoder/test_processor_vision_text_dual_encoder.py", "tests/models/wav2vec2_with_lm/test_processor_wav2vec2_with_lm.py"], "config": [], "asset": []}}, {"organization": "ansible", "repo_name": "ansible", "base_commit": "b8e8fb48a84b65e805aecd263ebb7cd303e671ee", "iss_has_pr": 1, "iss_html_url": "https://github.com/ansible/ansible/issues/34896", "iss_label": "networking\nmodule\naffects_2.4\nsupport:community\naci\nfeature\ncisco", "title": "aci_epg module needs to support PreferredGroup", "body": "\r\n##### ISSUE TYPE\r\n - Feature Idea\r\n\r\n##### COMPONENT NAME\r\naci_epg\r\n\r\n##### ANSIBLE VERSION\r\n```\r\n[root@ansible-server ~]# ansible --version\r\nansible 2.4.0.0\r\n config file = None\r\n configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']\r\n ansible python module location = /usr/lib/python2.7/site-packages/ansible\r\n executable location = /usr/bin/ansible\r\n python version = 2.7.5 (default, Aug 4 2017, 00:39:18) [GCC 4.8.5 20150623 (Red Hat 4.8.5-16)]\r\n[root@ansible-server ~]#\r\n```\r\n\r\n##### CONFIGURATION\r\n\r\n[root@ansible-server ~]# ansible-config dump --only-changed\r\n[root@ansible-server ~]#\r\n\r\n##### OS / ENVIRONMENT\r\nAnsible server on CentOS 7.3 and ACI version 3.0 or 3.1\r\n\r\n##### SUMMARY\r\n\r\nSince ACI 2.3 it is possible to configure EPGs to be part of a Preferred Group. This is a new attribute of the fvAEPg object. EPGs that are part of the Preferred Group can communicate without contracts. 
This is very convenient for migration scenarios as well as customers that implement ACI for network automation but not for policy.\r\n\r\n##### STEPS TO REPRODUCE\r\n\r\n\r\n\r\n##### EXPECTED RESULTS\r\n\r\nThe module should have a new option to configure.\r\n\r\npreferred_group: yes, no\r\n\r\nThe object to configure is:\r\n\r\nfvAEPg.attributes.prefGrMemb and the option is \"include\" or \"exclude\".\r\n\r\n##### ACTUAL RESULTS\r\n\r\n```\r\n\r\n```\r\n", "pr_html_url": "https://github.com/ansible/ansible/pull/35265", "file_loc": {"base_commit": "b8e8fb48a84b65e805aecd263ebb7cd303e671ee", "files": [{"path": "lib/ansible/modules/network/aci/aci_epg.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [66, 86]}, "(None, 'main', 161)": {"add": [171, 185, 191, 230], "mod": [196]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["lib/ansible/modules/network/aci/aci_epg.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "All-Hands-AI", "repo_name": "OpenHands", "base_commit": "4405b109e3abcd197666430708de2881e7cde8da", "iss_has_pr": 1, "iss_html_url": "https://github.com/All-Hands-AI/OpenHands/issues/4280", "iss_label": "enhancement\ngood first issue\nfrontend\nlarge effort", "title": "Update the frontend to use i18n keys", "body": "**What problem or use case are you trying to solve?**\r\nThe new UI hardcodes english text throughout the app. In order to support i18n, we should extend our i18n provider and replaced the hardcoded values with the new keys\r\n\r\n**Describe the UX of the solution you'd like**\r\n\r\n**Do you have thoughts on the technical implementation?**\r\n\r\n**Describe alternatives you've considered**\r\n\r\n**Additional context**\r\n", "pr_html_url": "https://github.com/All-Hands-AI/OpenHands/pull/4464", "file_loc": {"base_commit": "4405b109e3abcd197666430708de2881e7cde8da", "files": [{"path": "frontend/src/components/form/custom-input.tsx", "status": "modified", "Loc": {"(None, None, None)": {"add": [0, 15], "mod": [21]}}}, {"path": "frontend/src/components/form/settings-form.tsx", "status": "modified", "Loc": {"(None, None, None)": {"add": [7, 13, 17, 37], "mod": [10, 12, 15, 164, 174, 193, 223, 237, 244, 258, 294, 337, 348, 352, 359, 372, 373, 376, 380, 390, 391, 393, 395]}}}, {"path": "frontend/src/components/modals/AccountSettingsModal.tsx", "status": "modified", "Loc": {"(None, None, None)": {"add": [2, 11, 25], "mod": [89, 95, 125, 129]}}}, {"path": "frontend/src/components/modals/ConnectToGitHubByTokenModal.tsx", "status": "modified", "Loc": {"(None, None, None)": {"add": [1, 9, 13], "mod": [32, 33, 38]}}}, {"path": "frontend/src/components/modals/LoadingProject.tsx", "status": "modified", "Loc": {"(None, None, None)": {"add": [0, 3, 30], "mod": [34]}}}, {"path": "frontend/src/components/modals/connect-to-github-modal.tsx", "status": "modified", "Loc": {"(None, None, None)": {"add": [1, 10, 18], "mod": [27, 34, 58, 64]}}}, {"path": "frontend/src/components/modals/security/Security.tsx", "status": "modified", "Loc": {"(None, None, None)": {"add": [1, 3, 19], "mod": [24]}}}, {"path": "frontend/src/components/modals/security/invariant/Invariant.tsx", "status": "modified", "Loc": {"(None, None, None)": {"mod": [126, 137, 146, 165, 167, 198, 200, 217, 219, 224, 267, 281, 284, 285, 292, 301, 307, 313]}}}, {"path": 
"frontend/src/components/project-menu/project-menu-details-placeholder.tsx", "status": "modified", "Loc": {"(None, None, None)": {"add": [0, 2, 12], "mod": [15]}}}, {"path": "frontend/src/components/project-menu/project-menu-details.tsx", "status": "modified", "Loc": {"(None, None, None)": {"add": [0, 2, 14], "mod": [35]}}}, {"path": "frontend/src/i18n/translation.json", "status": "modified", "Loc": {"(None, None, None)": {"add": [1519], "mod": [801]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": ""}, "loctype": {"code": ["frontend/src/components/modals/AccountSettingsModal.tsx", "frontend/src/components/modals/security/Security.tsx", "frontend/src/components/form/settings-form.tsx", "frontend/src/components/modals/security/invariant/Invariant.tsx", "frontend/src/components/modals/LoadingProject.tsx", "frontend/src/i18n/translation.json", "frontend/src/components/modals/ConnectToGitHubByTokenModal.tsx", "frontend/src/components/project-menu/project-menu-details.tsx", "frontend/src/components/form/custom-input.tsx", "frontend/src/components/modals/connect-to-github-modal.tsx", "frontend/src/components/project-menu/project-menu-details-placeholder.tsx"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "e8a15d544490b3fe80ef77dd995d12de84194d00", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/7435", "iss_label": "", "title": "[RFC?] Make cross_val_score output a dict/named tuple.", "body": "Two major things here -\n- Often I see that only a partial output of `_fit_and_score` is taken for use. It is wasteful to generate and discard arrays. It would rather be much better to generate only the stuff that is required.\n- Now that we have more options, like @jnothman says [here](https://github.com/scikit-learn/scikit-learn/pull/7325#issuecomment-246529168) and [here](https://github.com/scikit-learn/scikit-learn/pull/7388#issuecomment-246233650) should we modify the output of `cross_val_score` (and also `_fit_and_score` to be a dict or a named tuple similar to the structure of `cv_results_`? 
(I think named-tuple is a better choice at least for `_fit_and_score` as we stack the result of multiple `_fit_and_score` operations via `Parallel` mostly)\n\nIf we are changing the output of `cross_val_score`, this would be an ideal time to do it as we don't have to deprecate anything...\n\n@jnothman @amueller @vene @GaelVaroquaux @agramfort \n", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/7388", "file_loc": {"base_commit": "e8a15d544490b3fe80ef77dd995d12de84194d00", "files": [{"path": "doc/modules/classes.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [225]}}}, {"path": "doc/modules/cross_validation.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [174], "mod": [189]}}}, {"path": "doc/modules/grid_search.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [86, 163]}}}, {"path": "doc/modules/model_evaluation.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [212]}}}, {"path": "doc/whats_new.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [33]}}}, {"path": "sklearn/metrics/scorer.py", "status": "modified", "Loc": {"(None, 'get_scorer', 211)": {"add": [211, 217]}, "(None, 'check_scoring', 231)": {"mod": [256, 262, 275, 276, 277, 278, 280, 281, 282]}}}, {"path": "sklearn/metrics/tests/test_score_objects.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [10, 13, 23]}, "('EstimatorWithoutFit', None, 106)": {"mod": [107]}, "('EstimatorWithFit', None, 111)": {"mod": [112]}, "('EstimatorWithFitAndScore', None, 117)": {"mod": [118]}, "('EstimatorWithFitAndPredict', None, 126)": {"mod": [127]}, "(None, 'test_check_scoring', 148)": {"mod": [148, 149, 153, 157, 165, 167, 171, 174, 175, 176]}}}, {"path": "sklearn/model_selection/__init__.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [20, 52]}}}, {"path": "sklearn/model_selection/_search.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [11, 27, 36]}, "(None, 'fit_grid_point', 271)": {"add": [301], "mod": [298, 299, 325, 326, 327, 328, 329, 330]}, "('BaseSearchCV', 'score', 402)": {"add": [421], "mod": [426]}, "('BaseSearchCV', '_store', 615)": {"add": [617, 621]}, "('BaseSearchCV', None, 376)": {"add": [698], "mod": [687, 688, 689, 690, 692, 693, 694, 695]}, "('GridSearchCV', None, 721)": {"add": [912, 924], "mod": [750, 751, 752, 753, 754, 804, 805, 806, 807, 860, 896, 897, 902, 905, 908, 921]}, "('RandomizedSearchCV', None, 973)": {"add": [1151, 1163], "mod": [1015, 1016, 1017, 1018, 1019, 1069, 1070, 1071, 1072, 1132, 1135, 1136, 1141, 1144, 1147, 1160]}, "('BaseSearchCV', '_check_is_fitted', 428)": {"mod": [430, 431, 432, 433]}, "('BaseSearchCV', 'fit', 544)": {"mod": [578, 596, 597, 608, 611, 637, 638, 639, 640, 642, 643, 644, 645, 649, 650, 671, 672, 673, 676, 677, 678, 679, 681, 683, 684, 685]}, "('BaseSearchCV', 'grid_scores_', 698)": {"mod": [705]}}}, {"path": "sklearn/model_selection/_validation.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [8, 301], "mod": [6, 7, 27, 32, 33]}, "(None, 'cross_val_score', 36)": {"add": [124], "mod": [49, 129, 131, 133, 134, 135, 136, 137, 138, 139, 140, 141]}, "(None, '_fit_and_score', 144)": {"add": [192, 225, 233], "mod": [162, 163, 195, 196, 198, 199, 247, 248, 249, 260, 263, 266, 272]}, "(None, 'validation_curve', 906)": {"add": [1006]}, "(None, '_score', 283)": {"mod": [283, 284, 285, 286, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298]}, "(None, 'permutation_test_score', 528)": {"mod": [558, 559, 560]}}}, {"path": 
"sklearn/model_selection/tests/test_search.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [9, 31, 36, 56, 930, 1036], "mod": [30]}, "(None, 'test_unsupervised_grid_search', 635)": {"add": [644], "mod": [639, 640, 641, 642, 643, 648]}, "(None, 'check_cv_results_array_types', 697)": {"add": [698], "mod": [697, 706]}, "(None, 'test_random_search_cv_results', 792)": {"add": [812], "mod": [793, 794, 795, 796, 798, 799, 800, 803, 804, 805, 806, 807, 808, 809, 810, 811, 825, 829]}, "(None, 'test_no_refit', 370)": {"mod": [373, 374, 375, 376, 377, 379, 380, 381, 382, 383, 384, 385]}, "(None, 'test_pandas_input', 610)": {"mod": [625, 626]}, "(None, 'check_cv_results_grid_scores_consistency', 717)": {"mod": [718, 719, 720, 721, 722, 723, 724, 725, 726, 727, 728, 729, 730, 731, 732, 733]}, "(None, 'test_grid_search_cv_results', 736)": {"mod": [744, 745, 746, 747, 748, 749, 763, 774, 777, 778]}, "(None, 'test_grid_search_cv_splits_consistency', 1258)": {"mod": [1275, 1276, 1277, 1278, 1279, 1287, 1288]}}}, {"path": "sklearn/model_selection/tests/test_validation.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [18, 27, 44, 45, 58, 264]}, "(None, 'test_cross_val_score_score_func', 379)": {"add": [390], "mod": [389]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["sklearn/model_selection/_search.py", "sklearn/metrics/scorer.py", "sklearn/model_selection/_validation.py", "sklearn/model_selection/__init__.py"], "doc": ["doc/modules/classes.rst", "doc/modules/model_evaluation.rst", "doc/modules/cross_validation.rst", "doc/whats_new.rst", "doc/modules/grid_search.rst"], "test": ["sklearn/model_selection/tests/test_validation.py", "sklearn/metrics/tests/test_score_objects.py", "sklearn/model_selection/tests/test_search.py"], "config": [], "asset": []}}, {"organization": "huggingface", "repo_name": "transformers", "base_commit": "a73883ae9ec66cb35a8222f204a5f2fafc326d3f", "iss_has_pr": 1, "iss_html_url": "https://github.com/huggingface/transformers/issues/24100", "iss_label": "", "title": "[Trainer] Why not use `tqdm`'s `dynamic_ncols=True` option?", "body": "### Feature request\r\n\r\n# Problem\r\n\r\nTqdm progress bar is getting ugly when the width of the terminal is shrunk!\r\n\r\n![image](https://github.com/huggingface/transformers/assets/4879345/b60f232f-41a5-40de-b759-8bb2710d3b5f)\r\n\r\nIt progress bar makes the new line on every update! It is very ugly...\r\n\r\n# Solution\r\n\r\nSimply add the `dynamic_ncols=True` option to `tqdm`. 
It is located in `trainer_callback.ProgressCallback`.\r\n\r\n![image](https://github.com/huggingface/transformers/assets/4879345/6741eb00-7430-48db-acc8-4c3a0eb00217)\r\n\r\nYou can check that the progress bar is now dynamically resized when the terminal size is updated.\r\n\r\n### Motivation\r\n\r\nWhen I attach to a `tmux` session from terminals with different widths, the `tqdm` output gets ugly.\r\n\r\n### Your contribution\r\n\r\nPlease check the PR #24101", "pr_html_url": "https://github.com/huggingface/transformers/pull/24101", "file_loc": {"base_commit": "a73883ae9ec66cb35a8222f204a5f2fafc326d3f", "files": [{"path": "src/transformers/trainer_callback.py", "status": "modified", "Loc": {"('ProgressCallback', 'on_train_begin', 474)": {"mod": [476]}, "('ProgressCallback', 'on_prediction_step', 484)": {"mod": [487]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["src/transformers/trainer_callback.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "nvbn", "repo_name": "thefuck", "base_commit": "f0f49c1865162fd1eef9199ab895811846516ada", "iss_has_pr": 1, "iss_html_url": "https://github.com/nvbn/thefuck/issues/422", "iss_label": "obsolete", "title": "Shell alias clobbers some history lines", "body": "I was trying this out, and found that large swathes of my history were missing after running \"fuck\" a single time. This should _not_ modify history except to insert a command it executes.\n", "pr_html_url": "https://github.com/nvbn/thefuck/pull/432", "file_loc": {"base_commit": "f0f49c1865162fd1eef9199ab895811846516ada", "files": [{"path": "README.md", "status": "modified", "Loc": {"(None, None, None)": {"mod": [285, 309]}}}, {"path": "thefuck/conf.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [19], "mod": [27, 29]}, "('Settings', '_val_from_env', 120)": {"mod": [129]}}}, {"path": "thefuck/types.py", "status": "modified", "Loc": {"('CorrectedCommand', 'run', 273)": {"mod": [281]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["thefuck/types.py", "thefuck/conf.py"], "doc": ["README.md"], "test": [], "config": [], "asset": []}}, {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "bd8c293a97f7f08989cff1db0d9c32f5a2208b77", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/2518", "iss_label": "", "title": "AttributeError: 'FeedExporter' object has no attribute 'slot'", "body": "I have this simple spider, when I call `scrapy crawl dataspider` it works fine and prints the item in the output : \r\n\r\n import json\r\n from scrapy.spiders import Spider\r\n\r\n class dataspider(Spider):\r\n name='dataspider'\r\n start_urls=('https://www.google.com/finance/match?matchtype=matchall&ei=UVlPWNmDEYm_U7SqgvAH&q=AAPL',)\r\n def parse(self, response):\r\n j=json.loads( response.body.decode('utf-8') )\r\n yield j['matches'][1]\r\n\r\nOutputs :\r\n\r\n> {'t': 'AAPL', 'n': 'Apple Inc.', 'e': 'NASDAQ', 'id': '22144'}\r\n\r\nHowever, as soon as I try to save the item in a file using `scrapy crawl dataspider -o out.json` I get this error : \r\n\r\n> AttributeError: 'FeedExporter' object has no attribute 'slot'\r\n\r\nFull Traceback is : \r\n\r\n```\r\n$ scrapy crawl dataspider -o 
./test.json\r\n2017-01-30 14:32:06 [scrapy.utils.log] INFO: Scrapy 1.3.0 started (bot: googlefinance)\r\n2017-01-30 14:32:06 [scrapy.utils.log] INFO: Overridden settings: {'BOT_NAME': 'googlefinance', 'CONCURRENT_REQUESTS': 100, 'CONCURRENT_REQUESTS_PER_DOMAIN': 100, 'DNS_TIMEOUT': 30, 'DOWNLOAD_TIMEOUT': 30, 'FEED_FORMAT': 'json', 'FEED_URI': './test.json', 'NEWSPIDER_MODULE': 'googlefinance.spiders', 'RETRY_HTTP_CODES': [500, 502, 503, 504, 400, 403, 404, 408], 'RETRY_TIMES': 30, 'SPIDER_MODULES': ['googlefinance.spiders'], 'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; FSL 7.0.6.01001)'}\r\n2017-01-30 14:32:06 [scrapy.middleware] INFO: Enabled extensions:\r\n['scrapy.extensions.corestats.CoreStats',\r\n 'scrapy.extensions.telnet.TelnetConsole',\r\n 'scrapy.extensions.feedexport.FeedExporter',\r\n 'scrapy.extensions.logstats.LogStats']\r\n2017-01-30 14:32:06 [scrapy.middleware] INFO: Enabled downloader middlewares:\r\n['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',\r\n 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',\r\n 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',\r\n 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',\r\n 'scrapy.downloadermiddlewares.retry.RetryMiddleware',\r\n 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',\r\n 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',\r\n 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',\r\n 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',\r\n 'scrapy.downloadermiddlewares.stats.DownloaderStats']\r\n2017-01-30 14:32:06 [scrapy.middleware] INFO: Enabled spider middlewares:\r\n['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',\r\n 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',\r\n 'scrapy.spidermiddlewares.referer.RefererMiddleware',\r\n 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',\r\n 'scrapy.spidermiddlewares.depth.DepthMiddleware']\r\n2017-01-30 14:32:06 [scrapy.middleware] INFO: Enabled item pipelines:\r\n[]\r\n2017-01-30 14:32:06 [scrapy.core.engine] INFO: Spider opened\r\n2017-01-30 14:32:06 [scrapy.utils.signal] ERROR: Error caught on signal handler: >\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.6/site-packages/twisted/internet/defer.py\", line 150, in maybeDeferred\r\n result = f(*args, **kw)\r\n File \"/usr/lib/python3.6/site-packages/pydispatch/robustapply.py\", line 55, in robustApply\r\n return receiver(*arguments, **named)\r\n File \"/usr/lib/python3.6/site-packages/scrapy/extensions/feedexport.py\", line 187, in open_spider\r\n uri = self.urifmt % self._get_uri_params(spider)\r\n File \"/usr/lib/python3.6/site-packages/scrapy/extensions/feedexport.py\", line 262, in _get_uri_params\r\n params[k] = getattr(spider, k)\r\n File \"/usr/lib/python3.6/site-packages/scrapy/spiders/__init__.py\", line 36, in logger\r\n logger = logging.getLogger(self.name)\r\n File \"/usr/lib/python3.6/logging/__init__.py\", line 1813, in getLogger\r\n return Logger.manager.getLogger(name)\r\n File \"/usr/lib/python3.6/logging/__init__.py\", line 1167, in getLogger\r\n raise TypeError('A logger name must be a string')\r\nTypeError: A logger name must be a string\r\n2017-01-30 14:32:06 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)\r\n2017-01-30 14:32:06 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023\r\n2017-01-30 14:32:07 [scrapy.core.engine] DEBUG: Crawled (200) (referer: 
None)\r\n2017-01-30 14:32:07 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.google.com/finance/match?matchtype=matchall&ei=UVlPWNmDEYm_U7SqgvAH&q=AAPL>\r\n{'t': 'AAPL', 'n': 'Apple Inc.', 'e': 'NASDAQ', 'id': '22144'}\r\n2017-01-30 14:32:07 [scrapy.utils.signal] ERROR: Error caught on signal handler: >\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.6/site-packages/twisted/internet/defer.py\", line 150, in maybeDeferred\r\n result = f(*args, **kw)\r\n File \"/usr/lib/python3.6/site-packages/pydispatch/robustapply.py\", line 55, in robustApply\r\n return receiver(*arguments, **named)\r\n File \"/usr/lib/python3.6/site-packages/scrapy/extensions/feedexport.py\", line 217, in item_scraped\r\n slot = self.slot\r\nAttributeError: 'FeedExporter' object has no attribute 'slot'\r\n2017-01-30 14:32:07 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.google.com/finance/match?matchtype=matchall&ei=UVlPWNmDEYm_U7SqgvAH&q=AAPL>\r\n{'t': 'AAPL', 'n': 'APPLE INC CEDEAR(REPR 1/10 SHR)', 'e': 'BCBA', 'id': '640373807586235'}\r\n2017-01-30 14:32:07 [scrapy.utils.signal] ERROR: Error caught on signal handler: >\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.6/site-packages/twisted/internet/defer.py\", line 150, in maybeDeferred\r\n result = f(*args, **kw)\r\n File \"/usr/lib/python3.6/site-packages/pydispatch/robustapply.py\", line 55, in robustApply\r\n return receiver(*arguments, **named)\r\n File \"/usr/lib/python3.6/site-packages/scrapy/extensions/feedexport.py\", line 217, in item_scraped\r\n slot = self.slot\r\nAttributeError: 'FeedExporter' object has no attribute 'slot'\r\n2017-01-30 14:32:07 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.google.com/finance/match?matchtype=matchall&ei=UVlPWNmDEYm_U7SqgvAH&q=AAPL>\r\n{'t': 'AAPL', 'n': 'Apple', 'e': 'SWX', 'id': '268194557752272'}\r\n2017-01-30 14:32:07 [scrapy.utils.signal] ERROR: Error caught on signal handler: >\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.6/site-packages/twisted/internet/defer.py\", line 150, in maybeDeferred\r\n result = f(*args, **kw)\r\n File \"/usr/lib/python3.6/site-packages/pydispatch/robustapply.py\", line 55, in robustApply\r\n return receiver(*arguments, **named)\r\n File \"/usr/lib/python3.6/site-packages/scrapy/extensions/feedexport.py\", line 217, in item_scraped\r\n slot = self.slot\r\nAttributeError: 'FeedExporter' object has no attribute 'slot'\r\n2017-01-30 14:32:07 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.google.com/finance/match?matchtype=matchall&ei=UVlPWNmDEYm_U7SqgvAH&q=AAPL>\r\n{'t': 'AVSPY', 'n': 'NASDAQ OMX Alpha AAPL vs. 
SPY Index', 'e': 'INDEXNASDAQ', 'id': '3139928'}\r\n2017-01-30 14:32:07 [scrapy.utils.signal] ERROR: Error caught on signal handler: >\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.6/site-packages/twisted/internet/defer.py\", line 150, in maybeDeferred\r\n result = f(*args, **kw)\r\n File \"/usr/lib/python3.6/site-packages/pydispatch/robustapply.py\", line 55, in robustApply\r\n return receiver(*arguments, **named)\r\n File \"/usr/lib/python3.6/site-packages/scrapy/extensions/feedexport.py\", line 217, in item_scraped\r\n slot = self.slot\r\nAttributeError: 'FeedExporter' object has no attribute 'slot'\r\n2017-01-30 14:32:07 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.google.com/finance/match?matchtype=matchall&ei=UVlPWNmDEYm_U7SqgvAH&q=AAPL>\r\n{'t': 'AAPL34', 'n': 'APPLE DRN', 'e': 'BVMF', 'id': '486420404817650'}\r\n2017-01-30 14:32:07 [scrapy.utils.signal] ERROR: Error caught on signal handler: >\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.6/site-packages/twisted/internet/defer.py\", line 150, in maybeDeferred\r\n result = f(*args, **kw)\r\n File \"/usr/lib/python3.6/site-packages/pydispatch/robustapply.py\", line 55, in robustApply\r\n return receiver(*arguments, **named)\r\n File \"/usr/lib/python3.6/site-packages/scrapy/extensions/feedexport.py\", line 217, in item_scraped\r\n slot = self.slot\r\nAttributeError: 'FeedExporter' object has no attribute 'slot'\r\n2017-01-30 14:32:07 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.google.com/finance/match?matchtype=matchall&ei=UVlPWNmDEYm_U7SqgvAH&q=AAPL>\r\n{'t': 'AAPL', 'n': 'APPLE COMPUTER INC', 'e': 'BMV', 'id': '119565461895124'}\r\n2017-01-30 14:32:07 [scrapy.utils.signal] ERROR: Error caught on signal handler: >\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.6/site-packages/twisted/internet/defer.py\", line 150, in maybeDeferred\r\n result = f(*args, **kw)\r\n File \"/usr/lib/python3.6/site-packages/pydispatch/robustapply.py\", line 55, in robustApply\r\n return receiver(*arguments, **named)\r\n File \"/usr/lib/python3.6/site-packages/scrapy/extensions/feedexport.py\", line 217, in item_scraped\r\n slot = self.slot\r\nAttributeError: 'FeedExporter' object has no attribute 'slot'\r\n2017-01-30 14:32:07 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.google.com/finance/match?matchtype=matchall&ei=UVlPWNmDEYm_U7SqgvAH&q=AAPL>\r\n{'t': 'AAPL-EUR', 'n': 'Apple', 'e': 'SWX', 'id': '706336206708362'}\r\n2017-01-30 14:32:07 [scrapy.utils.signal] ERROR: Error caught on signal handler: >\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.6/site-packages/twisted/internet/defer.py\", line 150, in maybeDeferred\r\n result = f(*args, **kw)\r\n File \"/usr/lib/python3.6/site-packages/pydispatch/robustapply.py\", line 55, in robustApply\r\n return receiver(*arguments, **named)\r\n File \"/usr/lib/python3.6/site-packages/scrapy/extensions/feedexport.py\", line 217, in item_scraped\r\n slot = self.slot\r\nAttributeError: 'FeedExporter' object has no attribute 'slot'\r\n2017-01-30 14:32:07 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.google.com/finance/match?matchtype=matchall&ei=UVlPWNmDEYm_U7SqgvAH&q=AAPL>\r\n{'t': 'AAPL-USD', 'n': 'Apple', 'e': 'SWX', 'id': '1009743014824088'}\r\n2017-01-30 14:32:07 [scrapy.utils.signal] ERROR: Error caught on signal handler: >\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.6/site-packages/twisted/internet/defer.py\", line 150, in maybeDeferred\r\n result = f(*args, 
**kw)\r\n File \"/usr/lib/python3.6/site-packages/pydispatch/robustapply.py\", line 55, in robustApply\r\n return receiver(*arguments, **named)\r\n File \"/usr/lib/python3.6/site-packages/scrapy/extensions/feedexport.py\", line 217, in item_scraped\r\n slot = self.slot\r\nAttributeError: 'FeedExporter' object has no attribute 'slot'\r\n2017-01-30 14:32:07 [scrapy.core.engine] INFO: Closing spider (finished)\r\n2017-01-30 14:32:07 [scrapy.utils.signal] ERROR: Error caught on signal handler: >\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.6/site-packages/twisted/internet/defer.py\", line 150, in maybeDeferred\r\n result = f(*args, **kw)\r\n File \"/usr/lib/python3.6/site-packages/pydispatch/robustapply.py\", line 55, in robustApply\r\n return receiver(*arguments, **named)\r\n File \"/usr/lib/python3.6/site-packages/scrapy/extensions/feedexport.py\", line 198, in close_spider\r\n slot = self.slot\r\nAttributeError: 'FeedExporter' object has no attribute 'slot'\r\n2017-01-30 14:32:07 [scrapy.statscollectors] INFO: Dumping Scrapy stats:\r\n{'downloader/request_bytes': 309,\r\n 'downloader/request_count': 1,\r\n 'downloader/request_method_count/GET': 1,\r\n 'downloader/response_bytes': 761,\r\n 'downloader/response_count': 1,\r\n 'downloader/response_status_count/200': 1,\r\n 'finish_reason': 'finished',\r\n 'finish_time': datetime.datetime(2017, 1, 30, 13, 32, 7, 192220),\r\n 'item_scraped_count': 8,\r\n 'log_count/DEBUG': 10,\r\n 'log_count/ERROR': 10,\r\n 'log_count/INFO': 7,\r\n 'response_received_count': 1,\r\n 'scheduler/dequeued': 1,\r\n 'scheduler/dequeued/memory': 1,\r\n 'scheduler/enqueued': 1,\r\n 'scheduler/enqueued/memory': 1,\r\n 'start_time': datetime.datetime(2017, 1, 30, 13, 32, 6, 846350)}\r\n2017-01-30 14:32:07 [scrapy.core.engine] INFO: Spider closed (finished)))\r\n\r\n```\r\n\r\nAny idea what the problem is ?", "pr_html_url": "https://github.com/scrapy/scrapy/pull/2433", "file_loc": {"base_commit": "bd8c293a97f7f08989cff1db0d9c32f5a2208b77", "files": [{"path": "scrapy/spiderloader.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [2]}, "('SpiderLoader', '_load_all_spiders', 26)": {"mod": [28, 29]}}}, {"path": "tests/test_spiderloader/__init__.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [3]}, "('SpiderLoaderTest', 'test_crawler_runner_loading', 82)": {"add": [91]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["tests/test_spiderloader/__init__.py", "scrapy/spiderloader.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "bb385394b87e382a34db829bc7ed60d347af73c9", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/11194", "iss_label": "Build / CI\nBlocker", "title": "NumPy dev causes test errors due to use of np.matrix", "body": "We are getting many warnings like `PendingDeprecationWarning('the matrix subclass is not the recommended way to represent matrices or deal with linear algebra (see https://docs.scipy.org/doc/numpy/user/numpy-for-matlab-users.html). 
Please adjust your code to use regular ndarray.` using numpy master (see logs at https://travis-ci.org/scikit-learn/scikit-learn/builds/387352026)\r\n\r\nApart from a very long log, this causes test failures where we have used `assert_no_warnings` (which we could now be importing from numpy instead of having our own implementation).\r\n\r\nIt might be a good idea to remove all uses of np.matrix that raise warnings. On the other hand, we might also consider that `assert_no_warnings` shouldn't be bothered by `PendingDeprecationWarning`s.", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/11251", "file_loc": {"base_commit": "bb385394b87e382a34db829bc7ed60d347af73c9", "files": [{"path": "sklearn/ensemble/tests/test_iforest.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [8], "mod": [18]}, "(None, 'test_iforest_error', 91)": {"mod": [108, 109]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": [], "doc": [], "test": ["sklearn/ensemble/tests/test_iforest.py"], "config": [], "asset": []}}, {"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "96da9525043f78aca4544d01761b13b2140e9ae6", "iss_has_pr": 1, "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/9825", "iss_label": "good first issue\nsite-bug\npatch-available", "title": "[cbc.ca] \"unable to extract OpenGraph description\"", "body": "### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE\n\n- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\\* field\n\n### Checklist\n\n- [X] I'm reporting that yt-dlp is broken on a **supported** site\n- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))\n- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details\n- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)\n- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. 
DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required\n\n### Region\n\nUnited States\n\n### Provide a description that is worded well enough to be understood\n\nVideo from CBC's site will not download, throws an error saying \"unable to extract OpenGraph description\", then says it's finished downloading the playlist (but downloaded no video files).\n\n### Provide verbose output that clearly demonstrates the problem\n\n- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU `)\n- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead\n- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below\n\n### Complete Verbose Output\n\n```shell\n[debug] Command-line config: ['-vU', 'https://www.cbc.ca/player/play/video/1.3594815']\r\n[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8\r\n[debug] yt-dlp version nightly@2024.04.28.232723 from yt-dlp/yt-dlp-nightly-builds [ac817bc83] (pip)\r\n[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1t 7 Feb 2023)\r\n[debug] exe versions: ffmpeg 2023-03-02-git-814178f926-full_build-www.gyan.dev (setts), ffprobe 2023-03-02-git-814178f926-full_build-www.gyan.dev, phantomjs 2.1.1\r\n[debug] Optional libraries: Cryptodome-3.17, brotli-1.0.9, certifi-2022.06.15, mutagen-1.46.0, requests-2.31.0, sqlite3-3.40.1, urllib3-2.2.1, websockets-12.0\r\n[debug] Proxy map: {}\r\n[debug] Request Handlers: urllib, requests, websockets\r\n[debug] Loaded 1810 extractors\r\n[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest\r\nLatest version: nightly@2024.04.28.232723 from yt-dlp/yt-dlp-nightly-builds\r\nyt-dlp is up to date (nightly@2024.04.28.232723 from yt-dlp/yt-dlp-nightly-builds)\r\n[generic] Extracting URL: https://www.cbc.ca/player/play/video/1.3594815\r\n[generic] 1: Downloading webpage\r\nWARNING: [generic] Falling back on generic information extractor\r\n[generic] 1: Extracting information\r\n[debug] Looking for embeds\r\n[debug] Identified a twitter:player iframe\r\n[cbc.ca] Extracting URL: https://www.cbc.ca/i/phoenix/player/syndicate/?autoPlay=true&sourceId=1.3594815\r\n[cbc.ca] syndicate: Downloading webpage\r\nWARNING: [cbc.ca] unable to extract OpenGraph description; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. 
Confirm you are on the latest version using yt-dlp -U\r\n[download] Downloading playlist: CBC Player\r\n[cbc.ca] Playlist CBC Player: Downloading 0 items\r\n[download] Finished downloading playlist: CBC Player\n```\n", "pr_html_url": "https://github.com/yt-dlp/yt-dlp/pull/9866", "file_loc": {"base_commit": "96da9525043f78aca4544d01761b13b2140e9ae6", "files": [{"path": "yt_dlp/extractor/cbc.py", "status": "modified", "Loc": {"('CBCPlayerIE', None, 152)": {"add": [279], "mod": [154]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["yt_dlp/extractor/cbc.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "zylon-ai", "repo_name": "private-gpt", "base_commit": "77447e50c0b8143edcf34896af80dd58925582f9", "iss_has_pr": 1, "iss_html_url": "https://github.com/zylon-ai/private-gpt/issues/2", "iss_label": "", "title": "TypeError: generate() got an unexpected keyword argument 'new_text_callback'", "body": "/privateGPT/gpt4all_j.py\", line 152, in _call\r\n text = self.client.generate(\r\nTypeError: generate() got an unexpected keyword argument 'new_text_callback'", "pr_html_url": "https://github.com/zylon-ai/private-gpt/pull/3", "file_loc": {"base_commit": "77447e50c0b8143edcf34896af80dd58925582f9", "files": [{"path": "requirements.txt", "status": "modified", "Loc": {"(None, None, None)": {"add": [1]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}}, {"organization": "sherlock-project", "repo_name": "sherlock", "base_commit": "9db8c213ffdad873380c9de41c142923ba0dc260", "iss_has_pr": 1, "iss_html_url": "https://github.com/sherlock-project/sherlock/issues/1366", "iss_label": "enhancement", "title": "Add xlsx Export", "body": "\r\n\r\n## Checklist\r\n\r\n- [x] I'm reporting a feature request\r\n- [x] I've checked for similar feature requests including closed ones\r\n\r\n## Description\r\n\r\n\r\nWRITE DESCRIPTION HERE\r\n\r\nAdd an option to export the result on xlsx file type.\r\n", "pr_html_url": "https://github.com/sherlock-project/sherlock/pull/1367", "file_loc": {"base_commit": "9db8c213ffdad873380c9de41c142923ba0dc260", "files": [{"path": ".gitignore", "status": "modified", "Loc": {"(None, None, None)": {"add": [24]}}}, {"path": "requirements.txt", "status": "modified", "Loc": {"(None, None, None)": {"add": [7]}}}, {"path": "sherlock/sherlock.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [10]}, "(None, 'main', 477)": {"add": [508, 718]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["sherlock/sherlock.py"], "doc": [], "test": [], "config": [".gitignore", "requirements.txt"], "asset": []}}, {"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "8b209d4e17ad7dfc1ad7a80505eac42f71228734", "iss_has_pr": 1, "iss_html_url": "https://github.com/geekan/MetaGPT/issues/1539", "iss_label": "", "title": "ollama llm vision api call error: async for raw_chunk in stream_resp: TypeError: 'async for' requires an object with __aiter__ method, got bytes", "body": "**Bug description**\r\nwhen running any 
vision llm call (like example/llm_vision.py)\r\nthere seems to be an issue with async def _achat_completion_stream(self, messages: list[dict], timeout: int = USE_CONFIG_TIMEOUT) -> str: method\r\n\r\n**Bug solved method**\r\nNo solve yet\r\n\r\n**Environment information**\r\nsystem metal, llm ollama, Python 3.10.13\r\n\r\n- LLM type and model name: ollama, llava latest\r\n- System version: \r\n- Python version: Python 3.10.13\r\n- MetaGPT version or branch: main\r\n\r\n\r\n- packages version: /\r\n- installation method: from source\r\n\r\n**Screenshots or logs**\r\n.....\r\n do = self.iter(retry_state=retry_state)\r\n return fut.result()\r\n return self.__get_result()\r\n raise self._exception\r\n result = await fn(*args, **kwargs)\r\n return await self._achat_completion_stream(messages, timeout=self.get_timeout(timeout))\r\n async for raw_chunk in stream_resp: (HERE)\r\nTypeError: 'async for' requires an object with __aiter__ method, got bytes\r\n", "pr_html_url": "https://github.com/geekan/MetaGPT/pull/1544", "file_loc": {"base_commit": "8b209d4e17ad7dfc1ad7a80505eac42f71228734", "files": [{"path": "examples/llm_vision.py", "status": "modified", "Loc": {"(None, 'main', 12)": {"mod": [18, 19]}}}, {"path": "metagpt/configs/llm_config.py", "status": "modified", "Loc": {"('LLMType', None, 18)": {"mod": [29]}, "('LLMConfig', 'check_llm_key', 101)": {"mod": [107]}}}, {"path": "metagpt/provider/general_api_base.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [15]}, "('OpenAIResponse', None, 123)": {"mod": [124]}, "('APIRequestor', None, 227)": {"mod": [323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364]}, "('APIRequestor', 'request_headers', 423)": {"mod": [442]}}}, {"path": "metagpt/provider/general_api_requestor.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [6, 12]}, "(None, 'parse_stream_helper', 15)": {"mod": [15, 18, 23, 24]}, "('GeneralAPIRequestor', None, 38)": {"mod": [40, 53, 54, 55]}, "('GeneralAPIRequestor', '_interpret_response_line', 53)": {"mod": [57]}, "('GeneralAPIRequestor', '_interpret_response', 59)": {"mod": [61, 62, 66, 67, 68, 72]}, "('GeneralAPIRequestor', '_interpret_async_response', 80)": {"mod": [82, 87, 89, 90, 91, 94, 98, 101]}}}, {"path": "metagpt/provider/ollama_api.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [5, 15], "mod": [11]}, "('OllamaLLM', '__init__', 22)": {"add": [29], "mod": [23, 26]}, "('OllamaLLM', '_achat_completion_stream', 76)": {"add": [90, 91, 92, 109], "mod": [77, 78, 79, 80, 81, 82, 83, 84, 86, 87, 88, 89, 95, 96, 99]}, "('OllamaLLM', None, 17)": {"mod": [36, 37, 38, 40, 41, 42, 43, 44, 49, 50, 51]}, "('OllamaLLM', '_achat_completion', 53)": {"mod": [54, 55, 56, 57, 58, 59, 60, 63, 64, 65, 68, 69, 70, 71]}}}, {"path": "tests/metagpt/provider/test_ollama_api.py", "status": "modified", "Loc": {"('Iterator', 'mock_ollama_arequest', 26)": {"add": [30], "mod": [32, 33, 34, 36]}, "(None, None, None)": {"mod": [6, 10]}, "(None, 'mock_ollama_arequest', 23)": {"mod": [26, 40]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["examples/llm_vision.py", "metagpt/configs/llm_config.py", "metagpt/provider/general_api_requestor.py", "metagpt/provider/ollama_api.py", 
"metagpt/provider/general_api_base.py"], "doc": [], "test": ["tests/metagpt/provider/test_ollama_api.py"], "config": [], "asset": []}}, {"organization": "3b1b", "repo_name": "manim", "base_commit": "dbdd7996960ba46ed044a773290b02f17478c760", "iss_has_pr": 1, "iss_html_url": "https://github.com/3b1b/manim/issues/1065", "iss_label": "", "title": "Example_scenes.py run problem question", "body": "I was able to get the install success thanks to help. I ran the example_scenes.py file and have the results below. I am now also going through https://talkingphysics.wordpress.com/2019/01/08/getting-started-animating-with-manim-and-python-3-7/ and have similar errors when running the first run python -m manim pymanim_tutorial_P37.py Shapes -pl. So I am trying to crawl before walking and would like to get through example_scenes and first tutorial .py run with success so any help is appreciated.\r\n\r\n\r\nC:\\Users\\Admin\\Desktop\\manim-master>python ./manim.py example_scenes.py SquareTo\r\nCircle -pl\r\nMedia will be written to ./media\\. You can change this behavior with the --media\r\n_dir flag.\r\n[concat @ 0000000000375a40] Impossible to open 'CC:/Users/Admin/Desktop/manim-ma\r\nster/media/videos/example_scenes/480p15/partial_movie_files/SquareToCircle/00000\r\n.mp4'\r\nC:\\Users\\Admin\\Desktop\\manim-master\\media\\videos\\example_scenes\\480p15\\partial_m\r\novie_files\\SquareToCircle\\partial_movie_file_list.txt: Protocol not found\r\nDid you mean file:C:\\Users\\Admin\\Desktop\\manim-master\\media\\videos\\example_scene\r\ns\\480p15\\partial_movie_files\\SquareToCircle\\partial_movie_file_list.txt?\r\n\r\nFile ready at C:\\Users\\Admin\\Desktop\\manim-master\\media\\videos\\example_scenes\\48\r\n0p15\\SquareToCircle.mp4\r\n\r\nPlayed 3 animations\r\n\r\n\r\n\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\Admin\\Desktop\\manim-master\\manimlib\\extract_scene.py\", line 156\r\n, in main\r\n open_file_if_needed(scene.file_writer, **config)\r\n File \"C:\\Users\\Admin\\Desktop\\manim-master\\manimlib\\extract_scene.py\", line 35,\r\n in open_file_if_needed\r\n os.startfile(file_path)\r\nFileNotFoundError: [WinError 2] The system cannot find the file specified: 'C:\\\\\r\nUsers\\\\Admin\\\\Desktop\\\\manim-master\\\\media\\\\videos\\\\example_scenes\\\\480p15\\\\Squa\r\nreToCircle.mp4'\r\n", "pr_html_url": "https://github.com/3b1b/manim/pull/1057", "file_loc": {"base_commit": "dbdd7996960ba46ed044a773290b02f17478c760", "files": [{"path": "manimlib/scene/scene_file_writer.py", "status": "modified", "Loc": {"('SceneFileWriter', 'combine_movie_files', 253)": {"mod": [289]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["manimlib/scene/scene_file_writer.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "c84aeac6b5695e7e1ac629d17fc51eb68ab91bae", "iss_has_pr": 1, "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/502", "iss_label": "external issue", "title": "[youtube] YouTube serving erroneous DASH Manifest VP9 formats", "body": "\r\n\r\n\r\n## Checklist\r\n\r\n\r\n\r\n- [x] I'm reporting a broken site support\r\n- [x] I've verified that I'm running yt-dlp version **2021.07.07**\r\n- [x] I've checked that all provided URLs are alive and playable in a browser (but with condition, see below)\r\n- [x] I've checked that all URLs and arguments with 
special characters are properly quoted or escaped\r\n- [x] I've searched the bugtracker for similar issues including closed ones\r\n\r\n\r\n## Verbose log\r\n\r\n```\r\n$ yt-dlp V_h3Z40AAtw -F\r\n[youtube] V_h3Z40AAtw: Downloading webpage\r\n[youtube] V_h3Z40AAtw: Downloading MPD manifest\r\n[info] Available formats for V_h3Z40AAtw:\r\nID EXT RESOLUTION FPS | FILESIZE TBR PROTO | VCODEC VBR ACODEC ABR ASR NOTE\r\n--- ---- ---------- --- - --------- ----- ----- - ----------- ----- --------- ---- ------- ---------------------------------------\r\n139 m4a audio only | 1.52MiB 50k dash | mp4a.40.5 50k 22050Hz DASH audio, m4a_dash, 22050Hz\r\n140 m4a audio only | 4.02MiB 129k https | mp4a.40.2 129k 44100Hz audio_quality_medium, m4a_dash, 44100Hz\r\n160 mp4 256x144 30 | 108k dash | avc1.4d400b 108k DASH video, mp4_dash\r\n278 webm 256x144 30 | 95k dash | vp9 95k DASH video, webm_dash\r\n133 mp4 426x240 30 | 242k dash | avc1.4d400c 242k DASH video, mp4_dash\r\n242 webm 426x240 30 | 220k dash | vp9 220k DASH video, webm_dash\r\n134 mp4 640x360 30 | 19.25MiB 620k https | avc1.4d401e 620k 360p, mp4_dash\r\n18 mp4 640x360 30 | 22.68MiB 730k https | avc1.42001E 730k mp4a.40.2 0k 44100Hz 360p, 44100Hz\r\n243 webm 640x360 30 | 405k dash | vp9 405k DASH video, webm_dash\r\n135 mp4 854x480 30 | 1155k dash | avc1.4d400c 1155k DASH video, mp4_dash\r\n244 webm 854x480 30 | 752k dash | vp9 752k DASH video, webm_dash\r\n136 mp4 1280x720 30 | 69.87MiB 2251k https | avc1.4d401f 2251k 720p, mp4_dash\r\n22 mp4 1280x720 30 | 2380k https | avc1.64001F 2380k mp4a.40.2 0k 44100Hz 720p, 44100Hz\r\n247 webm 1280x720 30 | 1505k dash | vp9 1505k DASH video, webm_dash\r\n248 webm 1920x1080 30 | 2646k dash | vp9 2646k DASH video, webm_dash\r\n\r\n$ yt-dlp -v V_h3Z40AAtw\r\n[debug] Command-line config: ['-v', 'V_h3Z40AAtw']\r\n[debug] Encodings: locale UTF-8, fs utf-8, out utf-8, pref UTF-8\r\n[debug] yt-dlp version 2021.07.13.1626134551 (zip)\r\n[debug] Python version 3.9.6 (CPython 64bit) - Linux-5.8.0-41-generic-x86_64-with-glibc2.32\r\n[debug] exe versions: ffmpeg 4.3.1, ffprobe 4.3.1, rtmpdump 2.4\r\n[debug] Proxy map: {}\r\n[debug] [youtube] Extracting URL: V_h3Z40AAtw\r\n[youtube] V_h3Z40AAtw: Downloading webpage\r\n[youtube] [debug] Fetching webpage from https://www.youtube.com/watch?v=V_h3Z40AAtw&bpctr=9999999999&has_verified=1\r\n[youtube] V_h3Z40AAtw: Downloading MPD manifest\r\n[youtube] [debug] Fetching webpage from https://manifest.googlevideo.com/api/manifest/dash/expire/1626251414/ei/NkzuYLz-Ap-0s8IPlf2YyAo/ip/2001%3A19f0%3A7001%3A13a1%3A5400%3A3ff%3Afe11%3A205f/id/57f877678d0002dc/source/youtube/requiressl/yes/playback_host/r2---sn-oguelne7.googlevideo.com/mh/t1/mm/31%2C29/mn/sn-oguelne7%2Csn-oguesnzz/ms/au%2Crdu/mv/m/mvi/2/pl/55/tx/24027688/txs/24027687%2C24027688%2C24027689%2C24027690/hfr/all/as/fmp4_audio_clear%2Cwebm_audio_clear%2Cwebm2_audio_clear%2Cfmp4_sd_hd_clear%2Cwebm2_sd_hd_clear/initcwndbps/1267500/vprv/1/mt/1626229496/fvip/2/keepalive/yes/fexp/24001373%2C24007246/itag/0/sparams/expire%2Cei%2Cip%2Cid%2Csource%2Crequiressl%2Ctx%2Ctxs%2Chfr%2Cas%2Cvprv%2Citag/sig/AOq0QJ8wRAIgel_8rJx7O1ChqaQTDiBI5cysHmZ_4uCmgCWN_kPxy8cCIDfgAFVDYl5WO7a1gLifSDw6vBfjELblxSgkOodOm1am/lsparams/playback_host%2Cmh%2Cmm%2Cmn%2Cms%2Cmv%2Cmvi%2Cpl%2Cinitcwndbps/lsig/AG3C_xAwRAIgJVqT1xhW2KIhXVIj6cJRomDQ7-UOvq8yyC_J5r7ksfMCIBG0VgIJSIlNe49rl6ty6WA_DuH2AhKJvLOpq8fUBojv\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, vcodec:vp9.2(10), acodec, filesize, fs_approx, tbr, vbr, abr, asr, proto, vext, aext, 
hasaud, source, id\r\n[debug] Default format spec: bestvideo*+bestaudio/best\r\n[info] V_h3Z40AAtw: Downloading 1 format(s): 248+140\r\n[debug] locking youtube_V_h3Z40AAtw.lock\r\n[debug] Invoking downloader on \"https://manifest.googlevideo.com/api/manifest/dash/expire/1626251414/ei/NkzuYLz-Ap-0s8IPlf2YyAo/ip/2001%3A19f0%3A7001%3A13a1%3A5400%3A3ff%3Afe11%3A205f/id/57f877678d0002dc/source/youtube/requiressl/yes/playback_host/r2---sn-oguelne7.googlevideo.com/mh/t1/mm/31%2C29/mn/sn-oguelne7%2Csn-oguesnzz/ms/au%2Crdu/mv/m/mvi/2/pl/55/tx/24027688/txs/24027687%2C24027688%2C24027689%2C24027690/hfr/all/as/fmp4_audio_clear%2Cwebm_audio_clear%2Cwebm2_audio_clear%2Cfmp4_sd_hd_clear%2Cwebm2_sd_hd_clear/initcwndbps/1267500/vprv/1/mt/1626229496/fvip/2/keepalive/yes/fexp/24001373%2C24007246/itag/0/sparams/expire%2Cei%2Cip%2Cid%2Csource%2Crequiressl%2Ctx%2Ctxs%2Chfr%2Cas%2Cvprv%2Citag/sig/AOq0QJ8wRAIgel_8rJx7O1ChqaQTDiBI5cysHmZ_4uCmgCWN_kPxy8cCIDfgAFVDYl5WO7a1gLifSDw6vBfjELblxSgkOodOm1am/lsparams/playback_host%2Cmh%2Cmm%2Cmn%2Cms%2Cmv%2Cmvi%2Cpl%2Cinitcwndbps/lsig/AG3C_xAwRAIgJVqT1xhW2KIhXVIj6cJRomDQ7-UOvq8yyC_J5r7ksfMCIBG0VgIJSIlNe49rl6ty6WA_DuH2AhKJvLOpq8fUBojv\"\r\n[dashsegments] Total fragments: 50\r\n[download] Destination: Sweet Candy \u2461-V_h3Z40AAtw.f248.webm\r\n[download] Got server HTTP error: HTTP Error 404: Not Found. Retrying fragment 1 (attempt 1 of 10) ...\r\n^C[debug] unlocking youtube_V_h3Z40AAtw.lock\r\n\r\nERROR: Interrupted by user\r\n```\r\n\r\n\r\n\r\n## Description\r\n[link to video](https://youtu.be/V_h3Z40AAtw)\r\n\r\nThe video itself plays on browser, and doesn't have 1080p as you can see it.\r\n\r\nBut yt-dlp (and youtube-dl) reports 1080p format, which possibly doesn't exist on the server. (format `248` on the video fails to download all segments.)\r\n\r\nResolutions shown in webpage here:\r\n![image](https://user-images.githubusercontent.com/10355528/125551541-4e14ac3a-0f66-40e2-9931-0a4f25e04750.png)\r\n\r\n__Edit:__ Tested web and android clients, some locations (JP, Vultr JP, OCI US?), with cookies or not, but all of them has this \"ghosty\" format", "pr_html_url": "https://github.com/yt-dlp/yt-dlp/pull/536", "file_loc": {"base_commit": "c84aeac6b5695e7e1ac629d17fc51eb68ab91bae", "files": [{"path": "README.md", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1340, 1341]}}}, {"path": "yt_dlp/downloader/youtube_live_chat.py", "status": "modified", "Loc": {"('YoutubeLiveChatFD', 'download_and_parse_fragment', 111)": {"mod": [119]}, "('YoutubeLiveChatFD', 'real_download', 22)": {"mod": [149, 158, 186]}}}, {"path": "yt_dlp/extractor/youtube.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [35, 39, 42], "mod": [31, 34, 38]}, "('YoutubeBaseInfoExtractor', None, 68)": {"add": [394, 404], "mod": [423, 424, 486, 522, 530, 531, 532]}, "('YoutubeIE', None, 756)": {"add": [1617, 1659], "mod": [1125, 1290, 1297, 1298, 1299, 1655, 1661, 1862, 1863, 1864, 1865, 2290, 2291, 2292, 2294, 2296, 2297, 2298, 2299, 2301, 2302, 2303, 2304, 2306, 2307, 2308, 2309, 2310, 2311, 2312, 2313, 2314, 2315]}, "('YoutubeIE', '_extract_player_url', 1693)": {"add": [1698], "mod": [1695]}, "('YoutubeIE', '_get_video_info_params', 2271)": {"add": [2279]}, "('YoutubeIE', '_real_extract', 2290)": {"add": [2574, 2600, 2611, 2642, 2829], "mod": [2317, 2318, 2320, 2321, 2322, 2323, 2324, 2325, 2326, 2327, 2329, 2330, 2331, 2332, 2333, 2334, 2335, 2336, 2337, 2339, 2340, 2341, 2342, 2343, 2345, 2346, 2347, 2348, 2349, 2350, 2352, 2353, 2354, 2355, 2356, 2357, 2358, 2359, 2360, 2361, 
2362, 2363, 2364, 2365, 2367, 2368, 2369, 2370, 2371, 2372, 2373, 2374, 2376, 2377, 2378, 2379, 2380, 2381, 2382, 2383, 2384, 2385, 2386, 2387, 2388, 2389, 2390, 2391, 2392, 2393, 2394, 2395, 2396, 2397, 2398, 2400, 2401, 2402, 2403, 2404, 2405, 2406, 2407, 2408, 2409, 2410, 2411, 2412, 2413, 2414, 2415, 2417, 2418, 2419, 2420, 2421, 2422, 2423, 2424, 2425, 2426, 2427, 2429, 2430, 2432, 2433, 2434, 2435, 2436, 2437, 2438, 2440, 2441, 2442, 2444, 2445, 2446, 2447, 2448, 2449, 2450, 2451, 2452, 2454, 2455, 2456, 2457, 2458, 2459, 2460, 2461, 2462, 2463, 2464, 2465, 2466, 2467, 2468, 2470, 2471, 2472, 2474, 2475, 2476, 2477, 2478, 2479, 2480, 2481, 2482, 2483, 2484, 2485, 2486, 2487, 2488, 2489, 2490, 2491, 2492, 2493, 2494, 2496, 2498, 2507, 2508, 2509, 2510, 2511, 2557, 2588, 2591, 2594, 2603, 2619, 2622, 2625, 2626, 2627, 2628, 2629, 2630, 2632, 2634, 2639, 2645, 2663, 2664, 2665, 2666, 2667, 2668, 2669, 2670, 2671, 2672, 2673, 2676, 2677, 2678, 2679, 2680, 2681, 2682, 2683, 2684, 2685, 2686, 2687, 2688, 2689, 2690, 2691, 2692, 2728, 2730, 2734, 2737, 2738, 2740, 2742, 2749, 2750, 2753, 2754, 2755, 2832, 2946, 2947, 2979, 2980, 2981, 2989, 2990, 2993, 2994, 3007, 3009]}, "('YoutubePlaylistIE', None, 4145)": {"add": [4167, 4195], "mod": [4190]}, "('YoutubeSearchURLIE', None, 4379)": {"add": [4387]}, "('YoutubeBaseInfoExtractor', '_call_api', 470)": {"mod": [476]}, "('YoutubeBaseInfoExtractor', '_extract_identity_token', 493)": {"mod": [494]}, "('YoutubeBaseInfoExtractor', '_generate_api_headers', 530)": {"mod": [535, 536, 541]}, "('YoutubeIE', '_comment_entries', 2040)": {"mod": [2125]}, "('YoutubeTabIE', None, 3014)": {"mod": [3290]}, "('YoutubeTabIE', '_entries', 3639)": {"mod": [3696]}, "('YoutubeTabIE', '_extract_from_tabs', 3779)": {"mod": [3846]}, "('YoutubeTabIE', '_extract_mix_playlist', 3854)": {"mod": [3856, 3857]}, "('YoutubeTabIE', '_reload_with_unavailable_videos', 3950)": {"mod": [3974, 3975]}, "('YoutubeTabIE', '_extract_webpage', 3989)": {"mod": [4002]}, "('YoutubeSearchIE', '_get_n_results', 4367)": {"mod": [4369]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["yt_dlp/extractor/youtube.py", "yt_dlp/downloader/youtube_live_chat.py"], "doc": ["README.md"], "test": [], "config": [], "asset": []}}, {"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "59ca996eb1b510cef7ae60a179c36ea7f353f71e", "iss_has_pr": 1, "iss_html_url": "https://github.com/deepfakes/faceswap/issues/197", "iss_label": "", "title": "Face on angles >180 degrees not recognized on extraction", "body": "Hi guys thanks for the amazing work here!\r\n\r\nI have been following the landmark detection dialogue #187 and have tried both hog and cnn with both face-alignment and face_recognition. I got face-alignment with cnn working great, with pytorch on win10 now. 
However, I noticed that none of the above are reliably able to identify faces where the face is pointing downwards, for example with the forehead pointing from 6 to 9 o'clock.\r\n\r\nI think all these algorithms tend to look for eyes being above the level of the mouth.\r\n\r\nFor example [Image Removed] this image would not be detected and extracted by hog or cnn in face-alignment or face_recognition.\r\n\r\nHowever, rotating it 90 deg to the right, so that the forehead is pointing up, gets it extracted.\r\n\r\nWould it be possible to have an argument set to resend the image for alignment but rotated if it was not caught the first time? \r\n\r\nI am ok with python and novice with git but could maybe even give it a try if someone points me to where the frame is passed for extraction.\r\n\r\nThanks!", "pr_html_url": "https://github.com/deepfakes/faceswap/pull/253", "file_loc": {"base_commit": "59ca996eb1b510cef7ae60a179c36ea7f353f71e", "files": [{"path": "lib/cli.py", "status": "modified", "Loc": {"('DirectoryProcessor', 'get_faces_alignments', 140)": {"add": [144]}, "(None, None, None)": {"mod": [9]}, "('DirectoryProcessor', None, 29)": {"mod": [157]}, "('DirectoryProcessor', 'get_faces', 157)": {"mod": [159]}}}, {"path": "lib/faces_detect.py", "status": "modified", "Loc": {"('DetectedFace', '__init__', 10)": {"add": [11]}, "(None, 'detect_faces', 3)": {"mod": [3, 7]}, "('DetectedFace', None, 9)": {"mod": [10]}}}, {"path": "lib/utils.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0, 32]}}}, {"path": "scripts/convert.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [9]}, "('ConvertImage', 'convert', 216)": {"mod": [229, 230]}}}, {"path": "scripts/extract.py", "status": "modified", "Loc": {"('ExtractTrainingData', 'add_optional_arguments', 22)": {"add": [68]}, "('ExtractTrainingData', None, 12)": {"add": [100]}, "('ExtractTrainingData', 'handleImage', 101)": {"add": [104, 117], "mod": [102, 106, 107]}, "(None, None, None)": {"mod": [7]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["lib/utils.py", "lib/faces_detect.py", "lib/cli.py", "scripts/convert.py", "scripts/extract.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "9438672b1cf80602fc93536670d9601d655377f5", "iss_html_url": "https://github.com/deepfakes/faceswap/issues/239", "iss_label": "", "title": "EOL Error when training", "body": "After pulling the latest commit today I am now getting the below error when trying to train.\r\n\r\n**Command**\r\npython faceswap.py train -A \"D:\\Fakes\\Data\\Dataset_A\\Faces\" -B \"D:\\Fakes\\Data\\Dataset_B\\Faces\" -m \"D:\\Fakes\\Model\" -p -s 100 -bs 80 -t LowMem\r\n\r\n**Error**\r\nTraceback (most recent call last):\r\n File \"faceswap.py\", line 12, in \r\n from scripts.convert import ConvertImage\r\n File \"D:\\Fakes\\faceswap\\scripts\\convert.py\", line 100\r\n help=\"Erosion kernel size. (Masked converter only).
Positive values apply erosion which reduces the edge \\\r\n ^\r\nSyntaxError: EOL while scanning string literal\r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/deepfakes/faceswap/commit/9438672b1cf80602fc93536670d9601d655377f5", "file_loc": {"base_commit": "9438672b1cf80602fc93536670d9601d655377f5", "files": [{"path": "scripts/convert.py", "status": "modified", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["scripts/convert.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "huggingface", "repo_name": "transformers", "base_commit": "5f7a07c0c867abedbb3ebf135915eeee56add24b", "iss_has_pr": 1, "iss_html_url": "https://github.com/huggingface/transformers/issues/9326", "iss_label": "", "title": "Issue with 'char_to_token()' function of DistilBertTokenizerFast ", "body": "## Environment info\r\n\r\n \r\n- `transformers` version: 4.0.1\r\n- Platform: Google Colab\r\n- Python version: 3.8\r\n- PyTorch version (GPU?):\r\n- Tensorflow version (GPU?): 2.4.0\r\n- Using GPU in script?: No\r\n- Using distributed or parallel set-up in script?: NA\r\n\r\n### Who can help: **tokenizers: @mfuntowicz**\r\n\r\n## Information\r\n\r\nModel I am using DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased') to tokenize Squad 2.0 train and validate dataset. \r\n\r\nThe problem arises when using below code snippet to add_token_positions (start and end position) as below from https://huggingface.co/transformers/custom_datasets.html:\r\n\r\n_def add_token_positions(encodings, answers):\r\n start_positions = []\r\n end_positions = []\r\n for i in range(len(answers)):\r\n start_positions.append(**encodings.char_to_token(i, answers[i]['answer_start'])**)\r\n end_positions.append(**encodings.char_to_token(i, answers[i]['answer_end'] - 1**))\r\n # if None, the answer passage has been truncated\r\n if start_positions[-1] is None:\r\n start_positions[-1] = tokenizer.model_max_length\r\n if end_positions[-1] is None:\r\n end_positions[-1] = tokenizer.model_max_length\r\n encodings.update({'start_positions': start_positions, 'end_positions': end_positions})\r\n\r\nadd_token_positions(train_encodings, train_answers)\r\nadd_token_positions(val_encodings, val_answers)_\r\n\r\n\r\n\r\n\r\nThe tasks I am working on is:\r\n*Training model on SQUaD 2.0 using code given on https://huggingface.co/transformers/custom_datasets.html#question-answering-with-squad-2-0\r\n\r\n## To reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1. 
Follow the steps given on https://huggingface.co/transformers/custom_datasets.html#question-answering-with-squad-2-0 and then verify start and end position outcome using below code snippet in Expected behavior\r\n\r\n\r\n\r\n\r\n## Expected behavior:\r\n- Start and End position are being defined using above code snippet which will be provided as training/validation data to model but end position is not derived as correct value due to some issue with char_to_token() function which is used to find out end position.\r\n- Please find below snippet for verification that answer using start and end position after tokenization is not matching with actual answer.\r\n- So the training data which is being fed to model after tokenization is incorrect\r\n\r\nidx=8\r\nprint(f'Actual context: {train_contexts[idx]}')\r\nprint(f'Actual question: {train_questions[idx]}')\r\nprint(f\"Actual answer: {train_answers[idx]['text']}\")\r\n\r\nstart_position=train_encodings.char_to_token(idx,train_answers[idx]['answer_start'])\r\nend_position =train_encodings.char_to_token(idx,train_answers[idx]['answer_end'])\r\nprint(f\"Answer after tokenization: {tokenizer.convert_ids_to_tokens(train_encodings['input_ids'][idx][start_position:end_position])}\") \r\n\r\nOUTPUT:\r\n**Actual context:** Beyonc\u00e9 Giselle Knowles-Carter (/bi\u02d0\u02c8j\u0252nse\u026a/ bee-YON-say) (born September 4, 1981) is an American singer, songwriter, record producer and actress. Born and raised in Houston, Texas, she performed in various singing and dancing competitions as a child, and rose to fame in the late 1990s as lead singer of R&B girl-group Destiny's Child. Managed by her father, Mathew Knowles, the group became one of the world's best-selling girl groups of all time. Their hiatus saw the release of Beyonc\u00e9's debut album, Dangerously in Love (2003), which established her as a solo artist worldwide, earned five Grammy Awards and featured the Billboard Hot 100 number-one singles \"Crazy in Love\" and \"Baby Boy\".\r\n**Actual question:** When did Beyonc\u00e9 rise to fame?\r\n**Actual answer:** late 1990s\r\n**Answer after tokenization:** ['late', '1990s', 'as', 'lead', 'singer', 'of', 'r', '&', 'b', 'girl', '-', 'group', 'destiny', \"'\", 's', 'child', '.', 'managed', 'by', 'her', 'father', ',', 'mathew', 'knowles', ',', 'the', 'group', 'became', 'one', 'of', 'the', 'world', \"'\", 's', 'best', '-', 'selling', 'girl', 'groups', 'of', 'all', 'time', '.', 'their', 'hiatus', 'saw', 'the', 'release', 'of', 'beyonce', \"'\", 's', 'debut', 'album', ',', 'dangerously', 'in', 'love', '(', '2003', ')', ',', 'which', 'established', 'her', 'as', 'a', 'solo', 'artist', 'worldwide', ',', 'earned', 'five', 'grammy', 'awards', 'and', 'featured', 'the', 'billboard', 'hot', '100', 'number', '-', 'one', 'singles', '\"', 'crazy', 'in', 'love', '\"', 'and', '\"', 'baby', 'boy', '\"', '.', '[SEP]', 'when', 'did', 'beyonce', 'rise', 'to', 'fame', '?', '[SEP]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', 
'[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]']", "pr_html_url": "https://github.com/huggingface/transformers/pull/9378", "file_loc": {"base_commit": "5f7a07c0c867abedbb3ebf135915eeee56add24b", "files": [{"path": "docs/source/custom_datasets.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [564], "mod": [561, 562, 566]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": ["docs/source/custom_datasets.rst"], "test": [], "config": [], "asset": []}}, {"organization": "zylon-ai", "repo_name": "private-gpt", "base_commit": "2940f987c0996fe083d1777bdc117fc28c576c08", "iss_has_pr": 1, "iss_html_url": "https://github.com/zylon-ai/private-gpt/issues/1007", "iss_label": "bug\nprimordial", "title": "running ingest throws attribute error module 'chromadb' has no attribute 'PersistentClient'", "body": "```\r\n(privategpt-py3.11) (base) \u279c privateGPT git:(main) \u2717 python ingest.py\r\nTraceback (most recent call last):\r\n File 
\"/Volumes/Projects/privateGPT/ingest.py\", line 169, in \r\n main()\r\n File \"/Volumes/Projects/privateGPT/ingest.py\", line 146, in main\r\n chroma_client = chromadb.PersistentClient(settings=CHROMA_SETTINGS , path=persist_directory)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^\r\nAttributeError: module 'chromadb' has no attribute 'PersistentClient'\r\n\r\n```\r\n\r\n.env file:\r\n\r\n```\r\nPERSIST_DIRECTORY=db\r\nMODEL_TYPE=GPT4All\r\nMODEL_PATH=models/ggml-gpt4all-j-v1.3-groovy.bin\r\nEMBEDDINGS_MODEL_NAME=all-MiniLM-L6-v2\r\nMODEL_N_CTX=1000\r\nMODEL_N_BATCH=8\r\nTARGET_SOURCE_CHUNKS=4\r\n```\r\n\r\n**Environment (please complete the following information):**\r\n - OS / hardware: macOS 13.5.1\r\n - Python version 3.11.5\r\n\r\nAny idea what's wrong here or how to solve it?", "pr_html_url": "https://github.com/zylon-ai/private-gpt/pull/1015", "file_loc": {"base_commit": "2940f987c0996fe083d1777bdc117fc28c576c08", "files": [{"path": "pyproject.toml", "status": "modified", "Loc": {"(None, None, None)": {"mod": [11, 12, 13, 14, 15, 16, 18, 19, 23]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": [], "doc": [], "test": [], "config": ["pyproject.toml"], "asset": []}}, {"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "bbb9645f7c60c35177922d10ccc7ed4b90d261c3", "iss_has_pr": 1, "iss_html_url": "https://github.com/geekan/MetaGPT/issues/979", "iss_label": "", "title": "`utils.text.reduce_message_length` Not reducing text length", "body": "**Bug description**\r\nI came across this issue.\r\n```python\r\n File \"/Users/azure/Documents/Workspace/Datasci/lib/python3.10/site-packages/metagpt/utils/text.py\", line 31, in reduce_message_length\r\n raise RuntimeError(\"fail to reduce message length\")\r\nRuntimeError: fail to reduce message length\r\n```\r\n\r\n**Bug solved method**\r\nDigging into the code, I assume `utils.text.reduce_message_length()` only checks whether the message is already short enough.\r\nIf it's too long, it simply raises an exception instead of shortening it.\r\nFollowing is the code in `utils.text.reduce_message_length()`\r\n```python\r\ndef reduce_message_length(\r\n msgs: Generator[str, None, None],\r\n model_name: str,\r\n system_text: str,\r\n reserved: int = 0,\r\n) -> str:\r\n max_token = TOKEN_MAX.get(model_name, 2048) - count_string_tokens(system_text, model_name) - reserved\r\n for msg in msgs:\r\n if count_string_tokens(msg, model_name) < max_token or model_name not in TOKEN_MAX:\r\n return msg\r\n\r\n raise RuntimeError(\"fail to reduce message length\")\r\n``` \r\n\r\n- LLM type and model name:\r\n- System version: MetaGPT 0.7.4\r\n- Python version: Python 3.10.13\r\n\r\nIs it a feature that is not implemented yet, or can I try to create a PR to fix it?", "pr_html_url": "https://github.com/geekan/MetaGPT/pull/986", "file_loc": {"base_commit": "bbb9645f7c60c35177922d10ccc7ed4b90d261c3", "files": [{"path": "metagpt/actions/research.py", "status": "modified", "Loc": {"('CollectLinks', 'run', 94)": {"mod": [137]}}}, {"path": "metagpt/config2.py", "status": "modified", "Loc": {"('Config', 'default', 88)": {"mod": [95]}}}, {"path": "metagpt/utils/token_counter.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [142, 158, 161], "mod": [144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157]}, "(None, 'count_message_tokens', 182)": {"mod": [182, 212, 213]}}}]}, "own_code_loc": [], "ass_file_loc": [],
"other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["metagpt/actions/research.py", "metagpt/config2.py", "metagpt/utils/token_counter.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "c56bce482db698c7c7e7b583b8b2e08a211eb48b", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/10463", "iss_label": "API", "title": "Toward a consistent API for NearestNeighbors & co", "body": "### Estimators relying on `NearestNeighbors` (NN), and their related params:\r\n`params = (algorithm, leaf_size, metric, p, metric_params, n_jobs)`\r\n\r\n**sklearn.neighbors:**\r\n- `NearestNeighbors(n_neighbors, radius, *params)`\r\n- `KNeighborsClassifier(n_neighbors, *params)`\r\n- `KNeighborsRegressor(n_neighbors, *params)`\r\n- `RadiusNeighborsClassifier(radius, *params)`\r\n- `RadiusNeighborsRegressor(radius, *params)`\r\n- `LocalOutlierFactor(n_neighbors, *params)`\r\n- ~`KernelDensity(algorithm, metric, leaf_size, metric_params)`\r\n\r\n**sklearn.manifold:**\r\n- `TSNE(method=\"barnes_hut\", metric)`\r\n- `Isomap(n_neighbors, neighbors_algorithm, n_jobs)`\r\n- `LocallyLinearEmbedding(n_neighbors, neighbors_algorithm, n_jobs)`\r\n- `SpectralEmbedding(affinity='nearest_neighbors', n_neighbors, n_jobs)`\r\n\r\n**sklearn.cluster:**\r\n- `SpectralClustering(affinity='nearest_neighbors', n_neighbors, n_jobs)`\r\n- `DBSCAN(eps, *params)`\r\n\r\n### How do they call `NearestNeighbors` ?\r\n- Inherit from `NeighborsBase._fit`: NearestNeighbors, KNeighborsClassifier, KNeighborsRegressor, RadiusNeighborsClassifier, RadiusNeighborsRegressor, LocalOutlierFactor\r\n- Call `BallTree/KDTree(X)`: KernelDensity\r\n- Call `kneighbors_graph(X)`: SpectralClustering, SpectralEmbedding\r\n- Call `NearestNeighbors().fit(X)`: TSNE, DBSCAN, Isomap, kneighbors_graph\r\n\r\n### Do they handle other form of input X?\r\n- Handle precomputed distances matrix, with (metric/affinity='precomputed'): TSNE, DBSCAN, SpectralEmbedding, SpectralClustering\r\n- Handle `KNeighborsMixin` object: kneighbors_graph\r\n- Handle `NeighborsBase` object: all estimators inheriting NeighborsBase + UnsupervisedMixin\r\n- Handle `BallTree/KDTree` object: all estimators inheriting NeighborsBase + SupervisedFloatMixin/SupervisedIntegerMixin\r\n___\r\n### Issues:\r\n1. We don't have all NN parameters in all classes (e.g. `n_jobs` in TSNE).\r\n2. We can't give a custom NN estimators to these classes. (PR #3922 #8999)\r\n3. The handle of input X as a `NearestNeighbors/BallTree/KDTree` object is not consistent, and not well documented. Sometimes it is documented but does not work (e.g. Isomap), or not well documented but it does work (e.g. LocalOutlierFactor). Most classes almost handle it since `NearestNeighbors().fit(NearestNeighbors().fit(X))` works, but a call to `check_array(X)` prevents it (e.g. Isomap, DBSCAN, SpectralEmbedding, SpectralClustering, LocallyLinearEmbedding, TSNE).\r\n4. The handle of X as a precomputed distances matrix is not consistent, and sometimes does not work with sparse matrices (as given by `kneighbors_graph`) (e.g. TSNE #9691).\r\n\r\n### Proposed solutions:\r\n\r\nA. We could generalize the use of precomputed distances matrix, and use pipelines to chain `NearestNeighbors` with other estimators. Yet it might not be possible/efficient for some estimators. 
In this case one would have to adapt the estimators to allow for the following: `Estimator(neighbors='precomputed').fit(distance_matrix, y)`\r\n\r\nB. We could improve the checking of X to enable more widely having X as a `NearestNeighbors/BallTree/KDTree` fitted object. The changes would probably be limited; however, this solution is not pipeline-friendly.\r\n\r\nC. To be pipeline-friendly, a custom `NearestNeighbors` object could be passed in the params, unfitted. We could then put all NN-related parameters in this estimator parameter, and allow custom estimators with a clear API. This is essentially what is proposed in #8999.", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/10482", "file_loc": {"base_commit": "c56bce482db698c7c7e7b583b8b2e08a211eb48b", "files": [{"path": "doc/glossary.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [699]}}}, {"path": "doc/modules/classes.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [1236, 1239]}}}, {"path": "doc/modules/neighbors.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [511]}}}, {"path": "doc/whats_new/v0.22.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [71, 316, 399]}}}, {"path": "sklearn/cluster/dbscan_.py", "status": "modified", "Loc": {"(None, 'dbscan', 23)": {"mod": [54, 55]}, "('DBSCAN', None, 147)": {"mod": [175, 176]}, "('DBSCAN', 'fit', 284)": {"mod": [322, 323, 331, 332, 333, 334, 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347]}}}, {"path": "sklearn/cluster/spectral.py", "status": "modified", "Loc": {"('SpectralClustering', 'fit', 448)": {"add": [481], "mod": [471]}, "(None, None, None)": {"mod": [16]}, "('SpectralClustering', None, 275)": {"mod": [329, 330, 331, 332]}, "('SpectralClustering', '_pairwise', 532)": {"mod": [533]}}}, {"path": "sklearn/cluster/tests/test_dbscan.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [97]}}}, {"path": "sklearn/cluster/tests/test_spectral.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [19, 104]}}}, {"path": "sklearn/manifold/_utils.pyx", "status": "modified", "Loc": {"(None, None, None)": {"mod": [16, 17, 23, 27, 28, 30, 31, 32, 33, 49, 64, 65, 67, 68, 88, 97]}}}, {"path": "sklearn/manifold/isomap.py", "status": "modified", "Loc": {"('Isomap', None, 15)": {"add": [66, 140], "mod": [61, 76, 77]}, "('Isomap', '__init__', 105)": {"add": [115], "mod": [107]}, "('Isomap', '_fit_transform', 117)": {"add": [120, 130], "mod": [118, 123]}, "(None, None, None)": {"mod": [9]}, "('Isomap', 'fit', 165)": {"mod": [170, 172]}, "('Isomap', 'fit_transform', 184)": {"mod": [189]}, "('Isomap', 'transform', 202)": {"mod": [215, 219, 221, 224, 225, 228, 229]}}}, {"path": "sklearn/manifold/locally_linear.py", "status": "modified", "Loc": {"(None, 'barycenter_kneighbors_graph', 67)": {"mod": [102]}}}, {"path": "sklearn/manifold/spectral_embedding_.py", "status": "modified", "Loc": {"('SpectralEmbedding', '_get_affinity_matrix', 458)": {"add": [479]}, "(None, None, None)": {"mod": [22]}, "(None, 'spectral_embedding', 135)": {"mod": [160]}, "('SpectralEmbedding', None, 353)": {"mod": [372, 373, 374]}, "('SpectralEmbedding', '_pairwise', 455)": {"mod": [456]}, "('SpectralEmbedding', 'fit', 505)": {"mod": [510, 515, 525, 529, 530]}, "('SpectralEmbedding', 'fit_transform', 545)": {"mod": [550, 555]}}}, {"path": "sklearn/manifold/t_sne.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [21], "mod": [14, 17]}, "('TSNE', '_fit', 640)": {"add": [666], "mod": [641, 643, 644,
645, 646, 648, 649, 650, 651, 652, 653, 654, 655, 656, 658, 659, 660, 661, 662, 673, 674, 675, 676, 677, 678, 679, 680, 681, 682, 683, 684, 685, 686, 687, 688, 689, 733, 737, 740, 743, 753, 754, 757, 758, 769, 772, 773]}, "(None, '_joint_probabilities', 31)": {"mod": [56]}, "(None, '_joint_probabilities_nn', 63)": {"mod": [63, 73, 74, 76, 77, 93, 94, 95, 97, 102, 103]}, "('TSNE', 'fit_transform', 864)": {"mod": [872]}, "('TSNE', 'fit', 885)": {"mod": [894]}}}, {"path": "sklearn/manifold/tests/test_isomap.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [3, 116]}}}, {"path": "sklearn/manifold/tests/test_spectral_embedding.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [14]}, "(None, 'test_spectral_embedding_precomputed_affinity', 128)": {"mod": [128, 136, 137]}, "(None, 'test_spectral_embedding_callable_affinity', 143)": {"mod": [143, 155, 156]}}}, {"path": "sklearn/manifold/tests/test_t_sne.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [10, 321], "mod": [9]}, "(None, 'test_binary_search', 104)": {"mod": [107, 108, 109, 110, 112, 113]}, "(None, 'test_binary_search_neighbors', 120)": {"mod": [127, 128, 129, 130, 131, 132, 135, 136, 137, 138, 139, 140, 141, 142, 143, 145, 146, 148, 149, 150, 151, 152, 153, 154]}, "(None, 'test_binary_perplexity_stability', 162)": {"mod": [166, 169, 170, 171, 172, 174, 175, 177, 178, 179]}, "(None, 'test_fit_csr_matrix', 265)": {"mod": [265, 272]}, "(None, 'test_non_square_precomputed_distances', 316)": {"mod": [316, 317, 319, 320]}, "(None, 'test_non_positive_precomputed_distances', 323)": {"mod": [323, 324, 325, 326, 327, 328, 329]}, "(None, 'test_no_sparse_on_barnes_hut', 566)": {"mod": [566, 567, 568, 569, 570, 571, 572, 573, 574]}, "(None, 'test_barnes_hut_angle', 609)": {"mod": [619, 620, 621, 622, 628, 629, 630, 631, 632, 633, 634, 635, 636, 637]}}}, {"path": "sklearn/neighbors/__init__.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [9, 23, 27]}}}, {"path": "sklearn/neighbors/base.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [105], "mod": [29]}, "('NeighborsBase', '_fit', 164)": {"add": [194, 200, 206, 235, 239], "mod": [209]}, "('KNeighborsMixin', 'kneighbors', 339)": {"add": [429, 483], "mod": [345, 346, 360, 364, 409, 417, 418, 422, 424, 425, 428, 435, 436, 438, 459, 467, 468, 469, 470, 471, 474, 480, 482, 494, 497, 498, 499]}, "('KNeighborsMixin', 'kneighbors_graph', 502)": {"add": [564, 575], "mod": [508, 509, 525, 550, 551, 552, 553, 554, 555, 557, 558, 559, 563, 577]}, "('RadiusNeighborsMixin', 'radius_neighbors_graph', 787)": {"add": [808], "mod": [795, 811, 832, 833, 835, 846, 853, 862]}, "(None, '_tree_query_parallel_helper', 292)": {"mod": [292, 298]}, "(None, '_tree_query_radius_parallel_helper', 582)": {"mod": [582, 588]}, "('RadiusNeighborsMixin', None, 591)": {"mod": [628, 787]}, "('RadiusNeighborsMixin', 'radius_neighbors', 628)": {"mod": [650, 654, 659, 698, 706, 718, 723, 724, 727, 728, 729, 732, 734, 753, 754, 758, 759, 761, 772, 781, 784]}}}, {"path": "sklearn/neighbors/classification.py", "status": "modified", "Loc": {"('KNeighborsClassifier', None, 26)": {"add": [76]}, "('RadiusNeighborsClassifier', None, 252)": {"add": [305]}, "('KNeighborsClassifier', 'predict', 155)": {"mod": [160, 161, 166, 179, 182]}, "('KNeighborsClassifier', 'predict_proba', 197)": {"mod": [202, 203, 208, 223, 233]}, "('RadiusNeighborsClassifier', 'predict', 446)": {"mod": [451, 452, 457, 469, 470, 471]}, "('RadiusNeighborsClassifier', 'predict_proba', 489)": {"mod": 
[494, 495, 500, 507, 510, 538]}}}, {"path": "sklearn/neighbors/graph.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [3, 7, 8]}, "(None, 'radius_neighbors_graph', 108)": {"add": [184], "mod": [146, 148, 149, 159, 183]}, "(None, '_query_include_self', 24)": {"mod": [24, 26, 27, 28, 29, 31]}, "(None, 'kneighbors_graph', 34)": {"mod": [68, 70, 71, 81, 104]}}}, {"path": "sklearn/neighbors/lof.py", "status": "modified", "Loc": {"('LocalOutlierFactor', None, 19)": {"mod": [63, 64, 121]}, "('LocalOutlierFactor', 'fit', 219)": {"mod": [242, 250, 251]}, "('LocalOutlierFactor', '_predict', 299)": {"mod": [323]}, "('LocalOutlierFactor', '_local_reachability_density', 470)": {"mod": [478, 482, 488]}}}, {"path": "sklearn/neighbors/regression.py", "status": "modified", "Loc": {"('KNeighborsRegressor', None, 24)": {"add": [80]}, "('RadiusNeighborsRegressor', None, 194)": {"add": [251]}, "(None, None, None)": {"mod": [16]}, "('KNeighborsRegressor', 'predict', 149)": {"mod": [154, 155, 160, 163, 164, 165, 166, 167]}, "('RadiusNeighborsRegressor', 'predict', 313)": {"mod": [318, 319, 324]}}}, {"path": "sklearn/neighbors/tests/test_neighbors.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [2, 10, 11, 16, 190, 823], "mod": [7]}, "(None, 'test_k_and_radius_neighbors_duplicates', 1297)": {"add": [1320]}, "(None, 'test_radius_neighbors_predict_proba', 1485)": {"add": [1500]}, "(None, 'test_precomputed', 136)": {"mod": [136, 139, 142, 143, 144, 178, 179, 180, 181, 182]}, "(None, 'test_kneighbors_regressor_sparse', 824)": {"mod": [849, 850, 851, 852]}}}, {"path": "sklearn/neighbors/unsupervised.py", "status": "modified", "Loc": {"('NearestNeighbors', None, 9)": {"mod": [43, 44, 46, 47, 48, 49, 50, 52, 54, 56, 57, 59, 60, 61, 62, 63, 65, 66]}}}, {"path": "sklearn/utils/estimator_checks.py", "status": "modified", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["sklearn/neighbors/unsupervised.py", "sklearn/neighbors/regression.py", "sklearn/manifold/t_sne.py", "sklearn/neighbors/__init__.py", "sklearn/neighbors/base.py", "sklearn/manifold/locally_linear.py", "sklearn/manifold/_utils.pyx", "sklearn/cluster/dbscan_.py", "sklearn/manifold/spectral_embedding_.py", "sklearn/cluster/spectral.py", "sklearn/manifold/isomap.py", "sklearn/neighbors/lof.py", "sklearn/neighbors/classification.py", "sklearn/neighbors/graph.py", "sklearn/utils/estimator_checks.py"], "doc": ["doc/modules/neighbors.rst", "doc/glossary.rst", "doc/modules/classes.rst", "doc/whats_new/v0.22.rst"], "test": ["sklearn/manifold/tests/test_spectral_embedding.py", "sklearn/neighbors/tests/test_neighbors.py", "sklearn/cluster/tests/test_spectral.py", "sklearn/cluster/tests/test_dbscan.py", "sklearn/manifold/tests/test_isomap.py", "sklearn/manifold/tests/test_t_sne.py"], "config": [], "asset": []}}, {"organization": "python", "repo_name": "cpython", "base_commit": "55d50d147c953fab37b273bca9ab010f40e067d3", "iss_has_pr": 1, "iss_html_url": "https://github.com/python/cpython/issues/102500", "iss_label": "type-feature\ntopic-typing\n3.12", "title": "Implement PEP 688: Making the buffer protocol accessible in Python", "body": "PEP-688 has just been accepted. 
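For context, a sketch of what the PEP enables at the Python level (added for illustration; it assumes the API as described in the PEP and shipped in Python 3.12, and the `Chunk` class is hypothetical):

```python
from collections.abc import Buffer  # new in Python 3.12 (PEP 688)

class Chunk:
    """Pure-Python class exposing its bytes via the buffer protocol."""

    def __init__(self, data: bytes) -> None:
        self._data = bytearray(data)

    # PEP 688 makes the buffer protocol implementable from Python code.
    def __buffer__(self, flags: int) -> memoryview:
        return memoryview(self._data)

# Buffer matches structurally: anything implementing __buffer__ qualifies.
assert isinstance(b"abc", Buffer)
assert isinstance(Chunk(b"abc"), Buffer)
assert bytes(memoryview(Chunk(b"xyz"))) == b"xyz"
```
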
I will use this issue to track its implementation in CPython.\r\n\r\n\r\n\r\n\r\n### Linked PRs\r\n* gh-102521\r\n* gh-102571\r\n* gh-104174\r\n* gh-104281\r\n* gh-104288\r\n* gh-104317\r\n\r\n", "pr_html_url": "https://github.com/python/cpython/pull/102521", "file_loc": {"base_commit": "55d50d147c953fab37b273bca9ab010f40e067d3", "files": [{"path": "Include/internal/pycore_global_objects_fini_generated.h", "status": "modified", "Loc": {"(None, '_PyStaticObjects_CheckRefcnt', 24)": {"add": [595, 694, 1124]}}}, {"path": "Include/internal/pycore_global_strings.h", "status": "modified", "Loc": {"(None, None, None)": {"add": [83, 182, 612]}}}, {"path": "Include/internal/pycore_runtime_init_generated.h", "status": "modified", "Loc": {"(None, None, None)": {"add": [589, 688, 1118]}}}, {"path": "Include/internal/pycore_typeobject.h", "status": "modified", "Loc": {"(None, None, None)": {"add": [140]}}}, {"path": "Include/internal/pycore_unicodeobject_generated.h", "status": "modified", "Loc": {"(None, '_PyUnicode_InitStaticStrings', 12)": {"add": [98, 395, 1685]}}}, {"path": "Include/pybuffer.h", "status": "modified", "Loc": {"(None, None, None)": {"mod": [107]}}}, {"path": "Lib/_collections_abc.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [441], "mod": [52]}}}, {"path": "Lib/inspect.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [45, 3314]}}}, {"path": "Lib/test/test_buffer.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [19, 4440]}}}, {"path": "Lib/test/test_collections.py", "status": "modified", "Loc": {"('TestCollectionABCs', None, 1416)": {"add": [1951]}, "(None, None, None)": {"mod": [28]}}}, {"path": "Lib/test/test_doctest.py", "status": "modified", "Loc": {"('test_DocTestFinder', 'non_Python_modules', 700)": {"mod": [710]}}}, {"path": "Modules/Setup.stdlib.in", "status": "modified", "Loc": {"(None, None, None)": {"mod": [172]}}}, {"path": "Modules/_testcapi/parts.h", "status": "modified", "Loc": {"(None, None, None)": {"add": [40]}}}, {"path": "Modules/_testcapimodule.c", "status": "modified", "Loc": {"(None, 'PyInit__testcapi', 4162)": {"add": [4312]}}}, {"path": "Objects/clinic/memoryobject.c.h", "status": "modified", "Loc": {"(None, None, None)": {"add": [64], "mod": [359]}}}, {"path": "Objects/memoryobject.c", "status": "modified", "Loc": {"(None, None, 783)": {"add": [807], "mod": [795]}, "(None, None, None)": {"add": [970, 3186], "mod": [780]}, "(None, '_PyManagedBuffer_FromObject', 88)": {"mod": [88]}, "(None, None, 87)": {"mod": [96]}, "(None, 'PyMemoryView_FromObject', 784)": {"mod": [784]}, "(None, None, 838)": {"mod": [854]}}}, {"path": "Objects/object.c", "status": "modified", "Loc": {"(None, None, None)": {"add": [16, 2075]}}}, {"path": "Objects/typeobject.c", "status": "modified", "Loc": {"(None, None, None)": {"add": [8, 8061, 8897, 8964, 8983, 9064]}, "(None, None, 9203)": {"mod": [9211, 9212]}}}, {"path": "PCbuild/_testcapi.vcxproj", "status": "modified", "Loc": {"(None, None, None)": {"add": [112]}}}, {"path": "PCbuild/_testcapi.vcxproj.filters", "status": "modified", "Loc": {"(None, None, None)": {"add": [62]}}}, {"path": "Tools/build/generate_global_objects.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [123]}}}, {"path": "Tools/c-analyzer/cpython/globals-to-fix.tsv", "status": "modified", "Loc": {"(None, None, None)": {"add": [88]}}}, {"path": "Tools/c-analyzer/cpython/ignored.tsv", "status": "modified", "Loc": {"(None, None, None)": {"add": [406]}}}]}, "own_code_loc": [], "ass_file_loc": [], 
"other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["Objects/typeobject.c", "Lib/inspect.py", "Tools/build/generate_global_objects.py", "Include/internal/pycore_global_objects_fini_generated.h", "Tools/c-analyzer/cpython/ignored.tsv", "Objects/clinic/memoryobject.c.h", "Lib/_collections_abc.py", "Tools/c-analyzer/cpython/globals-to-fix.tsv", "Include/internal/pycore_typeobject.h", "Include/internal/pycore_runtime_init_generated.h", "Modules/_testcapimodule.c", "Objects/memoryobject.c", "Modules/_testcapi/parts.h", "Include/pybuffer.h", "Include/internal/pycore_global_strings.h", "Include/internal/pycore_unicodeobject_generated.h", "Objects/object.c"], "doc": [], "test": ["Lib/test/test_doctest.py", "Lib/test/test_buffer.py", "Lib/test/test_collections.py"], "config": [], "asset": ["PCbuild/_testcapi.vcxproj.filters", "PCbuild/_testcapi.vcxproj", "Modules/Setup.stdlib.in"]}}, {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "fd55f62207bbbb18d7758c8e2ef46fe9115eb2c5", "iss_html_url": "https://github.com/scrapy/scrapy/issues/5400", "iss_label": "bug\nCI", "title": "Tests broken with Twisted 22.1.0", "body": "`ImportError: cannot import name 'PayloadResource' from 'twisted.web.test.test_webclient'`\r\n\r\n`ImportError: cannot import name 'ForeverTakingResource' from 'twisted.web.test.test_webclient'`", "code": null, "pr_html_url": "https://github.com/scrapy/scrapy/pull/5405", "commit_html_url": null, "file_loc": {"base_commit": "fd55f62207bbbb18d7758c8e2ef46fe9115eb2c5", "files": [{"path": "pytest.ini", "status": "modified", "Loc": {"(None, None, 24)": {"mod": [24, 25]}}}, {"path": "tests/mockserver.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [17, 20]}, "('LeafResource', None, 38)": {"mod": [38]}, "('Root', None, 178)": {"mod": [178]}, "('Root', '__init__', 180)": {"mod": [181, 190]}}}, {"path": "tests/test_downloader_handlers.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [18, 19, 37]}}}, {"path": "tests/test_webclient.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [24, 25, 26, 27, 28, 29, 30, 31, 39]}}}, {"path": "tox.ini", "status": "modified", "Loc": {"(None, None, 22)": {"mod": [22, 23]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["tests/mockserver.py"], "doc": [], "test": ["tests/test_webclient.py", "tests/test_downloader_handlers.py"], "config": ["pytest.ini", "tox.ini"], "asset": []}}, {"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "04081f810270712ba3a69577c47e5dcfa850fa90", "iss_has_pr": 1, "iss_html_url": "https://github.com/ultralytics/yolov5/issues/1355", "iss_label": "bug", "title": "The exported label txt seems have problem", "body": "Hi, @glenn-jocher i manage to use `python detect.py --save-txt` to semi-auto label images, but when i set `Open Dir` and `Change Save Dir` in [labelImg](https://github.com/tzutalin/labelImg/releases/tag/v1.8.1)\uff0cthe labelImg can not display the exported bbox, and its command line window appears error:\r\n```\r\nTraceback (most recent call last):\r\n File \"\", line 1268, in openNextImg\r\n File \"\", line 1035, in loadFile\r\n File \"\", line 1427, in loadYOLOTXTByFilename\r\n File \"Z:\\home\\darrenl\\tmp\\labelImg\\build-tools\\build\\labelImg\\out00-PYZ.pyz\\libs.yolo_io\", line 112, in 
__init__\r\n File \"Z:\\home\\darrenl\\tmp\\labelImg\\build-tools\\build\\labelImg\\out00-PYZ.pyz\\libs.yolo_io\", line 142, in parseYoloFormat\r\nValueError: too many values to unpack\r\n```\r\nIf I set `Change Save Dir` to another empty folder, the error does not occur, so I suspect the problem is in the exported label txt. Could you take a look?", "pr_html_url": "https://github.com/ultralytics/yolov5/pull/1377", "file_loc": {"base_commit": "04081f810270712ba3a69577c47e5dcfa850fa90", "files": [{"path": ".github/workflows/ci-testing.yml", "status": "modified", "Loc": {"(None, None, None)": {"mod": [69, 72]}}}, {"path": "README.md", "status": "modified", "Loc": {"(None, None, None)": {"mod": [99]}}}, {"path": "detect.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [164], "mod": [13, 159, 160]}, "(None, 'detect', 17)": {"mod": [18, 19, 24, 25, 26, 27]}}}, {"path": "test.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [16, 282, 290, 291, 308]}, "(None, 'test', 20)": {"mod": [49, 50, 51, 52]}}}, {"path": "train.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [417], "mod": [30, 413, 414, 431, 433, 443, 469, 470, 517]}, "(None, 'train', 37)": {"mod": [39, 40, 41, 44, 45, 46, 49, 51, 123, 124, 191, 218, 299, 324, 372, 381]}}}, {"path": "tutorial.ipynb", "status": "modified", "Loc": {"(None, None, None)": {"mod": [600, 614, 890, 972, 989, 990, 1033, 1042, 1043, 1044, 1081, 1173, 1175]}}}, {"path": "utils/general.py", "status": "modified", "Loc": {"(None, 'get_latest_run', 63)": {"mod": [63]}, "(None, 'increment_dir', 954)": {"mod": [954, 955, 956, 957, 958, 959, 960, 962, 964, 965, 966, 967, 968, 969, 970]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["tutorial.ipynb", "utils/general.py", "detect.py", "train.py"], "doc": ["README.md"], "test": ["test.py"], "config": [".github/workflows/ci-testing.yml"], "asset": []}}, {"organization": "psf", "repo_name": "requests", "base_commit": "be62645dd56580dd7576032b348cf79d880851d8", "iss_has_pr": 1, "iss_html_url": "https://github.com/psf/requests/issues/1088", "iss_label": "Feature Request", "title": "Session pickling support is broken and tests for it are removed", "body": "The commit 42b029552190f6639642d0f62d27abcd1ceed51e removes the `__attrs__` attribute of the `Session` class, which is used in the pickle protocol's `__getstate__` method.\n\nThe tests that are testing this functionality (functions `test_session_pickling` and `test_unpickled_session_requests` in the once present `tests/test_requests.py`) are also removed.\n\nThe commit messages don't seem to indicate any reason for this, and I can't find anything searching in the issues.\n\nIf it is intended that pickling of Session objects not be supported, could you give the reason? And maybe the `__getstate__` and `__setstate__` methods should be removed too, as they might send the wrong message.\n\nIf this is unintended (which is what I think is the case), I can work on a pull request to fix this. 
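For context, a minimal sketch of the `__attrs__`-based round-trip being requested here, assuming a requests version where `Session.__attrs__` is present; the class body shown in the comment is a simplified illustration, not the library's verbatim source:

```python
import pickle
import requests

# Simplified illustration of the mechanism the issue describes:
#
#   class Session:
#       __attrs__ = ["headers", "auth", "proxies", ...]  # attribute whitelist
#
#       def __getstate__(self):
#           # Pickle only the whitelisted attributes.
#           return {attr: getattr(self, attr, None) for attr in self.__attrs__}
#
#       def __setstate__(self, state):
#           for attr, value in state.items():
#               setattr(self, attr, value)

s = requests.Session()
s.headers["X-Demo"] = "1"
restored = pickle.loads(pickle.dumps(s))  # round-trips once __attrs__ exists
assert restored.headers["X-Demo"] == "1"
```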
Please confirm.\n\nThank you.\n", "pr_html_url": "https://github.com/psf/requests/pull/1223", "file_loc": {"base_commit": "be62645dd56580dd7576032b348cf79d880851d8", "files": [{"path": "requests/sessions.py", "status": "modified", "Loc": {"('Session', None, 166)": {"add": [178]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["requests/sessions.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "Significant-Gravitas", "repo_name": "AutoGPT", "base_commit": "34261a15835390c5c464cef88c4a42b52a88b739", "iss_has_pr": 1, "iss_html_url": "https://github.com/Significant-Gravitas/AutoGPT/issues/987", "iss_label": "", "title": "Message about Pinecone initializing", "body": "### Duplicates\n\n- [X] I have searched the existing issues\n\n### Summary \ud83d\udca1\n\nAdd a message like: \"Connecting Pinecone. This may take some time...\"\n\n### Examples \ud83c\udf08\n\n_No response_\n\n### Motivation \ud83d\udd26\n\nAt this point, if the Pinecone index setup takes a noticeable amount of time, the console just stops. It is necessary to notify the user that the index is being configured now and this may take some time.", "pr_html_url": "https://github.com/Significant-Gravitas/AutoGPT/pull/1194", "file_loc": {"base_commit": "34261a15835390c5c464cef88c4a42b52a88b739", "files": [{"path": "autogpt/memory/pinecone.py", "status": "modified", "Loc": {"('PineconeMemory', '__init__', 10)": {"add": [40]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["autogpt/memory/pinecone.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "6f7ae911f18fda59669309582706f1aa1f36374d", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/19489", "iss_label": "Bug\nRegression\nmodule:feature_extraction", "title": "'feature_name' referenced before assignment", "body": "\r\n\r\n#### Describe the bug\r\n\r\nWhen I run some preprocessing on my data the line triggering the error is:\r\n\r\n```\r\nC:\\local_tools\\Anaconda3\\envs\\mother_env\\lib\\site-packages\\sklearn\\feature_extraction\\_dict_vectorizer.py in _transform(self, X, fitting)\r\n 226 indices=indices, values=values)\r\n 227 \r\n--> 228 if feature_name is not None:\r\n 229 if fitting and feature_name not in vocab:\r\n 230 vocab[feature_name] = len(feature_names)\r\n\r\nUnboundLocalError: local variable 'feature_name' referenced before assignment\r\n```\r\n\r\n#### Steps/Code to Reproduce\r\n\r\n\r\nIt involves a bit too much preprocessing to put here but from inspecting the respective source file (see above, sklearn\\feature_extraction\\_dict_vectorizer.py) I have the strong suspicion that ```feature_name``` can go through all if/elif checks without being assigned anything.\r\n\r\n\r\n#### Versions\r\n\r\n\r\nSystem:\r\n python: 3.8.5 (default, Sep 3 2020, 21:29:08) [MSC v.1916 64 bit (AMD64)]\r\nexecutable: C:\\local_tools\\Anaconda3\\envs\\mother_env\\python.exe\r\n machine: Windows-10-10.0.18362-SP0\r\n\r\nPython dependencies:\r\n pip: 20.3.3\r\n setuptools: 52.0.0.post20210125\r\n sklearn: 0.24.1\r\n numpy: 1.19.2\r\n scipy: 1.6.0\r\n Cython: None\r\n pandas: 1.2.1\r\n matplotlib: 3.3.4\r\n joblib: 
1.0.1\r\nthreadpoolctl: 2.1.0\r\n\r\nBuilt with OpenMP: True\r\n\r\n\r\n", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/19520", "file_loc": {"base_commit": "6f7ae911f18fda59669309582706f1aa1f36374d", "files": [{"path": "doc/whats_new/v1.0.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [346], "mod": [343]}}}, {"path": "sklearn/feature_extraction/_dict_vectorizer.py", "status": "modified", "Loc": {"('DictVectorizer', '_transform', 190)": {"add": [246], "mod": [229, 230, 231, 232, 233, 234, 235]}}}, {"path": "sklearn/feature_extraction/tests/test_dict_vectorizer.py", "status": "modified", "Loc": {"(None, 'test_dictvectorizer_dense_sparse_equivalence', 174)": {"add": [211]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["sklearn/feature_extraction/_dict_vectorizer.py"], "doc": ["doc/whats_new/v1.0.rst"], "test": ["sklearn/feature_extraction/tests/test_dict_vectorizer.py"], "config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "0fb9a50033574e36a8bd635d8e5c0a793428877c", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/8996", "iss_label": "Easy\nSprint", "title": "Deprecate LSHForest", "body": "LSHForest should be deprecated and scheduled for removal in 0.21. It should also warn about having bad performance. cc @ogrisel ", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/9078", "file_loc": {"base_commit": "0fb9a50033574e36a8bd635d8e5c0a793428877c", "files": [{"path": "benchmarks/bench_plot_approximate_neighbors.py", "status": "removed", "Loc": {}}, {"path": "doc/modules/classes.rst", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1062]}}}, {"path": "doc/modules/neighbors.rst", "status": "modified", "Loc": {"(None, None, None)": {"mod": [515, 517, 518, 520, 521, 522, 523, 524, 525, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 538, 539, 541, 542, 543, 544, 545, 546, 547, 548, 550, 551, 552, 554, 555, 556, 557, 559, 560, 561, 562, 564, 565, 566, 568, 569, 570, 571, 572, 573, 574, 575, 577, 578, 579, 580, 582, 583, 584, 585, 587, 588, 589, 590, 592, 593, 594, 596, 598, 599, 601, 602, 604, 606, 607, 609, 610, 611, 612, 613, 614, 615, 616, 618, 619, 620, 621, 623, 624, 626, 627, 628, 629, 630, 632, 633, 634, 635, 636, 638, 639, 641, 642, 643, 644, 645, 646, 647, 648, 650, 651, 652, 653, 655, 656, 657, 658, 659, 660, 661, 662, 664, 665, 666, 667, 668, 669, 670, 671, 672, 673, 674, 675, 676, 677, 679, 681, 682, 683, 684, 685, 687, 688, 689, 690]}}}, {"path": "doc/whats_new.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [467], "mod": [435]}}}, {"path": "examples/neighbors/plot_approximate_nearest_neighbors_hyperparameters.py", "status": "removed", "Loc": {}}, {"path": "examples/neighbors/plot_approximate_nearest_neighbors_scalability.py", "status": "removed", "Loc": {}}, {"path": "sklearn/neighbors/approximate.py", "status": "modified", "Loc": {"('LSHForest', None, 110)": {"add": [219]}}}, {"path": "sklearn/neighbors/tests/test_approximate.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [28]}, "(None, 'test_neighbors_accuracy_with_n_candidates', 29)": {"mod": [41]}, "(None, 'test_neighbors_accuracy_with_n_estimators', 65)": {"mod": [77]}, "(None, 'test_kneighbors', 100)": {"mod": [111]}, "(None, 'test_radius_neighbors', 149)": {"mod": [162]}, 
"(None, 'test_radius_neighbors_boundary_handling', 223)": {"mod": [233, 234]}, "(None, 'test_distances', 283)": {"mod": [291]}, "(None, 'test_fit', 309)": {"mod": [317]}, "(None, 'test_partial_fit', 336)": {"mod": [346]}, "(None, 'test_hash_functions', 371)": {"mod": [383, 384]}, "(None, 'test_candidates', 400)": {"mod": [410, 424]}, "(None, 'test_graphs', 438)": {"mod": [446]}, "(None, 'test_sparse_input', 458)": {"mod": [463, 464]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["examples/neighbors/plot_approximate_nearest_neighbors_scalability.py", "benchmarks/bench_plot_approximate_neighbors.py", "examples/neighbors/plot_approximate_nearest_neighbors_hyperparameters.py", "sklearn/neighbors/approximate.py"], "doc": ["doc/modules/classes.rst", "doc/modules/neighbors.rst", "doc/whats_new.rst"], "test": ["sklearn/neighbors/tests/test_approximate.py"], "config": [], "asset": []}}, {"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "9438672b1cf80602fc93536670d9601d655377f5", "iss_has_pr": 1, "iss_html_url": "https://github.com/deepfakes/faceswap/issues/150", "iss_label": "code to integrate", "title": "Multi-GPU training", "body": "I've read reports of people succesfully training on multiple GPU'S using the following code:\r\n\r\n```\r\nfrom keras.utils import multi_gpu_model \r\n\r\nautoencoder_A = multi_gpu_model( autoencoder_A ,2) \r\nautoencoder_B = multi_gpu_model( autoencoder_B ,2) \r\n```\r\n\r\nhttps://keras.io/utils/#multi_gpu_model\r\n\r\nI could add support for this but I can't test it as I only have a single GPU.\r\nAnyobe here with a multi-GPU setup that would like to have a go at this?", "pr_html_url": "https://github.com/deepfakes/faceswap/pull/241", "file_loc": {"base_commit": "9438672b1cf80602fc93536670d9601d655377f5", "files": [{"path": "scripts/train.py", "status": "modified", "Loc": {"('TrainingProcessor', 'parse_arguments', 25)": {"mod": [75]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["scripts/train.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "localstack", "repo_name": "localstack", "base_commit": "b8290ff8013366de16f7dd2ed14d74b56d1fb03b", "iss_has_pr": 1, "iss_html_url": "https://github.com/localstack/localstack/issues/10860", "iss_label": "", "title": "Internal Refactoring: Towards a Multi-Distribution Setup", "body": "Over the next few weeks and months we\u2019re refactoring the code in this repository to move toward a **multi-distribution setup**.\r\n\r\nFor now this only affects active contributors as well as any developers that depend on code existing in the published container under the path `/opt/code/localstack/localstack`. 
Most users should not be affected by this change.\r\n\r\n## Motivation\r\nThis will enable us to define clearer boundaries and allow for easier re-use of individual components.\r\nSome of the previously internal code has already been moved into external open repositories such as [localstack/rolo](https://github.com/localstack/rolo).\r\nOther parts of the codebase will keep living in this repository but under their own distributions.\r\nHow we will map this to PyPI items is still being discussed and should become clearer over the next weeks during the initial refactorings.\r\n\r\nThe code layout is not part of any official API or semver guarantees; nevertheless, we still want to use this chance to give you a heads-up and some guidance on how to make your existing code compatible with the new structure.\r\n\r\n## Detailed instructions\r\n\r\n### 1. Moving everything into `localstack-core`\r\n\r\nAs a first step, the entirety of the localstack module is moved into a `localstack-core` directory with https://github.com/localstack/localstack/pull/10800, which will make up one of the multiple distributions.\r\n\r\nIn this initial step on our way to a multi-distribution system, only the additional root level of `localstack-core` is introduced and the rest of the directory structure is unchanged.\r\n\r\n- If you have an open PR, you can rebase onto master after https://github.com/localstack/localstack/pull/10800 has been merged.\r\n- After the PR is merged, update your local repository with `git pull`, remove the now empty localstack directory `rm -r localstack`, and run a `make clean install`. You should see a `localstack_core.egg-info` directory in `localstack-core/`\r\n- If you are an active contributor and you're using the PyCharm IDE, you need to adapt your project structure by marking the new `localstack-core` module as a source folder. Otherwise you will encounter errors complaining that the `localstack` module cannot be found.\r\n - ![Untitled (1)](https://github.com/localstack/localstack/assets/620817/e7d9694f-1e2f-49dd-a272-f1e79df7eefb)\r\n\r\n- If you want to call code from the `localstack` module, you now need to perform an installation of the project (e.g. with `pip install -e .`). Previously, since `localstack` was a root-level module, Python automatically included it in its import path. With a source directory layout there is a stricter boundary now, which also helps avoid unintentional imports. See [here](https://packaging.python.org/en/latest/discussions/src-layout-vs-flat-layout/) for more information on the differences between a flat and a src layout.\r\n\r\n- The location of test files is unchanged.\r\n- Locally the code moves from `.../localstack/localstack/...` => `.../localstack/localstack-core/localstack/...`\r\n- In the published container the code moves from `/opt/code/localstack/localstack` => `/opt/code/localstack/localstack-core/localstack`\r\n\r\n### ?. 
Next steps\r\nAfter the initial move is over the line, additional code will be extracted from `localstack-core` into new distributions such as `localstack-cli`.\r\nThis issue will be updated with new information as the project progresses.\r\n", "pr_html_url": "https://github.com/localstack/localstack/pull/10800", "file_loc": {"base_commit": "b8290ff8013366de16f7dd2ed14d74b56d1fb03b", "files": [{"path": ".circleci/config.yml", "status": "modified", "Loc": {"(None, None, None)": {"mod": [137, 405, 468, 469, 559, 560, 561, 562]}}}, {"path": ".dockerignore", "status": "modified", "Loc": {"(None, None, None)": {"add": [4]}}}, {"path": ".github/workflows/asf-updates.yml", "status": "modified", "Loc": {"(None, None, None)": {"mod": [61, 66]}}}, {"path": ".github/workflows/tests-pro-integration.yml", "status": "modified", "Loc": {"(None, None, None)": {"mod": [294, 301, 337]}}}, {"path": ".github/workflows/tests-s3-image.yml", "status": "modified", "Loc": {"(None, None, None)": {"mod": [7, 8, 9, 10, 11, 12, 13, 14, 15, 26, 27, 28, 29, 30, 31, 32, 33, 34]}}}, {"path": "Dockerfile", "status": "modified", "Loc": {"(None, None, None)": {"add": [101], "mod": [179]}}}, {"path": "Dockerfile.s3", "status": "modified", "Loc": {"(None, None, None)": {"add": [6], "mod": [84]}}}, {"path": "Makefile", "status": "modified", "Loc": {"(None, None, None)": {"add": [239], "mod": [76, 83, 244]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": [".circleci/config.yml"], "doc": [".dockerignore"], "test": [], "config": [".github/workflows/tests-pro-integration.yml", "Makefile", "Dockerfile", "Dockerfile.s3", ".github/workflows/tests-s3-image.yml", ".github/workflows/asf-updates.yml"], "asset": []}}, {"organization": "pallets", "repo_name": "flask", "base_commit": "6f2fdc5ac4ad869a21c4c0281d7fa1eb8aa5a689", "iss_has_pr": 1, "iss_html_url": "https://github.com/pallets/flask/issues/3628", "iss_label": "", "title": "Returning Response and headers causes duplicate headers", "body": "\r\n\r\n\r\n\r\n### Expected Behavior\r\n\r\n```\r\nfrom flask import Flask\r\napp = Flask(__name__)\r\n@app.route('/')\r\ndef issue():\r\n return {'test': 'test'}, {'Content-Type': 'test'}\r\n```\r\nUsing `curl -v http://127.0.0.1:5000/` to query the view I expect only one `Content-Type` header > `Content-Type: test`\r\n\r\n### Actual Behavior\r\n\r\nDuplicate headers are returned\r\n\r\n```\r\n< Content-Type: application/json\r\n< Content-Type: test\r\n```\r\n\r\n### Environment\r\n\r\n* Python version: 3.8.2\r\n* Flask version: 1.1.2\r\n* Werkzeug version: 1.0.1\r\n\r\n### Context\r\n\r\nThis issue also affects responses created with make_response when using a dict or jsonify body + the headers argument with a 'Content-Type':\r\n\r\n```\r\nfrom flask import Flask, make_response\r\napp = Flask(__name__)\r\n@app.route('/')\r\ndef issue():\r\n return make_response({'test': 'test'}, {'Content-Type': 'test'})\r\n```\r\n\r\nThis issue is caused by jsonify adding a 'Content-Type' header, then make_response uses `extend` to add the additional headers, thus leading to the duplicate.\r\n\r\nReturning a str/bytes body does not have this problem as no 'Content-Type' is added by flask; if one is missing it is added by werkzeug.\r\n\r\nThe reason I came across this issue is we have older code which does `return json.dumps(data), 200, {'Content-Type': 'application/json+somecustomtype'}` and I 
assumed based on the flask docs that just returning the data and letting flask do the jsonify would be better.\r\n\r\n", "pr_html_url": "https://github.com/pallets/flask/pull/3684", "file_loc": {"base_commit": "6f2fdc5ac4ad869a21c4c0281d7fa1eb8aa5a689", "files": [{"path": "CHANGES.rst", "status": "modified", "Loc": {"(None, None, None)": {"mod": [30, 31, 32]}}}, {"path": "src/flask/app.py", "status": "modified", "Loc": {"('Flask', 'make_response', 1935)": {"mod": [2048]}}}, {"path": "tests/test_basic.py", "status": "modified", "Loc": {"(None, 'from_response_headers', 1118)": {"mod": [1120, 1121]}, "(None, 'test_response_types', 1092)": {"mod": [1158]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["src/flask/app.py"], "doc": ["CHANGES.rst"], "test": ["tests/test_basic.py"], "config": [], "asset": []}}, {"organization": "lllyasviel", "repo_name": "Fooocus", "base_commit": "0a87da7dc1998e0073ba824c7f223cd331858b24", "iss_has_pr": 1, "iss_html_url": "https://github.com/lllyasviel/Fooocus/issues/3502", "iss_label": "bug\ncan't reproduce\nfeedback pending", "title": "[Bug]: Unsupported image type in input when using input image", "body": "### Checklist\n\n- [ ] The issue has not been resolved by following the [troubleshooting guide](https://github.com/lllyasviel/Fooocus/blob/main/troubleshoot.md)\n- [ ] The issue exists on a clean installation of Fooocus\n- [ ] The issue exists in the current version of Fooocus\n- [ ] The issue has not been reported before recently\n- [ ] The issue has been reported before but has not been fixed yet\n\n### What happened?\n\nWhether I'm using image prompt or inpaint/outpaint, I get the error: Unsupported image type in input.\r\nNormal image generation from text works fine, but using any input image throws this error.\r\nOthers have reported the same issue on Reddit, so it's not just me.\n\n### Steps to reproduce the problem\n\n1. Run Fooocus.\r\n2. 
Try to use inpaint on any image.\n\n### What should have happened?\n\nJust do the magic...\n\n### What browsers do you use to access Fooocus?\n\nMicrosoft Edge\n\n### Where are you running Fooocus?\n\nCloud (Google Colab)\n\n### What operating system are you using?\n\nWin 11\n\n### Console logs\n\n```Shell\nTraceback (most recent call last):\r\n File \"/content/Fooocus/modules/gradio_hijack.py\", line 279, in preprocess\r\n im = processing_utils.decode_base64_to_image(x)\r\n File \"/usr/local/lib/python3.10/dist-packages/gradio/processing_utils.py\", line 59, in decode_base64_to_image\r\n img = Image.open(BytesIO(base64.b64decode(image_encoded)))\r\n File \"/usr/local/lib/python3.10/dist-packages/PIL/Image.py\", line 3283, in open\r\n rawmode = mode\r\nPIL.UnidentifiedImageError: cannot identify image file <_io.BytesIO object at 0x782f4810f010>\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.10/dist-packages/gradio/routes.py\", line 488, in run_predict\r\n output = await app.get_blocks().process_api(\r\n File \"/usr/local/lib/python3.10/dist-packages/gradio/blocks.py\", line 1429, in process_api\r\n inputs = self.preprocess_data(fn_index, inputs, state)\r\n File \"/usr/local/lib/python3.10/dist-packages/gradio/blocks.py\", line 1239, in preprocess_data\r\n processed_input.append(block.preprocess(inputs[i]))\r\n File \"/content/Fooocus/modules/gradio_hijack.py\", line 281, in preprocess\r\n raise Error(\"Unsupported image type in input\")\r\ngradio.exceptions.Error: 'Unsupported image type in input'\n```\n\n\n### Additional information\n\nI noticed gradio was down today; I don't know if this has anything to do with the issue", "pr_html_url": "https://github.com/lllyasviel/Fooocus/pull/3506", "file_loc": {"base_commit": "0a87da7dc1998e0073ba824c7f223cd331858b24", "files": [{"path": "launch.py", "status": "modified", "Loc": {"(None, 'download_models', 104)": {"add": [104]}, "(None, None, None)": {"mod": [24]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["launch.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "Z4nzu", "repo_name": "hackingtool", "base_commit": "5c69e5cb13127601aaba6ee04e522ead84b74f6a", "iss_has_pr": 1, "iss_html_url": "https://github.com/Z4nzu/hackingtool/issues/181", "iss_label": "", "title": "help me", "body": "When I run the install.sh, it gives me this error: [\u2718] Installation Failed !!! [\u2718]\r\n[\u2714] Loading ...\r\nHit:1 http://kali.download/kali kali-rolling InRelease\r\nReading package lists... Done\r\nE: Could not open lock file /var/lib/dpkg/lock-frontend - open (13: Permission denied)\r\nE: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), are you root?\r\nReading package lists... Done\r\nBuilding dependency tree... Done\r\nReading state information... 
Done\r\nPackage python-pip is not available, but is referred to by another package.\r\nThis may mean that the package is missing, has been obsoleted, or\r\nis only available from another source\r\nHowever the following packages replace it:\r\n python3-pip\r\n\r\nE: Package 'python-pip' has no installation candidate\r\n[\u2714] Checking directories...\r\n[\u2714] Installing ...\r\n\r\nfatal: could not create work tree dir '/usr/share/doc/hackingtool': Permission denied\r\n\r\n[\u2714] Trying to installing Requirements ...\r\nRequirement already satisfied: lolcat in /usr/local/lib/python3.9/dist-packages (1.4)\r\nReading package lists... Done\r\nBuilding dependency tree... Done\r\nReading state information... Done\r\nfiglet is already the newest version (2.2.5-3+b1).\r\n0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.\r\nRequirement already satisfied: boxes in /usr/local/lib/python3.9/dist-packages (0.0.0)\r\nReading package lists... Done\r\nBuilding dependency tree... Done\r\nReading state information... Done\r\nboxes is already the newest version (2.1.1-2).\r\n0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.\r\nRequirement already satisfied: flask in /usr/local/lib/python3.9/dist-packages (2.0.2)\r\nRequirement already satisfied: click>=7.1.2 in /usr/local/lib/python3.9/dist-packages (from flask) (8.0.3)\r\nRequirement already satisfied: Jinja2>=3.0 in /usr/local/lib/python3.9/dist-packages (from flask) (3.0.3)\r\nRequirement already satisfied: Werkzeug>=2.0 in /usr/local/lib/python3.9/dist-packages (from flask) (2.0.2)\r\nRequirement already satisfied: itsdangerous>=2.0 in /usr/local/lib/python3.9/dist-packages (from flask) (2.0.1)\r\nRequirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.9/dist-packages (from Jinja2>=3.0->flask) (2.0.1)\r\nRequirement already satisfied: requests in /usr/local/lib/python3.9/dist-packages (2.27.1)\r\nRequirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.9/dist-packages (from requests) (3.3)\r\nRequirement already satisfied: charset-normalizer~=2.0.0 in /usr/local/lib/python3.9/dist-packages (from requests) (2.0.10)\r\nRequirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/local/lib/python3.9/dist-packages (from requests) (1.26.8)\r\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.9/dist-packages (from requests) (2021.10.8)\r\n[\u2718] Installation Failed !!! 
[\u2718]", "pr_html_url": "https://github.com/Z4nzu/hackingtool/pull/348", "file_loc": {"base_commit": "5c69e5cb13127601aaba6ee04e522ead84b74f6a", "files": [{"path": "install.sh", "status": "modified", "Loc": {"(None, None, None)": {"mod": [7, 8, 9, 10, 11, 12, 13, 14, 15, 17, 19, 33, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 65, 66, 67, 68, 69, 70, 72, 73, 74, 75, 76, 77, 78, 79, 82, 83, 84, 86, 88, 89, 90, 91, 92, 93, 95, 96, 98, 99, 100, 102]}}}, {"path": "update.sh", "status": "modified", "Loc": {"(None, None, None)": {"add": [0], "mod": [9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29, 31, 33, 35, 37, 39, 41, 43, 45, 47, 49, 51, 53]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["update.sh", "install.sh"]}}, {"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "6b5bdbe98a882a726ec9710e5e94baa94d470ad6", "iss_has_pr": 1, "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/286", "iss_label": "", "title": "\u5f31\u5f31\u7684\u63d0\u95ee\uff0c\u600e\u4e48\u53ef\u4ee5\u89e3\u6790\u524d\u7aef\u9879\u76ee\u5462", "body": "\u5f31\u5f31\u7684\u63d0\u95ee\uff0c\u600e\u4e48\u53ef\u4ee5\u89e3\u6790\u524d\u7aef\u9879\u76ee\u5462", "pr_html_url": "https://github.com/binary-husky/gpt_academic/pull/290", "file_loc": {"base_commit": "6b5bdbe98a882a726ec9710e5e94baa94d470ad6", "files": [{"path": "functional_crazy.py", "status": "modified", "Loc": {"(None, 'get_crazy_functionals', 3)": {"mod": [46]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "3", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["functional_crazy.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "2b79665b90bd54fa59701090d5f608a1fc4dd33a", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/18408", "iss_label": "Bug\nmodule:ensemble", "title": "Data type mismatch problem when calling HistGradientBoostingClassifier.predict()", "body": "\r\n\r\n#### Describe the bug\r\nIt looks like HistGradientBoostingClassifier has problems on handling datasets with different data types. It works fine when X is `np.float`. 
However, when X is of the type `uint8`, HistGradientBoostingClassifier crashes when calling `predict()`.\r\n\r\n#### Steps/Code to Reproduce\r\n\r\n\r\n```\r\nfrom keras.datasets import mnist\r\nfrom sklearn.metrics import accuracy_score\r\nfrom sklearn.experimental import enable_hist_gradient_boosting\r\nfrom sklearn.ensemble import HistGradientBoostingClassifier\r\n\r\n\r\nif __name__ == '__main__':\r\n \r\n (X_train, y_train), (X_test, y_test) = mnist.load_data()\r\n X_train = X_train.reshape(X_train.shape[0], -1)\r\n X_test = X_test.reshape(X_test.shape[0], -1)\r\n \r\n model = HistGradientBoostingClassifier(max_iter=100,\r\n loss='categorical_crossentropy',\r\n validation_fraction=None,\r\n random_state=0)\r\n \r\n model.fit(X_train, y_train)\r\n y_pred = model.predict(X_test)\r\n acc = accuracy_score(y_test, y_pred)\r\n \r\n print('Testing Acc: {:.4f} %'.format(100.*acc))\r\n```\r\n\r\n#### Expected Results\r\nThe HistGradientBoostingClassifier successfully returns prediction results.\r\n\r\n#### Actual Results\r\n```\r\n File \"FILEPATH\", line 21, in \r\n y_pred = model.predict(X_test)\r\n\r\n File \"C:\\Software\\Anaconda\\lib\\site-packages\\sklearn\\ensemble\\_hist_gradient_boosting\\gradient_boosting.py\", line 1114, in predict\r\n encoded_classes = np.argmax(self.predict_proba(X), axis=1)\r\n\r\n File \"C:\\Software\\Anaconda\\lib\\site-packages\\sklearn\\ensemble\\_hist_gradient_boosting\\gradient_boosting.py\", line 1130, in predict_proba\r\n raw_predictions = self._raw_predict(X)\r\n\r\n File \"C:\\Software\\Anaconda\\lib\\site-packages\\sklearn\\ensemble\\_hist_gradient_boosting\\gradient_boosting.py\", line 667, in _raw_predict\r\n raw_predictions[k, :] += predict(X)\r\n\r\n File \"C:\\Software\\Anaconda\\lib\\site-packages\\sklearn\\ensemble\\_hist_gradient_boosting\\predictor.py\", line 47, in predict\r\n _predict_from_numeric_data(self.nodes, X, out)\r\n\r\n File \"sklearn\\ensemble\\_hist_gradient_boosting\\_predictor.pyx\", line 26, in sklearn.ensemble._hist_gradient_boosting._predictor._predict_from_numeric_data\r\n\r\nValueError: Buffer dtype mismatch, expected 'const X_DTYPE_C' but got 'unsigned char'\r\n```\r\n\r\n#### Versions\r\n\r\n\r\ncython == 0.29.21\r\nscikit-learn == 0.23.1\r\n\r\n\r\n", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/18410", "file_loc": {"base_commit": "2b79665b90bd54fa59701090d5f608a1fc4dd33a", "files": [{"path": "doc/whats_new/v0.24.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [213]}}}, {"path": "sklearn/ensemble/_hist_gradient_boosting/gradient_boosting.py", "status": "modified", "Loc": {"('BaseHistGradientBoosting', '_raw_predict', 635)": {"mod": [648, 649, 656]}}}, {"path": "sklearn/ensemble/_hist_gradient_boosting/tests/test_gradient_boosting.py", "status": "modified", "Loc": {"(None, 'test_staged_predict', 760)": {"add": [796]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["sklearn/ensemble/_hist_gradient_boosting/gradient_boosting.py"], "doc": ["doc/whats_new/v0.24.rst"], "test": ["sklearn/ensemble/_hist_gradient_boosting/tests/test_gradient_boosting.py"], "config": [], "asset": []}}, {"organization": "nvbn", "repo_name": "thefuck", "base_commit": "fee874cddc1af36344e1cdaedd6d80eb6aea8341", "iss_has_pr": 1, "iss_html_url": "https://github.com/nvbn/thefuck/issues/449", "iss_label": "", "title": "Fuck alias for fish", "body": 
"`~> fuck` says:\n\n> ```\n> Seems like fuck alias isn't configured!\n> Please put eval thefuck --alias in your ~/.config/fish/config.fish.\n> More details - https://github.com/nvbn/thefuck#manual-installation\n> ```\n\nbut https://github.com/nvbn/thefuck/wiki/Shell-aliases says:\n\n> Add this function to config.fish:\n> \n> ``` fish\n> eval (thefuck --alias | tr '\\n' ';')\n> ```\n\nWhat should I add to my `config.fish`? \n- `eval thefuck --alias`\n\nor\n- `eval (thefuck --alias | tr '\\n' ';')`\n", "pr_html_url": "https://github.com/nvbn/thefuck/pull/450", "file_loc": {"base_commit": "fee874cddc1af36344e1cdaedd6d80eb6aea8341", "files": [{"path": "thefuck/shells.py", "status": "modified", "Loc": {"('Fish', 'how_to_configure', 201)": {"mod": [202]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "3", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["thefuck/shells.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "python", "repo_name": "cpython", "base_commit": "9f1814723f5596115a794a8bec0d053f25dbf32f", "iss_has_pr": 1, "iss_html_url": "https://github.com/python/cpython/issues/96828", "iss_label": "type-feature\ntopic-SSL", "title": "Add an `ssl.OP_ENABLE_KTLS` option for enabling the use of the kernel TLS", "body": "# Feature or enhancement\r\n\r\nA new `ssl.OP_ENABLE_KTLS` option for enabling the use of the kernel TLS.\r\n\r\n# Pitch\r\n\r\nKernel Transport Layer Security (kTLS) can improve performance of programs using TLS by reducing the number of switches between the user space and the kernel space. kTLS allows using the `sendfile` system call for sending data using TLS. Also, it may offload TLS to network interface controllers.\r\n\r\nkTLS is not enabled by default for various reasons which you can find in https://github.com/openssl/openssl/issues/13794. Even if a system supports the feature and OpenSSL was compiled with support for it, Python still has to set an OpenSSL's option `SSL_OP_ENABLE_KTLS` to use it.\r\n\r\nIn theory, it is possible to enable the kernel TLS in any Python compiled against OpenSSL 3 using this following code. If all other requirements are met, Python should start writing to and reading from a secure socket using the kernel TLS.\r\n```python\r\nimport ssl\r\ncontext = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)\r\ncontext.options |= 8 # SSL_OP_ENABLE_KTLS\r\n```\r\n\r\nSince Python's `ssl` module defines a few constants similar to `SSL_OP_ENABLE_KTLS`, it should provide an `ssl.OP_ENABLE_KTLS` option.\r\n\r\n\r\n# Previous discussion\r\n\r\nI created https://discuss.python.org/t/sslsocket-sendfile-and-kernel-tls/18886 previously to discuss benefiting from the OpenSSL's [SSL_sendfile](https://www.openssl.org/docs/manmaster/man3/SSL_sendfile.html) function. 
An option for enabling kTLS is a base for the work.\r\n", "pr_html_url": "https://github.com/python/cpython/pull/96830", "file_loc": {"base_commit": "9f1814723f5596115a794a8bec0d053f25dbf32f", "files": [{"path": "Doc/library/ssl.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [841]}}}, {"path": "Modules/_ssl.c", "status": "modified", "Loc": {"(None, 'sslmodule_init_constants', 5725)": {"add": [5883]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["Modules/_ssl.c"], "doc": ["Doc/library/ssl.rst"], "test": [], "config": [], "asset": []}}, {"organization": "huggingface", "repo_name": "transformers", "base_commit": "fa876aee2adf525b597495c10ad9c96896953dbd", "iss_has_pr": 1, "iss_html_url": "https://github.com/huggingface/transformers/issues/9620", "iss_label": "", "title": "SQuAD 2.0 metric not supported", "body": "Hello.\r\nI'm trying to run the official `run_qa.py` code for SQuAD 2.0.\r\n\r\nYou have an open TODO here that is causing a bug: https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_qa.py#L436\r\n\r\nI would like to know what is the status of this TODO, and if it is going to be updated, or is there a way around it.\r\n\r\nThis is the current code:\r\n\r\n```python\r\n current_dir = os.path.sep.join(os.path.join(__file__).split(os.path.sep)[:-1])\r\n metric = load_metric(os.path.join(current_dir, \"squad_v2_local\") if data_args.version_2_with_negative else \"squad\")\r\n```\r\n\r\nI receive: \r\n```\r\nFileNotFoundError: Couldn't find file locally at .../squad_v2_local/squad_v2_local.py,\r\n```\r\n\r\nI've tried to change it to: \r\n```python\r\nmetric = load_metric(\"squad_v2\" if data_args.version_2_with_negative else \"squad\")\r\n```\r\n\r\nBut this is the stacktrace I receive: \r\n```\r\nTraceback (most recent call last):\r\n File \"/data/users/yonatab/transformers_pip/QA/run_qa_val_more_valueable.py\", line 557, in \r\n main()\r\n File \"/data/users/yonatab/transformers_pip/QA/run_qa_val_more_valueable.py\", line 538, in main\r\n results = trainer.evaluate()\r\n File \"/data/users/yonatab/transformers_pip/QA/trainer_qa.py\", line 63, in evaluate\r\n metrics = self.compute_metrics(eval_preds)\r\n File \"/data/users/yonatab/transformers_pip/QA/run_qa_val_more_valueable.py\", line 499, in compute_metrics\r\n return metric.compute(predictions=p.predictions, references=p.label_ids)\r\n File \"/data/users/yonatab/transformers_pip/trans_pip/lib/python3.6/site-packages/datasets/metric.py\", line 398, in compute\r\n output = self._compute(predictions=predictions, references=references, **kwargs)\r\n File \"/home/ec2-user/.cache/huggingface/modules/datasets_modules/metrics/squad_v2/7529efd518b03f775290694e7b797412cb2253e90b4f843af83cf7434cccb3a8/squad_v2.py\", line 108, in _compute\r\n exact_raw, f1_raw = get_raw_scores(dataset, predictions)\r\n File \"/home/ec2-user/.cache/huggingface/modules/datasets_modules/metrics/squad_v2/7529efd518b03f775290694e7b797412cb2253e90b4f843af83cf7434cccb3a8/evaluate.py\", line 111, in get_raw_scores\r\n gold_answers = [a[\"text\"] for a in qa[\"answers\"] if normalize_answer(a[\"text\"])]\r\n File \"/home/ec2-user/.cache/huggingface/modules/datasets_modules/metrics/squad_v2/7529efd518b03f775290694e7b797412cb2253e90b4f843af83cf7434cccb3a8/evaluate.py\", line 111, in \r\n gold_answers = [a[\"text\"] for a in qa[\"answers\"] if 
normalize_answer(a[\"text\"])]\r\nTypeError: string indices must be integers\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 13/13 [00:05<00:00, 2.51it/s]\r\n```\r\n\r\nHow can I solve it? \r\n\r\nThanks", "pr_html_url": "https://github.com/huggingface/transformers/pull/9677", "file_loc": {"base_commit": "fa876aee2adf525b597495c10ad9c96896953dbd", "files": [{"path": "examples/question-answering/requirements.txt", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1]}}}, {"path": "examples/question-answering/run_qa.py", "status": "modified", "Loc": {"(None, 'main', 159)": {"mod": [436, 437, 438]}}}, {"path": "examples/question-answering/run_qa_beam_search.py", "status": "modified", "Loc": {"(None, 'main', 158)": {"mod": [475, 476, 477]}}}, {"path": "examples/question-answering/squad_v2_local/evaluate.py", "status": "removed", "Loc": {}}, {"path": "examples/question-answering/squad_v2_local/squad_v2_local.py", "status": "removed", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["examples/question-answering/run_qa.py", "examples/question-answering/squad_v2_local/evaluate.py", "examples/question-answering/squad_v2_local/squad_v2_local.py", "examples/question-answering/run_qa_beam_search.py"], "doc": [], "test": [], "config": ["examples/question-answering/requirements.txt"], "asset": []}}, {"organization": "sherlock-project", "repo_name": "sherlock", "base_commit": "21fe11db51edcca881665694c4cc2a3fe6f1af54", "iss_has_pr": 1, "iss_html_url": "https://github.com/sherlock-project/sherlock/issues/113", "iss_label": "help wanted", "title": "Blackplanet false positive", "body": "Blackplanet is giving false positives. 
(request from germany)\r\n@Czechball you added this in #81 ; maybe you are able to fix it?", "pr_html_url": "https://github.com/sherlock-project/sherlock/pull/169", "file_loc": {"base_commit": "21fe11db51edcca881665694c4cc2a3fe6f1af54", "files": [{"path": "data.json", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 439, 440, 441, 442, 443, 444, 445, 446, 447, 448, 449, 450, 451, 452, 453, 454, 455, 456, 457, 458, 459, 460, 461, 462, 463, 464, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504, 505, 506, 507, 508, 509, 510, 511, 512, 513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545, 546, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563, 564, 565, 566, 567, 568, 569, 570, 571, 572, 573, 574, 575, 576, 577, 578, 579, 580, 581, 582, 583, 584, 585, 586, 587, 588, 589, 590, 591, 592, 593, 594, 595, 596, 597, 598, 599, 600, 601, 602, 603, 604, 605, 606, 607, 608, 609, 610, 611, 612, 613, 614, 615, 616, 617, 618, 619, 620, 621, 622, 623, 624, 625, 626, 627, 628, 629, 630, 631, 632, 633, 634, 635, 636, 637, 638, 639, 640, 641, 642, 643, 644, 645, 646, 647, 648, 649, 650, 651, 652, 653, 654, 655, 656, 657, 658, 659, 660, 661, 662, 663, 664, 665, 666, 
667, 668, 669, 670, 671, 672, 673, 674, 675, 676, 677, 678, 679, 680, 681, 682, 683, 684, 685, 686, 687, 688, 689, 690, 691, 692, 693, 694, 695, 696, 697, 698, 699, 700, 701, 702, 703, 704, 705, 706, 707, 708, 709, 710, 711, 712, 713, 714, 715, 716, 717, 718, 719, 720, 721, 722, 723, 724, 725, 726, 727, 728, 729, 730, 731, 732, 733, 734, 735, 736, 737, 738, 739, 740, 741, 742, 743, 744, 745, 746, 747, 748, 749, 750, 751, 752, 753, 754, 755, 756, 757, 758, 759, 760, 761, 762, 763, 764, 765, 766, 767, 768, 769, 770, 771, 772, 773, 774, 775, 776, 777, 778, 779, 780, 781, 782, 783, 784, 785, 786, 787, 788, 789, 790, 791, 792, 793, 794, 795, 796, 797, 798, 799, 800, 801, 802, 803, 804, 805, 806, 807, 808, 809, 810, 811, 812, 813, 814, 815, 816, 817, 818, 819, 820, 821, 822, 823, 824, 825, 826, 827, 828, 829, 830, 831, 832, 833, 834, 835, 836, 837, 838, 839, 840, 841, 842, 843, 844, 845, 846, 847, 848, 849, 850, 851, 852, 853, 854, 855, 856, 857, 858, 859, 860, 861, 862, 863, 864, 865, 866, 867, 868, 869, 870, 871, 872, 873, 874, 875, 876, 877, 878, 879, 880, 881, 882, 883, 884, 885, 886, 887, 888, 889, 890, 891, 892, 893, 894, 895, 896, 897, 898, 899, 900, 901, 902, 903, 904, 905, 906, 907, 908, 909, 910, 911, 912, 913, 914, 915, 916, 917, 918, 919, 920, 921, 922, 923, 924, 925, 926, 927, 928, 929, 930, 931, 932, 933, 934, 935, 936, 937, 938, 939, 940, 941, 942, 943, 944, 945, 946, 947, 948, 949, 950, 951, 952, 953, 954, 955, 956, 957, 958, 959, 960, 961, 962, 963, 964, 965, 966, 967, 968, 969, 970, 971, 972, 973, 974, 975, 976, 977, 978, 979, 980, 981, 982, 983, 984, 985, 986, 987, 988, 989, 990, 991, 992, 993, 994, 995, 996, 997, 998, 999, 1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009, 1010, 1011, 1012, 1013, 1014, 1015, 1016, 1017, 1018, 1019, 1020, 1021, 1022, 1023, 1024, 1025, 1026, 1027, 1028, 1029, 1030, 1031, 1032, 1033, 1034, 1035, 1036, 1037, 1038, 1039, 1040, 1041, 1042, 1043, 1044, 1045, 1046, 1047, 1048, 1049, 1050, 1051, 1052, 1053, 1054, 1055, 1056, 1057, 1058, 1059, 1060, 1061, 1062, 1063, 1064, 1065, 1066, 1067, 1068, 1069, 1070, 1071, 1072, 1073, 1074, 1075, 1076, 1077, 1078, 1079, 1080, 1081, 1082, 1083, 1084, 1085, 1086, 1087, 1088, 1089, 1090, 1091, 1092, 1093, 1094, 1095, 1096, 1097, 1098, 1099, 1100, 1101, 1102, 1103, 1104, 1105, 1106, 1107, 1108, 1109, 1110, 1111, 1112, 1113, 1114, 1115, 1116, 1117, 1118, 1119, 1120, 1121, 1122, 1123, 1124, 1125, 1126, 1127, 1128, 1129, 1130, 1131, 1132, 1133, 1134, 1135, 1136, 1137, 1138, 1139, 1140, 1141, 1142, 1143, 1144]}}}, {"path": "sites.md", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 138]}}}, {"path": "tests/all.py", "status": "modified", "Loc": {"('SherlockSiteCoverageTests', 'test_coverage_true_via_message', 188)": {"add": [204]}}}, {"path": "tests/base.py", "status": "modified", "Loc": {"('SherlockBaseTest', 'detect_type_check', 109)": {"add": [168]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": 
null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["tests/base.py", "tests/all.py", "data.json"], "doc": ["sites.md"], "test": [], "config": [], "asset": []}}, {"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "f201b2f5f32c2d48eab6632bf103e9b3a92fc999", "iss_has_pr": 1, "iss_html_url": "https://github.com/geekan/MetaGPT/issues/1239", "iss_label": "", "title": "RAG faiss AssertionError", "body": "**Bug description**\r\n\r\nexecute this demo\r\n```Python\r\nimport asyncio\r\n\r\nfrom metagpt.rag.engines import SimpleEngine\r\nfrom metagpt.rag.schema import FAISSRetrieverConfig\r\nfrom metagpt.const import EXAMPLE_DATA_PATH\r\n\r\nDOC_PATH = EXAMPLE_DATA_PATH / \"rag/travel.txt\"\r\n\r\nasync def main():\r\n engine = SimpleEngine.from_docs(input_files=[DOC_PATH], retriever_configs=[FAISSRetrieverConfig()])\r\n\r\n answer = await engine.aquery(\"What does Bob like?\")\r\n print(answer)\r\n\r\nif __name__ == \"__main__\":\r\n asyncio.run(main())\r\n\r\n```\r\nget error\r\n```bash\r\nTraceback (most recent call last):\r\n File \"/home/wanfu/projects/llm/multi_agent_rag/src/simple_custom_object.py\", line 26, in \r\n asyncio.run(main())\r\n File \"/home/wanfu/data/miniconda3/envs/metagpt/lib/python3.9/asyncio/runners.py\", line 44, in run\r\n return loop.run_until_complete(main)\r\n File \"/home/wanfu/data/miniconda3/envs/metagpt/lib/python3.9/asyncio/base_events.py\", line 647, in run_until_complete\r\n return future.result()\r\n File \"/home/wanfu/projects/llm/multi_agent_rag/src/simple_custom_object.py\", line 21, in main\r\n engine.add_docs([DOC_PATH])\r\n File \"/mnt/data/work/development/projects/llm/MetaGPT/metagpt/rag/engines/simple.py\", line 195, in add_docs\r\n self._save_nodes(nodes)\r\n File \"/mnt/data/work/development/projects/llm/MetaGPT/metagpt/rag/engines/simple.py\", line 274, in _save_nodes\r\n self.retriever.add_nodes(nodes)\r\n File \"/mnt/data/work/development/projects/llm/MetaGPT/metagpt/rag/retrievers/faiss_retriever.py\", line 12, in add_nodes\r\n self._index.insert_nodes(nodes, **kwargs)\r\n File \"/home/wanfu/data/miniconda3/envs/metagpt/lib/python3.9/site-packages/llama_index/core/indices/vector_store/base.py\", line 320, in insert_nodes\r\n self._insert(nodes, **insert_kwargs)\r\n File \"/home/wanfu/data/miniconda3/envs/metagpt/lib/python3.9/site-packages/llama_index/core/indices/vector_store/base.py\", line 311, in _insert\r\n self._add_nodes_to_index(self._index_struct, nodes, **insert_kwargs)\r\n File \"/home/wanfu/data/miniconda3/envs/metagpt/lib/python3.9/site-packages/llama_index/core/indices/vector_store/base.py\", line 233, in _add_nodes_to_index\r\n new_ids = self._vector_store.add(nodes_batch, **insert_kwargs)\r\n File \"/home/wanfu/data/miniconda3/envs/metagpt/lib/python3.9/site-packages/llama_index/vector_stores/faiss/base.py\", line 121, in add\r\n self._faiss_index.add(text_embedding_np)\r\n File \"/home/wanfu/data/miniconda3/envs/metagpt/lib/python3.9/site-packages/faiss/__init__.py\", line 214, in replacement_add\r\n assert d == self.d\r\nAssertionError\r\n```\r\n\r\n**Bug solved method**\r\n\r\n\r\n\r\n**Environment information**\r\n\r\n\r\n- LLM type and model name: zhipuai\r\n- Embeddings : fastchat, BAAI/bge-large-zh\r\n- System version: Ubuntu 22.04\r\n- Python version: 3.9.19\r\n- MetaGPT version or branch: \r\n\r\n\r\n\r\n- packages version:\r\n- installation method: pip install from source\r\n\r\n**Screenshots or logs**\r\n\r\n", "pr_html_url": 
"https://github.com/geekan/MetaGPT/pull/1241", "file_loc": {"base_commit": "f201b2f5f32c2d48eab6632bf103e9b3a92fc999", "files": [{"path": "config/config2.example.yaml", "status": "modified", "Loc": {"(None, None, None)": {"add": [20]}}}, {"path": "metagpt/configs/embedding_config.py", "status": "modified", "Loc": {"('EmbeddingConfig', None, 16)": {"add": [22, 27, 34, 43]}}}, {"path": "metagpt/rag/schema.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [14]}, "('FAISSRetrieverConfig', 'check_dimensions', 45)": {"mod": [47]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["metagpt/rag/schema.py", "metagpt/configs/embedding_config.py"], "doc": [], "test": [], "config": ["config/config2.example.yaml"], "asset": []}}, {"organization": "keras-team", "repo_name": "keras", "base_commit": "8f5592bcb61ff48c96560c8923e482db1076b54a", "iss_has_pr": 1, "iss_html_url": "https://github.com/keras-team/keras/issues/20324", "iss_label": "type:support\nkeras-team-review-pending", "title": "Reason for the recently added shape restriction in MultiHeadAttention", "body": "Hello,\r\n\r\nWondering why is there a restriction on the input shape of `query` and `value` to have a matching final dimension?\r\n\r\nThis blocks having cross-attention to a source that has a different shape than query, unless adding an extra projection layer. Given that all input tensors (`query`, `key`, `value`) are immediately projected by dense layers inside `MultiHeadAttention`, I don't think any restriction on final dims is necessary.\r\n\r\nFor reference, the [pytorch doc](https://keras.io/api/layers/attention_layers/multi_head_attention/) for `MultiHeadAttention` explicitly uses 3 distinct variables to describe expected dimensions for the three tensors. 
The tensorflow implementation does not enforce such a restriction either.\r\n\r\nThe restriction is enforced here: https://github.com/keras-team/keras/blob/5aa5f88dc200bbf2cd765d5a213c23c58da48e80/keras/src/layers/attention/multi_head_attention.py#L214-L219\r\n\r\nIt was added as part of PR #19973 (in response to issue #19769)\r\n\r\nThanks", "pr_html_url": "https://github.com/keras-team/keras/pull/20340", "file_loc": {"base_commit": "8f5592bcb61ff48c96560c8923e482db1076b54a", "files": [{"path": "keras/src/layers/attention/multi_head_attention.py", "status": "modified", "Loc": {"('MultiHeadAttention', 'build', 199)": {"mod": [214, 215, 216, 217, 218, 219]}, "('MultiHeadAttention', 'compute_output_shape', 598)": {"mod": [607, 608, 609, 610, 611, 612]}}}, {"path": "keras/src/layers/attention/multi_head_attention_test.py", "status": "modified", "Loc": {"('MultiHeadAttentionTest', None, 17)": {"add": [106], "mod": [133]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["keras/src/layers/attention/multi_head_attention_test.py", "keras/src/layers/attention/multi_head_attention.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ansible", "repo_name": "ansible", "base_commit": "2897cf43cea3d61b9673ce14ba796a663d99f19d", "iss_has_pr": 1, "iss_html_url": "https://github.com/ansible/ansible/issues/56571", "iss_label": "python3\nsupport:community\nbug\nhas_pr\naffects_2.7\ncollection\ncollection:community.general\nneeds_collection_redirect\nbot_closed", "title": "\"machinectl: invalid option -- 'c'\" when using become_method: machinectl", "body": "\r\n\r\n\r\n\r\n##### SUMMARY\r\n\r\n`become_method: machinectl` fails with the error \"machinectl: invalid option -- 'c'\".\r\n\r\n##### ISSUE TYPE\r\n- Bug Report\r\n\r\n##### COMPONENT NAME\r\n\r\n`lib/ansible/plugins/become/machinectl.py`\r\n\r\n##### ANSIBLE VERSION\r\n\r\n```paste below\r\nansible 2.7.10\r\n config file = /etc/ansible/ansible.cfg\r\n configured module search path = ['/home/thomas/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']\r\n ansible python module location = /usr/lib/python3.7/site-packages/ansible\r\n executable location = /usr/bin/ansible\r\n python version = 3.7.3 (default, Mar 26 2019, 21:43:19) [GCC 8.2.1 20181127]\r\n```\r\n\r\n##### CONFIGURATION\r\n\r\n```paste below\r\n\r\n```\r\n\r\n##### OS / ENVIRONMENT\r\n\r\nHost: Arch Linux x64\r\nTarget: Ubuntu 16.04.6 Desktop 64-bits\r\n\r\nTarget machinectl version:\r\n```\r\n$ machinectl --version\r\nsystemd 229\r\n+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ -LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN\r\n```\r\n\r\n##### STEPS TO REPRODUCE\r\n\r\n\r\n\r\n```yaml\r\n$ ansible -m ping --user thomas --become --become-user somebody --become-method machinectl target-machine\r\n```\r\n\r\n\r\n\r\n##### EXPECTED RESULTS\r\n\r\nNo error, `machinectl` should just work. 
(I need to use `machinectl` because I want to create/start a systemd user service running as `somebody`, while `somebody` may not be logged in.)\r\n\r\n##### ACTUAL RESULTS\r\n\r\n\r\nWith `-vvvv` (lots of spammy ssh output): [gist](https://gist.github.com/ttencate/36b75976a3564b8cd59ce1562c906c89)\r\n\r\nFrom the output we can see that Ansible is running this command on the target:\r\n\r\n```\r\nmachinectl shell -q somebody@ /bin/sh -c '\"'\"'\"'\"'\"'\"'\"'\"'echo BECOME-SUCCESS-kppmktulvryalmvucprbybyfjgvsiseh; /usr/bin/python /var/tmp/ansible-tmp-1558089659.8620167-87402387160364/AnsiballZ_ping.py'\"'\"'\"'\"'\"'\"'\"'\"' && sleep 0\r\n```\r\n\r\nOn the Ubuntu target (systemd 229), that fails:\r\n\r\n```\r\n$ machinectl shell -q somebody@ /bin/sh -c 'echo foo'\r\nmachinectl: invalid option -- 'c'\r\n```\r\n\r\nOn the Arch Linux host (systemd 242), it succeeds:\r\n\r\n```\r\n$ machinectl shell -q thomas@ /bin/sh -c 'echo foo'\r\n[...]\r\nfoo\r\n```\r\n\r\nThe cause seems to be [systemd issue #2420](https://github.com/systemd/systemd/issues/2420), which presumably was fixed just too late to make it into the Ubuntu 16.04 release. A simple workaround is to add `--` before the actual command, which terminates the option list and works on both old and new [edit 2019-07-05: no it doesn't, see below!]:\r\n\r\n```\r\n$ machinectl shell -q somebody@ -- /bin/sh -c 'echo foo'\r\n[...]\r\nfoo\r\n```", "pr_html_url": "https://github.com/ansible/ansible/pull/56572", "file_loc": {"base_commit": "2897cf43cea3d61b9673ce14ba796a663d99f19d", "files": [{"path": "lib/ansible/plugins/become/machinectl.py", "status": "modified", "Loc": {"('BecomeModule', 'build_become_command', 78)": {"mod": [87]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["lib/ansible/plugins/become/machinectl.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "b5a5268dabb2a4dea1c3c543a1ddff501b87a447", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/16870", "iss_label": "Docs\nGroupby\ngood first issue", "title": "(DOC) A `string` passed to `groupby` is hard to understand based on current doc", "body": "#### Code Sample, a copy-pastable example if possible\r\nFrom [Here](pandas/doc/source/groupby.rst)\r\n```rst\r\nFor DataFrame objects, a string indicating a column to be used to group. Of course \r\ndf.groupby('A') is just syntactic sugar for df.groupby(df['A']), but \r\nit makes life simpler\r\nFor DataFrame objects, a string indicating an index level to be used to group.\r\n\r\n```\r\n#### Problem description\r\n\r\nThese two sentences are in conflict with each other until one reads the note below.\r\n#### Expected Output\r\nReword to make it clear that a string may indicate either a column or an index level\r\n\r\n#### Output of ``pd.show_versions()``\r\n\r\n
    \r\nINSTALLED VERSIONS\r\n------------------\r\ncommit: None\r\npython: 3.5.3.final.0\r\npython-bits: 64\r\nOS: Darwin\r\nOS-release: 16.6.0\r\nmachine: x86_64\r\nprocessor: i386\r\nbyteorder: little\r\nLC_ALL: None\r\nLANG: en_US.UTF-8\r\nLOCALE: en_US.UTF-8\r\n\r\npandas: 0.21.0.dev+193.gb2b5dc32e\r\npytest: 3.1.2\r\npip: 9.0.1\r\nsetuptools: 36.0.1\r\nCython: 0.25.2\r\nnumpy: 1.13.0\r\nscipy: 0.19.0\r\nxarray: None\r\nIPython: 6.0.0\r\nsphinx: 1.6.2\r\npatsy: 0.4.1\r\ndateutil: 2.6.0\r\npytz: 2017.2\r\nblosc: None\r\nbottleneck: 1.2.1\r\ntables: None\r\nnumexpr: 2.6.2\r\nfeather: None\r\nmatplotlib: 2.0.2\r\nopenpyxl: None\r\nxlrd: None\r\nxlwt: None\r\nxlsxwriter: None\r\nlxml: None\r\nbs4: None\r\nhtml5lib: 0.9999999\r\nsqlalchemy: None\r\npymysql: None\r\npsycopg2: None\r\njinja2: 2.9.6\r\ns3fs: None\r\npandas_gbq: None\r\npandas_datareader: None\r\n
    \r\n", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/36238", "file_loc": {"base_commit": "b5a5268dabb2a4dea1c3c543a1ddff501b87a447", "files": [{"path": "doc/source/user_guide/groupby.rst", "status": "modified", "Loc": {"(None, None, None)": {"mod": [90, 91, 92, 93, 94]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": [], "doc": ["doc/source/user_guide/groupby.rst"], "test": [], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "48d0460ab9acbee223bae1be699344f8fd232224", "iss_html_url": "https://github.com/pandas-dev/pandas/issues/12401", "iss_label": "Indexing\nAPI Design\nDeprecate\nNeeds Discussion", "title": "DEPR: filter & select", "body": "do we need label selectors? we should for sure just have a single method for this. maybe call it `query_labels`? to be consistent with `.query` as the workhorse for data selection.\r\n\r\n- [x] ``.select`` (#17633)\r\n- [ ] ``.filter``\r\n\r\nxref #6599 \r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/pandas-dev/pandas/commit/48d0460ab9acbee223bae1be699344f8fd232224", "file_loc": {"base_commit": "48d0460ab9acbee223bae1be699344f8fd232224", "files": [{"path": "doc/source/whatsnew/v0.21.0.txt", "status": "modified", "Loc": {"(None, None, 669)": {"add": [669]}}}, {"path": "pandas/core/common.py", "status": "modified", "Loc": {"(None, '_apply_if_callable', 444)": {"add": [447]}}}, {"path": "pandas/core/generic.py", "status": "modified", "Loc": {"('NDFrame', 'select', 2338)": {"add": [2341, 2351]}, "('NDFrame', 'filter', 3061)": {"mod": [3104, 3123, 3124, 3127, 3128, 3130, 3131, 3132, 3135, 3136]}}}, {"path": "pandas/core/indexing.py", "status": "modified", "Loc": {"('_NDFrameIndexer', '__call__', 98)": {"add": [101]}, "('_NDFrameIndexer', '__getitem__', 110)": {"add": [119], "mod": [121, 123]}, "('_NDFrameIndexer', None, 88)": {"add": [198], "mod": [110, 195]}, "('_NDFrameIndexer', '_convert_tuple', 228)": {"add": [235]}, "('_NDFrameIndexer', '_getitem_iterable', 1110)": {"add": [1155], "mod": [1141]}, "('_NDFrameIndexer', '_convert_to_indexer', 1167)": {"add": [1260], "mod": [1258]}, "('_LocationIndexer', None, 1355)": {"add": [1358], "mod": [1357]}, "('_iLocIndexer', '_getitem_tuple', 1735)": {"add": [1744], "mod": [1751]}, "('_iLocIndexer', '_get_list_axis', 1778)": {"add": [1785], "mod": [1784]}, "('_NDFrameIndexer', '_get_label', 129)": {"mod": [138, 141]}, "('_NDFrameIndexer', '_get_setitem_indexer', 157)": {"mod": [176]}, "('_NDFrameIndexer', '_multi_take_opportunity', 882)": {"mod": [898]}, "('_NDFrameIndexer', '_convert_for_reindex', 916)": {"mod": [928]}, "('_NDFrameIndexer', '_getitem_lowerdim', 963)": {"mod": [1018]}, "('_NDFrameIndexer', '_getitem_nested_tuple', 1024)": {"mod": [1052]}, "('_NDFrameIndexer', '_getitem_axis', 1072)": {"mod": [1087]}, "('_IXIndexer', '__init__', 1324)": {"mod": [1328, 1336, 1337]}, "('_IXIndexer', '_has_valid_type', 1338)": {"mod": [1345, 1348]}, "('_LocIndexer', '_is_scalar_access', 1518)": {"mod": [1531]}, "('_iLocIndexer', '_is_valid_list_like', 1716)": {"mod": [1720, 1732]}, "('_iLocIndexer', '_getitem_axis', 1799)": {"mod": [1821]}}}, {"path": "pandas/tests/frame/test_alter_axes.py", "status": "modified", "Loc": {"('TestDataFrameAlterAxes', 'test_set_index_bug', 143)": {"add": [149], "mod": [146, 147]}}}, {"path": 
"pandas/tests/frame/test_axis_select_reindex.py", "status": "modified", "Loc": {"('TestDataFrameSelectReindex', None, 25)": {"add": [798]}, "('TestDataFrameSelectReindex', 'test_select', 798)": {"add": [806], "mod": [800, 801, 802, 803, 805, 808]}}}, {"path": "pandas/tests/frame/test_mutate_columns.py", "status": "modified", "Loc": {}}, {"path": "pandas/tests/groupby/test_groupby.py", "status": "modified", "Loc": {"('TestGroupBy', '_func', 3105)": {"mod": [3106]}}}, {"path": "pandas/tests/series/test_indexing.py", "status": "modified", "Loc": {"('TestSeriesIndexing', 'test_select', 2227)": {"mod": [2228, 2229, 2230, 2231, 2233, 2234, 2235]}}}, {"path": "pandas/tests/test_multilevel.py", "status": "modified", "Loc": {"('TestMultiLevel', 'test_groupby_level_no_obs', 1236)": {"mod": [1242]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["pandas/core/common.py", "pandas/core/generic.py", "pandas/core/indexing.py"], "doc": ["doc/source/whatsnew/v0.21.0.txt"], "test": ["pandas/tests/series/test_indexing.py", "pandas/tests/test_multilevel.py", "pandas/tests/frame/test_mutate_columns.py", "pandas/tests/groupby/test_groupby.py", "pandas/tests/frame/test_alter_axes.py", "pandas/tests/frame/test_axis_select_reindex.py"], "config": [], "asset": []}}, {"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "9dc151e5b58abb5f8862d2aa84124ed86156e0b8", "iss_has_pr": 1, "iss_html_url": "https://github.com/deepfakes/faceswap/issues/355", "iss_label": "", "title": "when using GUI recent version, A converting error has occurred.", "body": "I am testing the gui version downloaded today. But when converting, the following error has occurred.\r\nCan anyone tell me what I am doing wrong or how to solve it?\r\n\r\n(1) error message \r\n\"Failed to convert image: ...\\faceA_source_gui\\out1.png. 
Reason: argument of type 'NoneType' is not iterable\"\r\n\r\n(1) train image : \r\nhttps://imgur.com/tLB15CB\r\n\r\n(2) convert error image : \r\nhttps://imgur.com/OAzWKdR\r\n\r\n", "pr_html_url": "https://github.com/deepfakes/faceswap/pull/352", "file_loc": {"base_commit": "9dc151e5b58abb5f8862d2aa84124ed86156e0b8", "files": [{"path": "faceswap.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [30]}}}, {"path": "requirements-gpu-python35-cuda8.txt", "status": "modified", "Loc": {"(None, None, None)": {"add": [10]}}}, {"path": "requirements-gpu-python36-cuda9.txt", "status": "modified", "Loc": {"(None, None, None)": {"mod": [10]}}}, {"path": "requirements-python35.txt", "status": "modified", "Loc": {"(None, None, None)": {"add": [10]}}}, {"path": "requirements-python36.txt", "status": "modified", "Loc": {"(None, None, None)": {"mod": [10]}}}, {"path": "scripts/convert.py", "status": "modified", "Loc": {"('ConvertImage', 'get_optional_arguments', 26)": {"mod": [119]}}}, {"path": "scripts/gui.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [2, 5, 377], "mod": [1, 4, 10, 11, 12, 13, 14, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 40, 41, 42, 43, 44, 45, 46, 47]}, "('TKGui', None, 423)": {"add": [435, 470], "mod": [424, 425, 426]}, "('TKGui', 'extract_options', 436)": {"add": [441], "mod": [437, 438, 439, 443]}, "('TKGui', 'process', 480)": {"add": [482], "mod": [481]}, "('FaceswapGui', None, 49)": {"mod": [49, 50, 51, 52, 70, 71, 72, 73, 74, 75, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 159, 160, 161, 162, 163, 165, 166, 167]}, "('FaceswapGui', '__init__', 51)": {"mod": [54, 56, 57, 58, 59, 60, 61, 62, 64, 65, 67, 68]}, "('FaceswapGui', 'build_gui', 70)": {"mod": [77, 78, 79, 80]}, "('FaceswapGui', 'load_config', 97)": {"mod": [98, 107, 108]}, "('FaceswapGui', 'set_command_args', 110)": {"mod": [111]}, "('FaceswapGui', 'save_config', 118)": {"mod": [119, 120, 121, 122, 132]}, "('FaceswapGui', 'reset_config', 134)": {"mod": [135]}, "('FaceswapGui', 'clear_config', 145)": {"mod": [146]}, "('ActionFrame', None, 169)": {"mod": [169, 170, 171, 172, 173, 174, 175, 177, 178, 179, 180, 217, 218, 219, 220]}, "('ActionFrame', 'build_frame', 177)": {"mod": [182, 183, 184, 185, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 204, 205, 206, 207, 209, 210, 211, 212, 213, 214, 215]}, "('ActionFrame', 'add_util_buttons', 217)": {"mod": [222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 233, 234, 235, 236, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249]}, "('CommandTab', None, 251)": {"mod": [252, 253, 254, 272, 273, 274, 275, 277, 278, 279, 280, 292, 293, 294, 295, 296, 297, 299, 300, 301, 303, 304, 305, 306, 307, 308, 309, 326, 327, 328, 331, 332]}, "('CommandTab', 'build_tab', 260)": {"mod": [261, 262, 264, 265, 267, 268]}, "('CommandTab', 'add_right_frame', 277)": {"mod": [282, 283, 285, 287, 288, 290]}, "('CommandTab', 'build_tabs', 304)": {"mod": [312, 314, 315, 317, 318, 320, 321, 322, 323]}, "('CommandTab', 'build_control', 331)": {"mod": [335, 341, 342, 344, 345, 352, 354, 355]}, "('CommandTab', 'add_browser_buttons', 357)": {"mod": [358, 359, 361, 362]}, "('CommandTab', 'ask_folder', 365)": {"mod": [366]}, "('CommandTab', 'ask_load', 372)": {"mod": [373]}, "('FaceswapControl', None, 378)": {"mod": [379, 380, 381, 382, 383, 384, 386, 387, 388, 390, 391]}, "('FaceswapControl', 'execute_script', 390)": {"mod": [393, 394, 396, 397, 398, 403, 405, 406, 407, 408, 410, 
411, 412]}, "('FaceswapControl', 'launch_faceswap', 410)": {"mod": [414, 415, 416, 417, 418, 419, 420, 421]}, "('TKGui', '__init__', 425)": {"mod": [428, 431, 433]}, "('TKGui', 'set_control_title', 449)": {"mod": [450, 452]}, "('TKGui', 'set_control', 456)": {"mod": [457, 459, 467]}, "('TKGui', 'parse_arguments', 470)": {"mod": [476, 477, 478]}}}, {"path": "scripts/train.py", "status": "modified", "Loc": {"('TrainingProcessor', 'get_argument_list', 40)": {"add": [108]}, "(None, None, None)": {"mod": [7]}, "('TrainingProcessor', 'process', 141)": {"mod": [164, 165]}, "('TrainingProcessor', 'show', 226)": {"mod": [228]}}}, {"path": "tools.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [5, 29]}}}, {"path": "tools/sort.py", "status": "modified", "Loc": {"('SortProcessor', None, 35)": {"add": [40], "mod": [721, 722, 723, 724, 803, 804]}, "(None, None, None)": {"mod": [1, 11, 13, 30, 31, 32, 33, 817, 818]}, "(None, 'import_face_recognition', 17)": {"mod": [18]}, "(None, 'import_FaceLandmarksExtractor', 23)": {"mod": [24]}, "('SortProcessor', '__init__', 36)": {"mod": [37]}, "('SortProcessor', 'parse_arguments', 41)": {"mod": [53, 54, 55, 56, 57, 59, 60, 61, 62, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 112, 113, 114, 115, 116, 117, 118, 119, 120, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 133, 134, 135, 136, 137, 138, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161]}, "('SortProcessor', 'add_optional_arguments', 166)": {"mod": [167]}, "('SortProcessor', 'process_arguments', 170)": {"mod": [171, 177, 178, 181, 182, 185, 186, 189, 190, 192, 194, 196, 199, 203, 204]}, "('SortProcessor', 'process', 208)": {"mod": [214, 215, 216, 218, 219, 220, 221, 224, 226, 234]}, "('SortProcessor', 'sort_blur', 237)": {"mod": [238, 240, 241, 242]}, "('SortProcessor', 'sort_face', 248)": {"mod": [251, 253, 255, 258, 260, 261, 272, 273]}, "('SortProcessor', 'sort_face_dissim', 277)": {"mod": [280, 282, 284, 287, 289, 300]}, "('SortProcessor', 'sort_face_cnn', 304)": {"mod": [307, 309, 312, 314, 317, 319, 320, 324, 329]}, "('SortProcessor', 'sort_face_cnn_dissim', 333)": {"mod": [336, 338, 341, 343, 346, 348, 353, 357]}, "('SortProcessor', 'sort_face_yaw', 362)": {"mod": [363, 364, 373, 376, 378, 380]}, "('SortProcessor', 'calc_landmarks_face_pitch', 363)": {"mod": [366]}, "('SortProcessor', 'calc_landmarks_face_yaw', 367)": {"mod": [368, 369, 370]}, "('SortProcessor', 'sort_hist', 385)": {"mod": [386, 388, 390, 393, 395, 396, 397, 401]}, "('SortProcessor', 'sort_hist_dissim', 405)": {"mod": [406, 408, 410, 413, 415, 418, 423]}, "('SortProcessor', 'group_blur', 429)": {"mod": [431, 438, 439]}, "('SortProcessor', 'group_face', 452)": {"mod": [453, 465]}, "('SortProcessor', 'group_face_cnn', 503)": {"mod": [504, 517, 521]}, "('SortProcessor', 'group_hist', 545)": {"mod": [546, 555]}, "('SortProcessor', 'final_process_rename', 578)": {"mod": [579, 581, 584, 585, 587, 593, 595, 598, 600, 601, 605, 608, 610, 611]}, "('SortProcessor', 'final_process_group', 613)": {"mod": [614, 616, 620, 622, 623, 624, 626, 628, 632, 634, 636, 638, 639, 641, 642]}, "('SortProcessor', 'reload_images', 645)": {"mod": [657, 660, 662, 667, 670]}, "('SortProcessor', 'find_images', 703)": {"mod": [709]}, "('SortProcessor', 'renaming', 759)": {"mod": [762, 763]}, "('SortProcessor', 'renaming', 769)": {"mod": [772, 773]}, 
"('SortProcessor', 'get_avg_score_hist', 778)": {"mod": [783]}, "('SortProcessor', 'get_avg_score_faces', 786)": {"mod": [792]}, "('SortProcessor', 'get_avg_score_faces_cnn', 795)": {"mod": [798, 800]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["scripts/gui.py", "scripts/train.py", "faceswap.py", "tools.py", "tools/sort.py", "scripts/convert.py"], "doc": [], "test": [], "config": ["requirements-gpu-python36-cuda9.txt", "requirements-gpu-python35-cuda8.txt", "requirements-python35.txt", "requirements-python36.txt"], "asset": []}}, {"organization": "xtekky", "repo_name": "gpt4free", "base_commit": "b9478049b3e8644be2de93015476b9111126d683", "iss_has_pr": 1, "iss_html_url": "https://github.com/xtekky/gpt4free/issues/660", "iss_label": "bug", "title": "gpt4free useless: IndexError: list index out of range", "body": "**Bug description**\nTelegram bot using gpt4free not working\nmain.py:\n```import telebot\nfrom gpt4free import usesless\n\nbot = telebot.TeleBot('my_token')\n\n@bot.message_handler(commands=['start'])\ndef send_welcome(message):\n bot.reply_to(message, \"ChatGPT unlimited and free but without memory\")\n\n@bot.message_handler()\ndef test(message):\n prompt = \"\"\n req = usesless.Completion.create(prompt=prompt)\n prompt = message.text\n bot.reply_to(message, req[\"text\"])\n\nif __name__ == \"__main__\":\n bot.polling()\n```\nError:\n```\nTraceback (most recent call last): File \"main.py\", line 20, in bot.polling()\n\nFile \"/home/runner/Test/venv/lib/python3.1\n\n0/site-packages/telebot/__init__.py\", line 1 043, in polling self.__threaded_polling (non_stop=non_sto p, interval=interval, timeout=timeout, long_\n\npolling_timeout-long_polling_timeout,\n\nFile \"/home/runner/Test/venv/lib/python3.1\n\n0/site-packages/telebot/__init__.py\", line 1 118, in __threaded_polling\n\nraise e\n\nFile \"/home/runner/Test/venv/lib/python3.1 0/site-packages/telebot/__init__.py\", line 1 074, in threaded_polling\n\nself.worker_pool.raise_exceptions() File \"/home/runner/Test/venv/lib/python3.1 0/site-packages/telebot/util.py\", line 147, in raise_exceptions\n\nraise self.exception_info File \"/home/runner/Test/venv/lib/python3.1 0/site-packages/telebot/util.py\", line 90, i n run\n\ntask(*args, **kwargs) File \"/home/runner/Test/venv/lib/python3.1 0/site-packages/telebot/__init__.py\", line 6 770, in _run_middlewares_and_handler result = handler['function'](message)\n\nFile \"main.py\", line 15, in test\n\nreq = usesless.Completion.create(prompt=prompt) File \"/home/runner/Test/venv/lib/python3.1 0/site-packages/gpt4free/usesless/__init__.py\", line 46, in create\n\nresponse = Completion.__response_to_json (content) File \"/home/runner/Test/venv/lib/python3.10/site-packages/gpt4free/usesless/__init__.py\", line 53, in __response_to_json split_text = text.rsplit(\"\\n\", 1)[1]\n\nIndexError: list index out of range\n```\n\n**Environement**\n- python version 3.10\n- server location Poland\n\n**Additional context**\nIf you need more information to help me, please let me know.", "pr_html_url": "https://github.com/xtekky/gpt4free/pull/664", "file_loc": {"base_commit": "b9478049b3e8644be2de93015476b9111126d683", "files": [{"path": "gpt4free/usesless/__init__.py", "status": "modified", "Loc": {"('Completion', '__response_to_json', 148)": {"mod": [151, 152, 153]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], 
"commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["gpt4free/usesless/__init__.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "keras-team", "repo_name": "keras", "base_commit": "aab55e649c34f8a24f00ee63922d049d3417c979", "iss_has_pr": 1, "iss_html_url": "https://github.com/keras-team/keras/issues/8304", "iss_label": "", "title": "HDF5 Normalizer not working.", "body": "```\r\ndef preprocess_train(array):\r\n \"\"\" Given a batch of numpy arrays, it outputs a batch of numpy of arrays with all preprocessing\r\n\r\n size : (w, h)\r\n \"\"\"\r\n num1 = np.random.randint(0, 128 - 112)\r\n num2 = np.random.randint(0, 171 - 112)\r\n crop = array[ :, num1:num1+112, num2:num2+112, :]\r\n crop = crop/255.0\r\n return crop\r\n```\r\n```\r\nX_train = HDF5Matrix(train_loc, 'images', start=0, normalizer=preprocess_train)\r\ny_train = HDF5Matrix(train_loc, 'labels')\r\n````\r\n```\r\nmodel_final.fit(X_train, y_train, batch_size=16, shuffle='batch', validation_data = [X_test, y_test], epochs=10)\r\n```\r\n```\r\nValueError: Error when checking model input: expected conv1_input to have shape (None, 16, 112, 112, 3) but got array with shape (5797, 16, 128, 171, 3)\r\n```\r\nBasically I have a h5py file with shape (5797, 16, 128, 171, 3) and my preprocess function should output (16, 112, 112, 3). this is not happening.\r\n\r\nHowever when I run only X_train and used X_train.__getitem___(1). It outputs an array with (16, 112, 112, 3) shape. \r\n\r\nNot sure where I am going wrong. Can someone help me ?", "pr_html_url": "https://github.com/keras-team/keras/pull/10749", "file_loc": {"base_commit": "aab55e649c34f8a24f00ee63922d049d3417c979", "files": [{"path": "keras/utils/io_utils.py", "status": "modified", "Loc": {"('HDF5Matrix', '__init__', 44)": {"add": [60]}, "('HDF5Matrix', 'shape', 98)": {"mod": [104]}, "('HDF5Matrix', 'dtype', 107)": {"mod": [113]}}}, {"path": "tests/keras/utils/io_utils_test.py", "status": "modified", "Loc": {"(None, 'test_io_utils', 43)": {"add": [106]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["keras/utils/io_utils.py", "tests/keras/utils/io_utils_test.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "home-assistant", "repo_name": "core", "base_commit": "8c8feb95a9c9048d655bc1eb263f6bc6ee61ee74", "iss_has_pr": 1, "iss_html_url": "https://github.com/home-assistant/core/issues/4", "iss_label": "", "title": "Instructions don't result in homeassistant listening on any port", "body": "Neither of these result in homeassistant listening on port `8123`\n\n``` bash\npython3 -m homeassistant\npython3 -m homeassistant --config=config\n```\n\nIn fact, it isn't seeming to be listening on _any_ port.\n\n``` bash\n(ve)[jeff@omniscience home-assistant] (master)$ ./build_frontend\n(ve)[jeff@omniscience home-assistant] (master)$ git status\nOn branch master\nYour branch is up-to-date with 'origin/master'.\nChanges not staged for commit:\n (use \"git add/rm ...\" to update what will be committed)\n (use \"git checkout -- ...\" to discard changes in working directory)\n\n modified: build_frontend\n deleted: config/home-assistant.conf.example\n modified: homeassistant/components/http/frontend.py\n modified: homeassistant/components/http/www_static/frontend.html\n\nUntracked files:\n (use 
\"git add ...\" to include in what will be committed)\n\n ve/\n\nno changes added to commit (use \"git add\" and/or \"git commit -a\")\n(ve)[jeff@omniscience home-assistant] (master)$ python3 -m homeassistant --config config\nINFO:homeassistant.loader:Loaded component demo from homeassistant.components.demo\nERROR:homeassistant.loader:Error loading homeassistant.components.http\nTraceback (most recent call last):\n File \"/home/jeff/git/home-assistant/homeassistant/loader.py\", line 91, in _get_component\n comp = importlib.import_module(module)\n File \"/usr/lib64/python3.4/importlib/__init__.py\", line 109, in import_module\n return _bootstrap._gcd_import(name[level:], package, level)\n File \"\", line 2254, in _gcd_import\n File \"\", line 2237, in _find_and_load\n File \"\", line 2226, in _find_and_load_unlocked\n File \"\", line 1200, in _load_unlocked\n File \"\", line 1129, in _exec\n File \"\", line 1471, in exec_module\n File \"\", line 321, in _call_with_frames_removed\n File \"/home/jeff/git/home-assistant/homeassistant/components/http/__init__.py\", line 86, in \n import homeassistant.remote as rem\n File \"/home/jeff/git/home-assistant/homeassistant/remote.py\", line 18, in \n import requests\nImportError: No module named 'requests'\nERROR:homeassistant.loader:Unable to load component http\nINFO:homeassistant.loader:Loaded component group from homeassistant.components.group\nINFO:homeassistant.bootstrap:Home Assistant core initialized\nINFO:homeassistant:Bus:Handling >\nINFO:homeassistant.loader:Loaded component sun from homeassistant.components.sun\nERROR:homeassistant.components.sun:Error while importing dependency ephem.\nTraceback (most recent call last):\n File \"/home/jeff/git/home-assistant/homeassistant/components/sun.py\", line 66, in setup\n import ephem\nImportError: No module named 'ephem'\nINFO:homeassistant:Bus:Handling >\nINFO:homeassistant:Bus:Handling >\nINFO:homeassistant:Bus:Handling >\nINFO:homeassistant:Bus:Handling >\nINFO:homeassistant:Bus:Handling >\nINFO:homeassistant:Bus:Handling >\nINFO:homeassistant:Bus:Handling >\nINFO:homeassistant:Bus:Handling >\nERROR:homeassistant:WorkerPool:All 4 threads are busy and 17 jobs pending\nINFO:homeassistant:Bus:Handling >\nINFO:homeassistant:Bus:Handling >\nINFO:homeassistant:Bus:Handling >\nERROR:homeassistant:WorkerPool:All 4 threads are busy and 33 jobs pending\nINFO:homeassistant:Bus:Handling >\nINFO:homeassistant:Bus:Handling >\nINFO:homeassistant:Bus:Handling >\nINFO:homeassistant.bootstrap:component demo initialized\nINFO:homeassistant.bootstrap:component group initialized\nINFO:homeassistant:Bus:Handling \nINFO:homeassistant:Timer:starting\nINFO:homeassistant:Bus:Handling \n^C\n```\n\nSadly, I can't use the docker container due to [docker being broken in Fedora 21](https://github.com/docker/docker/issues/7952) right now. So while it is running, I tried `lsof -i tcp:8123` and `lsof -p $(pidof python3)`\n\nThis is with Python 3.4.1 on Fedora 21 (pre-release) x86_64.\n\nFYI: I work on python automation code and django apps for `$REAL_JOB` and would love to help you improve this software if at all possible. I've got a home Insteon network and have SONOS speakers throughout the house. 
Once I get this all working, one of the first things I'd like to write is the integration between this and the SONOS xml api\n", "pr_html_url": "https://github.com/home-assistant/core/pull/35811", "file_loc": {"base_commit": "8c8feb95a9c9048d655bc1eb263f6bc6ee61ee74", "files": [{"path": "homeassistant/components/google_assistant/helpers.py", "status": "modified", "Loc": {"('GoogleEntity', 'sync_serialize', 393)": {"add": [428]}}}, {"path": "tests/components/google_assistant/test_helpers.py", "status": "modified", "Loc": {"(None, 'test_google_entity_sync_serialize_with_local_sdk', 25)": {"mod": [47, 48, 49, 50, 51, 52, 53, 54, 55]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["homeassistant/components/google_assistant/helpers.py"], "doc": [], "test": ["tests/components/google_assistant/test_helpers.py"], "config": [], "asset": []}}, {"organization": "psf", "repo_name": "requests", "base_commit": "2203c3bccd5e4888a16d73247d540fd6e359d29c", "iss_html_url": "https://github.com/psf/requests/issues/1", "iss_label": "", "title": "Cookie support?", "body": "A feature request (not found in the documentation).\n\nDoes this support cookies?\n\nUsecase: I can integrate this module inside an existing framework. This framework generates the authentication/session cookie for me, so to perform requests using requests there I need to add the same auth cookie already generated.\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/psf/requests/commit/2203c3bccd5e4888a16d73247d540fd6e359d29c", "file_loc": {"base_commit": "2203c3bccd5e4888a16d73247d540fd6e359d29c", "files": [{"path": "requests/core.py", "status": "modified", "Loc": {"('Request', '__init__', 68)": {"add": [76]}, "('Request', None, 61)": {"add": [101]}, "('Request', '_get_opener', 101)": {"mod": [108, 109, 112, 113, 114]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "No matching pr was found, and the pr given in this row does not resolve the issue either; the issue was actually resolved by a commit", "info_type": ""}, "loctype": {"code": ["requests/core.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "psf", "repo_name": "requests", "base_commit": "ac4e05874a1a983ca126185a0e4d4e74915f792e", "iss_has_pr": 1, "iss_html_url": "https://github.com/psf/requests/issues/1859", "iss_label": "", "title": "Brittle test", "body": "The test `test_expires_valid_str` fails on my OS X box, in Python 2.7:\n\n``` python\n============================= test session starts ==============================\nplatform darwin -- Python 2.7.5 -- pytest-2.3.4\nplugins: cov\ncollected 116 items \n\ntest_requests.py .................................................................................................................F..\n\n=================================== FAILURES ===================================\n_______________ TestMorselToCookieExpires.test_expires_valid_str _______________\n\nself = \n\n def test_expires_valid_str(self):\n \"\"\"Test case where we convert expires from string time.\"\"\"\n\n morsel = Morsel()\n morsel['expires'] = 'Thu, 01-Jan-1970 00:00:01 GMT'\n cookie = morsel_to_cookie(morsel)\n> assert cookie.expires == 1\nE AssertionError: assert -3599 
== 1\nE + where -3599 = Cookie(version=0, name=None, value=None, port=None, port_specified=False, domain='', domain_specified=False, domain_in...False, secure=False, expires=-3599, discard=False, comment='', comment_url=False, rest={'HttpOnly': ''}, rfc2109=False).expires\n\ntest_requests.py:1111: AssertionError\n==================== 1 failed, 115 passed in 23.32 seconds =====================\n```\n\nI've not yet got a good theory for this, though I think it's telling that the error is one hour. I don't know _what_ it's telling though, because time is complicated.\n\nAnyway, this test needs to be rewritten to be more accepting of breakage. It's also possible that the intermittent failure of this test represents a problem with the `morsel_to_cookie` function itself, in which case that needs rewriting.\n", "pr_html_url": "https://github.com/psf/requests/pull/1860", "file_loc": {"base_commit": "ac4e05874a1a983ca126185a0e4d4e74915f792e", "files": [{"path": "requests/cookies.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [9]}, "(None, 'morsel_to_cookie', 388)": {"mod": [396, 397]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["requests/cookies.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "nvbn", "repo_name": "thefuck", "base_commit": "58ddd4338adf12a3abc2ffed0e27794a398fa8d2", "iss_has_pr": 1, "iss_html_url": "https://github.com/nvbn/thefuck/issues/994", "iss_label": "help wanted\nhacktoberfest", "title": "UnicodeDecodeError when using thefuck", "body": "I followed the alias guide, but I got an error when running thefuck in PowerShell:\r\n```\r\nTraceback (most recent call last):\r\n File \"d:\\python36\\lib\\runpy.py\", line 193, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"d:\\python36\\lib\\runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"D:\\Python36\\Scripts\\thefuck.exe\\__main__.py\", line 9, in \r\n File \"d:\\python36\\lib\\site-packages\\thefuck\\entrypoints\\main.py\", line 26, in main\r\n fix_command(known_args)\r\n File \"d:\\python36\\lib\\site-packages\\thefuck\\entrypoints\\fix_command.py\", line 36, in fix_command\r\n command = types.Command.from_raw_script(raw_command)\r\n File \"d:\\python36\\lib\\site-packages\\thefuck\\types.py\", line 82, in from_raw_script\r\n output = get_output(script, expanded)\r\n File \"d:\\python36\\lib\\site-packages\\thefuck\\output_readers\\__init__.py\", line 20, in get_output\r\n return rerun.get_output(script, expanded)\r\n File \"d:\\python36\\lib\\site-packages\\thefuck\\output_readers\\rerun.py\", line 62, in get_output\r\n output = result.stdout.read().decode('utf-8')\r\nUnicodeDecodeError: 'utf-8' codec can't decode byte 0xb2 in position 9: invalid start byte\r\n```", "pr_html_url": "https://github.com/nvbn/thefuck/pull/1214", "file_loc": {"base_commit": "58ddd4338adf12a3abc2ffed0e27794a398fa8d2", "files": [{"path": "tests/output_readers/test_rerun.py", "status": "modified", "Loc": {"('TestRerun', None, 9)": {"add": [24]}}}, {"path": "thefuck/output_readers/rerun.py", "status": "modified", "Loc": {"(None, 'get_output', 45)": {"mod": [63]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["thefuck/output_readers/rerun.py"], 
"doc": [], "test": ["tests/output_readers/test_rerun.py"], "config": [], "asset": []}}, {"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "c50287c23b3f35f54aa703823a8c3f9cbfc34377", "iss_has_pr": 1, "iss_html_url": "https://github.com/deepfakes/faceswap/issues/233", "iss_label": "", "title": "Some faces with one eye hair covered can't be recognized", "body": "*First THANKS A LOT for all contributors' hard work!\r\n*Always make a compare test after big change, test with same source 1000 pics (kar801 -> kar1800) , compare with FakeApp1.1 & latest faceswap commit 232d931. \r\n*Test files [Link Removed]\r\n## Expected behavior\r\nNot sure, limitation ? or possible to improve ? \r\n\r\n## Actual behavior\r\nFakeApp1.1 extract rate is 988/1000\r\nfaceswap -D cnn extract rate is 943/1000\r\n\r\n[Image Removed]\r\n\r\nNotice that some faces - specially one eye covered by hair can't be extract. Example: kar1086 -> kar1090, these 5 pics can be extract normally in FakeApp, but failed in faceswap. Compare kar1085 with kar1086, no big gap in these 2 pics, just corner of the eye be covered by hair in kar1086. \u00a0\r\n\r\n## Steps to reproduce\r\npython faceswap.py extract -i D:/project4/data_A1/ -o D:/project4/data_A1/output/ -D cnn\r\n\r\n## Other relevant information\r\n\r\n- **Operating system and version:** Windows, macOS, Linux \r\nWindows10\r\nPython3.6.4\r\nCUDA9.0\r\ndlib 19.9\r\nThe others env same as requirements-gpu-python36-cuda9.txt\r\n\r\n", "pr_html_url": "https://github.com/deepfakes/faceswap/pull/236", "file_loc": {"base_commit": "c50287c23b3f35f54aa703823a8c3f9cbfc34377", "files": [{"path": "lib/FaceLandmarksExtractor/FaceLandmarksExtractor.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [13], "mod": [11, 12]}, "(None, 'extract', 114)": {"add": [162, 170], "mod": [114, 115, 117, 118, 121, 123, 124, 125, 126, 127, 129, 130, 132, 133, 136, 139, 141, 143, 145, 146, 147, 148, 149, 150, 151, 152, 153, 155, 156, 157, 158, 161, 165, 169, 172]}, "(None, 'onExit', 16)": {"mod": [17, 18, 25, 26, 28, 29]}}}, {"path": "lib/ModelAE.py", "status": "modified", "Loc": {"('TrainerAE', 'show_sample', 69)": {"add": [80], "mod": [82]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["lib/FaceLandmarksExtractor/FaceLandmarksExtractor.py", "lib/ModelAE.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "huggingface", "repo_name": "transformers", "base_commit": "02e05fb0a532e572b56ba75dad6ba3db625bbdeb", "iss_has_pr": 1, "iss_html_url": "https://github.com/huggingface/transformers/issues/9438", "iss_label": "Documentation", "title": "Doc styling utils adds parasites new lines", "body": "## Environment info\r\n \r\n- `transformers` version: 4.2.0dev0\r\n- Platform: Windows-10-10.0.18362-SP0\r\n- Python version: 3.7.9\r\n- PyTorch version (GPU?): 1.7.1 (False)\r\n- Tensorflow version (GPU?): 2.3.1 (False)\r\n- Using GPU in script?: Nope\r\n- Using distributed or parallel set-up in script?: Nope\r\n\r\n### Who can help\r\n\r\n@sgugger \r\n\r\n## Information\r\n\r\nRunning the python util to style docs adds parasite new lines in every single docstring. 
See:\r\n\r\n```bash\r\n$ python utils/style_doc.py src/transformers docs/source --max_len 119 --check_only\r\nTraceback (most recent call last):\r\n File \"utils/style_doc.py\", line 491, in \r\n main(*args.files, max_len=args.max_len, check_only=args.check_only)\r\n File \"utils/style_doc.py\", line 479, in main\r\n raise ValueError(f\"{len(changed)} files should be restyled!\")\r\nValueError: 345 files should be restyled!\r\n```\r\n\r\nSee this commit for an example of what it does: https://github.com/huggingface/transformers/pull/9150/commits/b4dedd5ca25f043c66d12c774fa00a34c74dffb2\r\n\r\n## To reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1. Checkout and update master branch\r\n2. run `python utils/style_doc.py src/transformers docs/source --max_len 119 --check-only` from transformers root\r\n\r\nOutput:\r\n```python\r\nTraceback (most recent call last):\r\n File \"utils/style_doc.py\", line 491, in \r\n main(*args.files, max_len=args.max_len, check_only=args.check_only)\r\n File \"utils/style_doc.py\", line 479, in main\r\n raise ValueError(f\"{len(changed)} files should be restyled!\")\r\nValueError: 345 files should be restyled!\r\n```\r\n\r\nIt might have something to do with Windows or a particular setup of my machine because behavior cannot be reproduced by @patrickvonplaten.\r\n\r\n## Expected behavior\r\n\r\nOn master branch, documentation should not need to be restyled\r\n", "pr_html_url": "https://github.com/huggingface/transformers/pull/9488", "file_loc": {"base_commit": "02e05fb0a532e572b56ba75dad6ba3db625bbdeb", "files": [{"path": "docs/source/benchmarks.rst", "status": "modified", "Loc": {}}, {"path": "utils/style_doc.py", "status": "modified", "Loc": {"(None, 'style_rst_file', 378)": {"mod": [384, 386]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["utils/style_doc.py"], "doc": ["docs/source/benchmarks.rst"], "test": [], "config": [], "asset": []}}, {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "09e56ae43eb63641381e0d722a04536c2fe22c0d", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/3616", "iss_label": "", "title": "Document LogFormatter", "body": "Currently, the `LogFormatter` class is only mentioned in the [Release notes](https://docs.scrapy.org/en/latest/news.html) page of the documentation. 
This class should be properly documented, both its API members and a small section introducing it on the documentation page about [Logging](https://docs.scrapy.org/en/latest/topics/logging.html).\r\n\r\nThe responses to [Scrapy - Silently drop an item](https://stackoverflow.com/q/13527921/939364) in StackOverflow would be a good starting point.", "pr_html_url": "https://github.com/scrapy/scrapy/pull/3660", "file_loc": {"base_commit": "09e56ae43eb63641381e0d722a04536c2fe22c0d", "files": [{"path": "docs/topics/logging.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [195]}}}, {"path": "docs/topics/settings.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [868]}}}, {"path": "scrapy/logformatter.py", "status": "modified", "Loc": {"('LogFormatter', None, 13)": {"add": [33, 34, 51, 65], "mod": [16, 17, 18, 21, 22, 23, 25, 26, 27, 29, 30, 31, 32]}, "('LogFormatter', 'crawled', 34)": {"mod": [43]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["scrapy/logformatter.py"], "doc": ["docs/topics/logging.rst", "docs/topics/settings.rst"], "test": [], "config": [], "asset": []}}, {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "47b9de93a9c7a514f4007439335facd8ea82a12d", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/2905", "iss_label": "enhancement\ndocs\nhelp wanted", "title": "An error occurred while connecting: [Failure instance: Traceback: : filedescriptor out of range in select()", "body": "I'm trying crawl ~200k sites, only the home pages. In the beginning the crawl works fine but the logs quickly fill up with the following errors:\r\n\r\n2017-08-29 11:18:55,131 - scrapy.core.scraper - ERROR - Error downloading \r\nTraceback (most recent call last):\r\n File \"venv/lib/python3.6/site-packages/twisted/internet/defer.py\", line 1384, in _inlineCallbacks\r\n result = result.throwExceptionIntoGenerator(g)\r\n File \"venv/lib/python3.6/site-packages/twisted/python/failure.py\", line 393, in throwExceptionIntoGenerator\r\n return g.throw(self.type, self.value, self.tb)\r\n File \"venv/lib/python3.6/site-packages/scrapy/core/downloader/middleware.py\", line 43, in process_request\r\n defer.returnValue((yield download_func(request=request,spider=spider)))\r\ntwisted.internet.error.ConnectError: An error occurred while connecting: [Failure instance: Traceback: : filedescriptor out of range in select()\r\nvenv/lib/python3.6/site-packages/twisted/internet/base.py:1243:run\r\nvenv/lib/python3.6/site-packages/twisted/internet/base.py:1255:mainLoop\r\nvenv/lib/python3.6/site-packages/twisted/internet/selectreactor.py:106:doSelect\r\nvenv/lib/python3.6/site-packages/twisted/internet/selectreactor.py:88:_preenDescriptors\r\n--- ---\r\nvenv/lib/python3.6/site-packages/twisted/internet/selectreactor.py:85:_preenDescriptors\r\n].\r\n\r\nlsof shows that the process indeed has >1024 open network connections, which I believe is the limit for select().\r\n\r\nI set CONCURRENT_REQUESTS = 100 and REACTOR_THREADPOOL_MAXSIZE = 20 based on https://doc.scrapy.org/en/latest/topics/broad-crawls.html.\r\n\r\nNot sure how the crawl ends up with so many open connections. 
Maybe it's leaking filedescriptors somewhere?\r\n\r\nI'm using:\r\nPython 3.6.2\r\nScrapy 1.4.0\r\nTwisted 17.5.0\r\nmacOS Sierra 10.12.6\r\n\r\nHappy to provide more info as needed.", "pr_html_url": "https://github.com/scrapy/scrapy/pull/4294", "file_loc": {"base_commit": "47b9de93a9c7a514f4007439335facd8ea82a12d", "files": [{"path": "docs/faq.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [363]}}}, {"path": "docs/topics/broad-crawls.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [213]}}}, {"path": "docs/topics/settings.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [1465], "mod": [163, 165, 166, 168, 170, 172, 173, 174, 175, 176, 177, 178, 180, 181, 182]}}}, {"path": "pytest.ini", "status": "modified", "Loc": {"(None, None, None)": {"add": [23], "mod": [112, 132, 138]}}}, {"path": "scrapy/crawler.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [25], "mod": [16]}, "('CrawlerRunner', '__init__', 133)": {"mod": [141]}, "('CrawlerRunner', None, 114)": {"mod": [235, 236, 237, 238]}, "('CrawlerProcess', None, 241)": {"mod": [327, 328, 329, 330]}}}, {"path": "scrapy/settings/default_settings.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [293], "mod": [22]}}}, {"path": "scrapy/utils/asyncio.py", "status": "removed", "Loc": {}}, {"path": "scrapy/utils/defer.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [6], "mod": [5, 12]}}}, {"path": "scrapy/utils/log.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [4, 9], "mod": [3, 8, 12, 14]}, "(None, 'log_scrapy_info', 145)": {"mod": [152, 153]}}}, {"path": "scrapy/utils/reactor.py", "status": "modified", "Loc": {"('CallLaterOnce', '__call__', 42)": {"add": [44]}, "(None, None, None)": {"mod": [1]}}}, {"path": "tests/CrawlerProcess/asyncio_enabled_no_reactor.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [13]}}}, {"path": "tests/CrawlerProcess/asyncio_enabled_reactor.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [18]}}}, {"path": "tests/test_commands.py", "status": "modified", "Loc": {"('RunSpiderCommandTest', 'test_asyncio_enabled_true', 298)": {"mod": [299, 300]}, "('RunSpiderCommandTest', 'test_asyncio_enabled_false', 302)": {"mod": [303, 304]}}}, {"path": "tests/test_crawler.py", "status": "modified", "Loc": {"('CrawlerProcessSubprocess', 'test_ipv6_alternative_name_resolver', 315)": {"add": [325]}, "('CrawlerRunnerHasSpider', 'test_crawler_runner_asyncio_enabled_true', 255)": {"mod": [257, 259, 261]}, "('CrawlerRunnerHasSpider', 'test_crawler_process_asyncio_enabled_true', 264)": {"mod": [267, 269, 271, 273]}, "('CrawlerRunnerHasSpider', 'test_crawler_process_asyncio_enabled_false', 276)": {"mod": [277, 280]}, "('CrawlerProcessSubprocess', 'test_simple', 294)": {"mod": [297]}, "('CrawlerProcessSubprocess', 'test_asyncio_enabled_no_reactor', 299)": {"mod": [302]}, "('CrawlerProcessSubprocess', 'test_asyncio_enabled_reactor', 304)": {"mod": [307]}}}, {"path": "tests/test_utils_asyncio.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [5]}, "('AsyncioTest', 'test_install_asyncio_reactor', 15)": {"mod": [17]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["scrapy/utils/reactor.py", "scrapy/crawler.py", "scrapy/utils/log.py", "tests/CrawlerProcess/asyncio_enabled_no_reactor.py", "scrapy/utils/defer.py", 
"scrapy/utils/asyncio.py", "tests/CrawlerProcess/asyncio_enabled_reactor.py", "scrapy/settings/default_settings.py"], "doc": ["docs/topics/broad-crawls.rst", "docs/topics/settings.rst", "docs/faq.rst"], "test": ["tests/test_utils_asyncio.py", "tests/test_crawler.py", "tests/test_commands.py"], "config": ["pytest.ini"], "asset": []}}, {"organization": "pallets", "repo_name": "flask", "base_commit": "fb89745408cc02515815c792355c7e883b2d08a4", "iss_has_pr": 1, "iss_html_url": "https://github.com/pallets/flask/issues/4602", "iss_label": "", "title": "Flask.auto_find_instance_path() can return wrong path for namespace packages installed in development mode", "body": "https://github.com/pallets/flask/blob/bd56d19b167822a9a23e2e9e2a07ccccc36baa8d/src/flask/scaffold.py#L798\r\n\r\nIf there are several packages under the same namespace, all installed in development mode, like:\r\n\r\n```\r\n~/namespace-package1/\r\n namespace/\r\n package1/\r\n __init__.py\r\n app.py\r\n instance/\r\n\r\n~/namespace-package2/\r\n namespace/\r\n package2/\r\n __init__.py\r\n app.py\r\n instance/\r\n```\r\nand the code in `namespace.package2` uses `app.instance_path`, then its expected value is `~/namespace-package2/instance` ([\"Uninstalled package\" decision path](https://flask.palletsprojects.com/en/2.1.x/config/#instance-folders)).\r\n\r\nInstead of that the following happens:\r\n* `find_package()` [cuts import info](https://github.com/pallets/flask/blob/bd56d19b167822a9a23e2e9e2a07ccccc36baa8d/src/flask/scaffold.py#L846) to the very top package name, `namespace`,\r\n* then `_find_package_path()` finds module specification for the whole namespace package, which contains several submodule search locations, like `ModuleSpec(name='namespace', loader=<_frozen_importlib_external._NamespaceLoader object at ...>, submodule_search_locations=_NamespacePath(['~/namespace-package1/namespace', '~/namespace-package2/namespace']))`\r\n* and then the quoted line returns first, i.e. _arbitrary_, package from that namespace, e.g. 
`~/namespace-package1`, which produces wrong instance path.\r\n\r\nSuggestion: pass also `import_name` into `_find_package_path` and use it for resolving ambiguity at this point, like:\r\n\r\n```\r\ndef _find_package_path(root_mod_name, import_name):\r\n...\r\n if spec.origin in {\"namespace\", None}:\r\n package_spec = importlib.util.find_spec(import_name)\r\n package_path = os.path.commonpath(package_spec.submodule_search_locations)\r\n return os.path.dirname(next(\r\n location for location in spec.submodule_search_locations\r\n if package_path.startswith(location)\r\n ))\r\n```", "pr_html_url": "https://github.com/pallets/flask/pull/4610", "file_loc": {"base_commit": "fb89745408cc02515815c792355c7e883b2d08a4", "files": [{"path": "CHANGES.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [10]}}}, {"path": "src/flask/scaffold.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [2]}, "(None, '_find_package_path', 783)": {"add": [784], "mod": [783, 786, 788, 794, 799, 800, 802, 803, 806]}, "(None, 'find_package', 835)": {"mod": [848, 849, 853]}}}, {"path": "tests/test_instance_config.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [61], "mod": [1, 18, 19, 20, 21, 22, 24, 26, 27, 30, 45]}}}, {"path": "tox.ini", "status": "modified", "Loc": {"(None, None, None)": {"add": [11]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["src/flask/scaffold.py"], "doc": ["CHANGES.rst"], "test": ["tests/test_instance_config.py"], "config": ["tox.ini"], "asset": []}}, {"organization": "huggingface", "repo_name": "transformers", "base_commit": "0ee71188ff184ee5f8b70081665858301fe4afb1", "iss_has_pr": 1, "iss_html_url": "https://github.com/huggingface/transformers/issues/20395", "iss_label": "", "title": "some tokenizer(s) don't save the updated attributes", "body": "### System Info\r\n\r\ntransformers version: 4.25.0.dev0\r\nTorch version: 1.13.0+cpu\r\nCuda available: False\r\nCuda version: None\r\nCuDNN version: None\r\nNumber of GPUs available: 0\r\n\r\n### Description\r\n\r\nFor `GPT2Tokenizer(Fast)`, Set `tokenizer.model_max_length` to `128` (originally `1024`), save it then reload, will give `tokenizer.model_max_length` being `1024`.\r\n\r\n### Reproduction\r\n\r\n```python\r\nfrom transformers import GPT2Tokenizer, GPT2TokenizerFast\r\n\r\ntokenizer = GPT2TokenizerFast.from_pretrained(\"gpt2\")\r\nprint(tokenizer.model_max_length)\r\n\r\ntokenizer.model_max_length = 128\r\nprint(tokenizer.model_max_length)\r\n\r\ntokenizer.save_pretrained(\"my-gpt2\")\r\ntokenizer_loaded = GPT2TokenizerFast.from_pretrained(\"my-gpt2\")\r\nprint(tokenizer_loaded.model_max_length)\r\n```\r\n\r\nThe output is\r\n\r\n```bash\r\n1024\r\n128\r\n1024\r\n\r\n```\r\n\r\n\r\n### Expected behavior\r\n\r\n`tokenizer_loaded.model_max_length` should be `128` in the above example. 
In general, the updated attribute(s) should be saved.", "pr_html_url": "https://github.com/huggingface/transformers/pull/20401", "file_loc": {"base_commit": "0ee71188ff184ee5f8b70081665858301fe4afb1", "files": [{"path": "src/transformers/tokenization_utils_base.py", "status": "modified", "Loc": {"('PreTrainedTokenizerBase', 'save_pretrained', 2022)": {"add": [2084]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["src/transformers/tokenization_utils_base.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "c6dd5237428895c0ba6cda40e3b2b95012276a05", "iss_has_pr": 1, "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/928", "iss_label": "bug\ntriage", "title": "KeyError in apply_edits breaking improve mode", "body": "I am running improve mode, creating c# and xaml. GPT Engineer is attempting to make updates to a xaml user control (here renamed to be \"myExistingUserControl.xaml\") and running into an issue where the filepath is invalid.\r\n\r\n```These edits will ensure that the code changes are in the correct format and can be found in the code.Traceback (most recent call last):\r\n\r\n File \"\", line 198, in _run_module_as_main\r\n\r\n File \"\", line 88, in _run_code\r\n\r\n File \"C:\\Users\\asdf\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python311\\Scripts\\gpte.exe\\__main__.py\", line 7, in \r\n sys.exit(app())\r\n ^^^^^\r\n\r\n File \"C:\\Users\\asdf\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python311\\site-packages\\gpt_engineer\\applications\\cli\\main.py\", line 194, in main\r\n files_dict = agent.improve(files_dict, prompt)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n\r\n File \"C:\\Users\\asdf\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python311\\site-packages\\gpt_engineer\\applications\\cli\\cli_agent.py\", line 131, in improve\r\n files_dict = self.improve_fn(\r\n ^^^^^^^^^^^^^^^^\r\n\r\n File \"C:\\Users\\asdf\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python311\\site-packages\\gpt_engineer\\core\\default\\steps.py\", line 182, in improve\r\n overwrite_code_with_edits(chat, files_dict)\r\n\r\n File \"C:\\Users\\asdf\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python311\\site-packages\\gpt_engineer\\core\\chat_to_files.py\", line 97, in overwrite_code_with_edits\r\n apply_edits(edits, files_dict)\r\n\r\n File \"C:\\Users\\asdf\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python311\\site-packages\\gpt_engineer\\core\\chat_to_files.py\", line 185, in apply_edits\r\n occurrences_cnt = files_dict[filename].count(edit.before)\r\n ~~~~~~~~~~^^^^^^^^^^\r\n\r\nKeyError: 'some/dir/myExistingUserControl.xaml'```", "pr_html_url": "https://github.com/AntonOsika/gpt-engineer/pull/930", "file_loc": {"base_commit": "c6dd5237428895c0ba6cda40e3b2b95012276a05", "files": [{"path": "gpt_engineer/preprompts/improve", "status": "modified", "Loc": {"(None, None, None)": {"add": [67], "mod": [11, 32, 41, 52]}}}, {"path": "tests/core/test_chat_to_files.py", "status": 
"modified", "Loc": {"(None, None, None)": {"add": [185]}, "(None, 'test_apply_edit_new_file', 186)": {"mod": [188, 191]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": [], "doc": [], "test": ["tests/core/test_chat_to_files.py"], "config": [], "asset": ["gpt_engineer/preprompts/improve"]}}, {"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "77415a42e5975ea356393c9f1d5cff0ae8acae2c", "iss_has_pr": 1, "iss_html_url": "https://github.com/ultralytics/yolov5/issues/2446", "iss_label": "enhancement", "title": "Images in MPO Format are considered corrupted", "body": "I am using images taken by a DJI drone. These images are deemed corrupted by the dataset loader, and are thus not used.\r\nThis happens because in datasets.py the `im.format` is checked against a list of formats that doesn't contain \"mpo\".\r\nIf I add that entry manually everything works as expected.\r\n\r\nMPO is a container format, that can contain any of the valid formats.\r\n\r\n## \ud83d\udc1b Bug\r\nImages that report \"MPO\" as PIL.Image.format are deemed corrupted.\r\n\r\n## To Reproduce (REQUIRED)\r\nTry to load MPO images.\r\n![DJI_0180](https://user-images.githubusercontent.com/5763229/110967292-819e6680-8356-11eb-97f0-35e6be8cc00b.JPG)\r\n\r\nI'm not sure whether Github tempers with the image. If necessary I can upload somewhere else.\r\n\r\n## Expected behavior\r\nImages should be considered valid.\r\n\r\n", "pr_html_url": "https://github.com/ultralytics/yolov5/pull/2615", "file_loc": {"base_commit": "77415a42e5975ea356393c9f1d5cff0ae8acae2c", "files": [{"path": "utils/datasets.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [29]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["utils/datasets.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "0bbd57b322aaa5aeca4f3af2dd7f802360d29673", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/2190", "iss_label": "Bug", "title": "crash in MeanShift tests after make cython (edited from k_means)", "body": "The crash:\n\n```\n[erg@pliny scikit-learn]$ [master*] nosetests -v\n/home/erg/python/scikit-learn/sklearn/feature_selection/selector_mixin.py:7: DeprecationWarning: sklearn.feature_selection.selector_mixin.SelectorMixin has been renamed sklearn.feature_selection.from_model._LearntSelectorMixin, and this alias will be removed in version 0.16\n DeprecationWarning)\nAffinity Propagation algorithm ... ok\nTests the DBSCAN algorithm with a similarity array. ... ok\nTests the DBSCAN algorithm with a feature vector array. ... ok\nTests the DBSCAN algorithm with a callable metric. ... ok\nsklearn.cluster.tests.test_dbscan.test_pickle ... ok\nCheck that we obtain the correct solution for structured ward tree. ... ok\nCheck that we obtain the correct solution for unstructured ward tree. ... ok\nCheck that the height of ward tree is sorted. ... ok\nCheck that we obtain the correct number of clusters with Ward clustering. ... ok\nCheck that we obtain the correct solution in a simplistic case ... ok\nTest scikit ward with full connectivity (i.e. unstructured) vs scipy ... 
ok\nCheck that connectivity in the ward tree is propagated correctly during ... ok\nCheck non regression of a bug if a non item assignable connectivity is ... ok\nsklearn.cluster.tests.test_k_means.test_square_norms ... ok\nsklearn.cluster.tests.test_k_means.test_kmeans_dtype ... ok\nsklearn.cluster.tests.test_k_means.test_labels_assignment_and_inertia ... ok\nCheck that dense and sparse minibatch update give the same results ... ok\nsklearn.cluster.tests.test_k_means.test_k_means_plus_plus_init ... ok\nsklearn.cluster.tests.test_k_means.test_k_means_check_fitted ... ok\nsklearn.cluster.tests.test_k_means.test_k_means_new_centers ... ok\nsklearn.cluster.tests.test_k_means.test_k_means_plus_plus_init_2_jobs ... ok\nsklearn.cluster.tests.test_k_means.test_k_means_plus_plus_init_sparse ... ok\nsklearn.cluster.tests.test_k_means.test_k_means_random_init ... ok\nsklearn.cluster.tests.test_k_means.test_k_means_random_init_sparse ... ok\nsklearn.cluster.tests.test_k_means.test_k_means_plus_plus_init_not_precomputed ... ok\nsklearn.cluster.tests.test_k_means.test_k_means_random_init_not_precomputed ... ok\nsklearn.cluster.tests.test_k_means.test_k_means_perfect_init ... ok\nsklearn.cluster.tests.test_k_means.test_mb_k_means_plus_plus_init_dense_array ... ok\nsklearn.cluster.tests.test_k_means.test_mb_kmeans_verbose ... ok\nsklearn.cluster.tests.test_k_means.test_mb_k_means_plus_plus_init_sparse_matrix ... ok\nsklearn.cluster.tests.test_k_means.test_minibatch_init_with_large_k ... ok\nsklearn.cluster.tests.test_k_means.test_minibatch_k_means_random_init_dense_array ... ok\nsklearn.cluster.tests.test_k_means.test_minibatch_k_means_random_init_sparse_csr ... ok\nsklearn.cluster.tests.test_k_means.test_minibatch_k_means_perfect_init_dense_array ... ok\nsklearn.cluster.tests.test_k_means.test_minibatch_k_means_perfect_init_sparse_csr ... ok\nsklearn.cluster.tests.test_k_means.test_minibatch_reassign ... ok\nsklearn.cluster.tests.test_k_means.test_sparse_mb_k_means_callable_init ... ok\nsklearn.cluster.tests.test_k_means.test_mini_batch_k_means_random_init_partial_fit ... ok\nsklearn.cluster.tests.test_k_means.test_minibatch_default_init_size ... ok\nsklearn.cluster.tests.test_k_means.test_minibatch_tol ... ok\nsklearn.cluster.tests.test_k_means.test_minibatch_set_init_size ... ok\nsklearn.cluster.tests.test_k_means.test_k_means_invalid_init ... ok\nsklearn.cluster.tests.test_k_means.test_mini_match_k_means_invalid_init ... ok\nCheck if copy_x=False returns nearly equal X after de-centering. ... ok\nCheck k_means with a bad initialization does not yield a singleton ... ok\nsklearn.cluster.tests.test_k_means.test_predict ... ok\nsklearn.cluster.tests.test_k_means.test_score ... ok\nsklearn.cluster.tests.test_k_means.test_predict_minibatch_dense_input ... ok\nsklearn.cluster.tests.test_k_means.test_predict_minibatch_kmeanspp_init_sparse_input ... ok\nsklearn.cluster.tests.test_k_means.test_predict_minibatch_random_init_sparse_input ... ok\nsklearn.cluster.tests.test_k_means.test_input_dtypes ... ok\nsklearn.cluster.tests.test_k_means.test_transform ... ok\nsklearn.cluster.tests.test_k_means.test_fit_transform ... ok\nCheck that increasing the number of init increases the quality ... ok\nsklearn.cluster.tests.test_k_means.test_k_means_function ... ok\nTest MeanShift algorithm ... 
Segmentation fault (core dumped)\n```\n\nSome related warnings?\n\n```\n[erg@pliny ~]$ cython --version\nCython version 0.19.1\n\n[erg@pliny scikit-learn]$ [master*] make cython\nfind sklearn -name \"*.pyx\" | xargs cython\nwarning: sklearn/neighbors/binary_tree.pxi:1199:20: the result of using negative indices inside of code sections marked as 'wraparound=False' is undefined\nwarning: sklearn/neighbors/binary_tree.pxi:1257:48: the result of using negative indices inside of code sections marked as 'wraparound=False' is undefined\nwarning: sklearn/neighbors/binary_tree.pxi:1258:46: the result of using negative indices inside of code sections marked as 'wraparound=False' is undefined\nwarning: sklearn/neighbors/binary_tree.pxi:1260:45: the result of using negative indices inside of code sections marked as 'wraparound=False' is undefined\nwarning: sklearn/neighbors/binary_tree.pxi:1345:20: the result of using negative indices inside of code sections marked as 'wraparound=False' is undefined\nwarning: sklearn/neighbors/binary_tree.pxi:1355:42: the result of using negative indices inside of code sections marked as 'wraparound=False' is undefined\nwarning: sklearn/neighbors/binary_tree.pxi:1357:36: the result of using negative indices inside of code sections marked as 'wraparound=False' is undefined\nwarning: sklearn/neighbors/binary_tree.pxi:1398:59: the result of using negative indices inside of code sections marked as 'wraparound=False' is undefined\nwarning: sklearn/neighbors/binary_tree.pxi:1400:46: the result of using negative indices inside of code sections marked as 'wraparound=False' is undefined\nwarning: sklearn/neighbors/binary_tree.pxi:1401:48: the result of using negative indices inside of code sections marked as 'wraparound=False' is undefined\nwarning: sklearn/neighbors/binary_tree.pxi:1403:45: the result of using negative indices inside of code sections marked as 'wraparound=False' is undefined\nwarning: sklearn/neighbors/binary_tree.pxi:1491:20: the result of using negative indices inside of code sections marked as 'wraparound=False' is undefined\nwarning: sklearn/neighbors/binary_tree.pxi:1544:64: the result of using negative indices inside of code sections marked as 'wraparound=False' is undefined\nwarning: sklearn/neighbors/binary_tree.pxi:1589:20: the result of using negative indices inside of code sections marked as 'wraparound=False' is undefined\nwarning: sklearn/neighbors/binary_tree.pxi:1199:20: the result of using negative indices inside of code sections marked as 'wraparound=False' is undefined\nwarning: sklearn/neighbors/binary_tree.pxi:1257:48: the result of using negative indices inside of code sections marked as 'wraparound=False' is undefined\nwarning: sklearn/neighbors/binary_tree.pxi:1258:46: the result of using negative indices inside of code sections marked as 'wraparound=False' is undefined\nwarning: sklearn/neighbors/binary_tree.pxi:1260:45: the result of using negative indices inside of code sections marked as 'wraparound=False' is undefined\nwarning: sklearn/neighbors/binary_tree.pxi:1345:20: the result of using negative indices inside of code sections marked as 'wraparound=False' is undefined\nwarning: sklearn/neighbors/binary_tree.pxi:1355:42: the result of using negative indices inside of code sections marked as 'wraparound=False' is undefined\nwarning: sklearn/neighbors/binary_tree.pxi:1357:36: the result of using negative indices inside of code sections marked as 'wraparound=False' is undefined\nwarning: sklearn/neighbors/binary_tree.pxi:1398:59: 
the result of using negative indices inside of code sections marked as 'wraparound=False' is undefined\nwarning: sklearn/neighbors/binary_tree.pxi:1400:46: the result of using negative indices inside of code sections marked as 'wraparound=False' is undefined\nwarning: sklearn/neighbors/binary_tree.pxi:1401:48: the result of using negative indices inside of code sections marked as 'wraparound=False' is undefined\nwarning: sklearn/neighbors/binary_tree.pxi:1403:45: the result of using negative indices inside of code sections marked as 'wraparound=False' is undefined\nwarning: sklearn/neighbors/binary_tree.pxi:1491:20: the result of using negative indices inside of code sections marked as 'wraparound=False' is undefined\nwarning: sklearn/neighbors/binary_tree.pxi:1544:64: the result of using negative indices inside of code sections marked as 'wraparound=False' is undefined\nwarning: sklearn/neighbors/binary_tree.pxi:1589:20: the result of using negative indices inside of code sections marked as 'wraparound=False' is undefined\n```\n", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/2230", "file_loc": {"base_commit": "0bbd57b322aaa5aeca4f3af2dd7f802360d29673", "files": [{"path": "sklearn/neighbors/binary_tree.pxi", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1199, 1257, 1258, 1260, 1345, 1355, 1357, 1398, 1400, 1401, 1403, 1491, 1544, 1589]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["sklearn/neighbors/binary_tree.pxi"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "Significant-Gravitas", "repo_name": "AutoGPT", "base_commit": "4c2a566acc37c8d95b07c023f8c52a1a2a5d15bf", "iss_has_pr": 1, "iss_html_url": "https://github.com/Significant-Gravitas/AutoGPT/issues/2186", "iss_label": "bug\nneeds investigation\nAPI access", "title": "Azure support broken?", "body": "### \u26a0\ufe0f Search for existing issues first \u26a0\ufe0f\r\n\r\n- [X] I have searched the existing issues, and there is no existing issue for my problem\r\n\r\n### GPT-3 or GPT-4\r\n\r\n- [ ] I am using Auto-GPT with GPT-3 (GPT-3.5)\r\n\r\n### Steps to reproduce \ud83d\udd79\r\n\r\n```yaml\r\nazure.yaml:\r\nazure_api_type: azure\r\nazure_api_base: https://test.openai.azure.com/\r\nazure_api_version: 2023-03-15-preview\r\nazure_model_map:\r\n fast_llm_model_deployment_id: \"gpt-35-turbo\"\r\n smart_llm_model_deployment_id: \"gpt-4\"\r\n embedding_model_deployment_id: \"emb-ada\" \r\n```\r\n\r\n### Current behavior \ud83d\ude2f\r\n\r\nWhen I run \"python -m autogpt\", it just breaks\r\nWelcome back! 
Would you like me to return to being Entrepreneur-GPT?\r\nContinue with the last settings?\r\nName: Entrepreneur-GPT\r\nRole: an AI designed to autonomously develop and run businesses with the\r\nGoals: ['Increase net worth', 'Grow Twitter Account', 'Develop and manage multiple businesses autonomously']\r\nContinue (y/n): y\r\nUsing memory of type: LocalCache\r\nUsing Browser: chrome\r\n\r\nTraceback (most recent call last):\r\n File \"\", line 198, in _run_module_as_main\r\n File \"\", line 88, in _run_code\r\n File \"/data/Auto-GPT/autogpt/__main__.py\", line 50, in \r\n main()\r\n File \"/data/Auto-GPT/autogpt/__main__.py\", line 46, in main\r\n agent.start_interaction_loop()\r\n File \"/data/Auto-GPT/autogpt/agent/agent.py\", line 75, in start_interaction_loop\r\n assistant_reply = chat_with_ai(\r\n ^^^^^^^^^^^^^\r\n File \"/data/Auto-GPT/autogpt/chat.py\", line 159, in chat_with_ai\r\n assistant_reply = create_chat_completion(\r\n ^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/data/Auto-GPT/autogpt/llm_utils.py\", line 84, in create_chat_completion\r\n deployment_id=CFG.get_azure_deployment_id_for_model(model),\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/data/Auto-GPT/autogpt/config/config.py\", line 120, in get_azure_deployment_id_for_model\r\n return self.azure_model_to_deployment_id_map[\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nTypeError: list indices must be integers or slices, not str\r\n\r\n\r\n### Expected behavior \ud83e\udd14\r\n\r\nIt should work well.\r\n\r\n### Your prompt \ud83d\udcdd\r\n\r\n```yaml\r\n# Paste your prompt here\r\n```\r\n", "pr_html_url": "https://github.com/Significant-Gravitas/AutoGPT/pull/2351", "file_loc": {"base_commit": "4c2a566acc37c8d95b07c023f8c52a1a2a5d15bf", "files": [{"path": "autogpt/config/config.py", "status": "modified", "Loc": {"('Config', 'load_azure_config', 136)": {"mod": [157]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["autogpt/config/config.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "nvbn", "repo_name": "thefuck", "base_commit": "f966ecd4f5b8221ee15e843f5ec287e1f7cca940", "iss_has_pr": 1, "iss_html_url": "https://github.com/nvbn/thefuck/issues/740", "iss_label": "", "title": "wrong suggestion with git push --set-upstream ", "body": "Thefuck is incorrectly adding the remote name at the end of the command suggestion:\r\n\r\n```\r\n$ git push myfork\r\nfatal: The current branch test-branch has no upstream branch.\r\nTo push the current branch and set the remote as upstream, use\r\n\r\n    git push --set-upstream myfork test-branch\r\n\r\n$ fuck\r\ngit push --set-upstream myfork test-branch myfork [enter/\u2191/\u2193/ctrl+c]\r\nerror: src refspec myfork does not match any.\r\nerror: failed to push some refs to 'git@github.com:waldyrious/project-foo.git'\r\n```", "pr_html_url": "https://github.com/nvbn/thefuck/pull/745", "file_loc": {"base_commit": "f966ecd4f5b8221ee15e843f5ec287e1f7cca940", "files": [{"path": "tests/rules/test_git_push.py", "status": "modified", "Loc": {"(None, 'test_get_new_command', 23)": {"add": [25]}}}, {"path": "thefuck/rules/git_push.py", "status": "modified", "Loc": {"(None, 'get_new_command', 22)": {"add": [34]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, 
"loctype": {"code": ["thefuck/rules/git_push.py"], "doc": [], "test": ["tests/rules/test_git_push.py"], "config": [], "asset": []}}, {"organization": "huggingface", "repo_name": "transformers", "base_commit": "e4b234834a79541f31be227aadce13f5aafda85a", "iss_has_pr": 1, "iss_html_url": "https://github.com/huggingface/transformers/issues/16497", "iss_label": "WIP", "title": "[TODO] Investigate equivalence tests", "body": "**(add a lot of assignees just to make you informed and kept updated in the future. Don't hesitate to remove yourself if you think it's irrelevant)**\r\n\r\nCurrently the PT/TF/Flax equivalence tests use `1e-5` as the tolerance for the absolute differences of outputs.\r\n\r\nWe see that these tests failed with a non-negligible (although not carefully defined) frequency.\r\n\r\nCreate this page to track a list of models to investigate.\r\n\r\n- **FlaxWav2Vec2ModelTest** (2.2888184e-05 > 1e-5)\r\n - https://app.circleci.com/pipelines/github/huggingface/transformers/37363/workflows/a4b06424-0ba8-4fbc-9054-6ff52fbf8145/jobs/411654 \r\n\r\n- **TFGPT2EncoderDecoderModelTest** (0.001009281724691391 > 1e-3)\r\n - https://app.circleci.com/pipelines/github/huggingface/transformers/37358/workflows/43c12161-33d8-4df5-ba3c-3e62a4507ee7/jobs/411579\r\n - This also happens to **TFBERTEncoderDecoderModelTest**\r\n - This is caused by some sequence in a batch which gets all 0s as attention mask (generated by ids_tensor) - may happens on both encoder and decoder (especially after combining with the causal mask).\r\n - For **TFBERTEncoderDecoderModelTest**, the difference is smaller than *TFGPT2EncoderDecoderModelTest* (by a magnitude of 5x~10x) -> this is due to the last hidden states in GPT2 is after layer norm (not the case for BERT).\r\n - If we look the cross attention diff between PT/TF, it is clear that we have the same issue (both in the magnitude of `1e-3`)\r\n - The encoder attention diff between PT/TF is in the magnitude of `5e-8`: ~~**not very sure why this doesn't get much larger**~~.\r\n - This is because PT/TF (at least in BERT) has different `encoder_extended_attention_mask`: `1e-4` vs `1e-9`.\r\n\r\n- **TFViTMAEModelTest** (1.013279e-05 > 1e-5)\r\n - https://app.circleci.com/pipelines/github/huggingface/transformers/37319/workflows/5adfba7a-d12b-4e1e-9a7a-e33c7d5fd6ee/jobs/411002", "pr_html_url": "https://github.com/huggingface/transformers/pull/16517", "file_loc": {"base_commit": "e4b234834a79541f31be227aadce13f5aafda85a", "files": [{"path": "templates/adding_a_new_model/cookiecutter-template-{{cookiecutter.modelname}}/test_modeling_tf_{{cookiecutter.lowercase_modelname}}.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [24]}, "(None, 'prepare_config_and_inputs', 90)": {"mod": [95]}}}, {"path": "tests/albert/test_modeling_tf_albert.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [24]}, "('TFAlbertModelTester', 'prepare_config_and_inputs', 94)": {"mod": [99]}}}, {"path": "tests/bert/test_modeling_tf_bert.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [24]}, "('TFBertModelTester', 'prepare_config_and_inputs', 94)": {"mod": [99]}}}, {"path": "tests/clip/test_modeling_tf_clip.py", "status": "modified", "Loc": {"('TFCLIPTextModelTester', 'prepare_config_and_inputs', 298)": {"add": [303]}}}, {"path": "tests/convbert/test_modeling_tf_convbert.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [23]}, "('TFConvBertModelTester', 'prepare_config_and_inputs', 92)": {"mod": [97]}}}, {"path": 
"tests/ctrl/test_modeling_tf_ctrl.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [23]}, "('TFCTRLModelTester', 'prepare_config_and_inputs', 67)": {"mod": [72]}}}, {"path": "tests/deberta/test_modeling_tf_deberta.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [23]}, "('TFDebertaModelTester', 'prepare_config_and_inputs', 90)": {"mod": [95]}}}, {"path": "tests/deberta_v2/test_modeling_tf_deberta_v2.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [23]}, "('TFDebertaV2ModelTester', 'prepare_config_and_inputs', 93)": {"mod": [98]}}}, {"path": "tests/distilbert/test_modeling_tf_distilbert.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [23]}, "('TFDistilBertModelTester', 'prepare_config_and_inputs', 68)": {"mod": [73]}}}, {"path": "tests/dpr/test_modeling_tf_dpr.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [22]}, "('TFDPRModelTester', 'prepare_config_and_inputs', 92)": {"mod": [97, 98, 99]}}}, {"path": "tests/electra/test_modeling_tf_electra.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [23]}, "('TFElectraModelTester', 'prepare_config_and_inputs', 69)": {"mod": [74]}}}, {"path": "tests/flaubert/test_modeling_tf_flaubert.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [22]}, "('TFFlaubertModelTester', 'prepare_config_and_inputs', 76)": {"mod": [78]}}}, {"path": "tests/funnel/test_modeling_tf_funnel.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [23]}, "('TFFunnelModelTester', 'prepare_config_and_inputs', 109)": {"mod": [114]}}}, {"path": "tests/gpt2/test_modeling_tf_gpt2.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [22]}, "('TFGPT2ModelTester', 'prepare_config_and_inputs', 72)": {"mod": [77]}}}, {"path": "tests/gptj/test_modeling_tf_gptj.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [23]}, "('TFGPTJModelTester', 'prepare_config_and_inputs', 68)": {"mod": [73]}}}, {"path": "tests/layoutlm/test_modeling_tf_layoutlm.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [24]}, "('TFLayoutLMModelTester', 'prepare_config_and_inputs', 90)": {"mod": [110]}}}, {"path": "tests/longformer/test_modeling_tf_longformer.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [23]}, "('TFLongformerModelTester', 'prepare_config_and_inputs', 77)": {"mod": [82]}}}, {"path": "tests/lxmert/test_modeling_tf_lxmert.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [26]}, "('TFLxmertModelTester', 'prepare_config_and_inputs', 119)": {"mod": [127]}}}, {"path": "tests/mobilebert/test_modeling_tf_mobilebert.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [23]}, "('TFMobileBertModelTester', 'prepare_config_and_inputs', 112)": {"mod": [117]}}}, {"path": "tests/mpnet/test_modeling_tf_mpnet.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [23]}, "('TFMPNetModelTester', 'prepare_config_and_inputs', 88)": {"mod": [93]}}}, {"path": "tests/openai/test_modeling_tf_openai.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [23]}, "('TFOpenAIGPTModelTester', 'prepare_config_and_inputs', 68)": {"mod": [73]}}}, {"path": "tests/rembert/test_modeling_tf_rembert.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [23]}, "('TFRemBertModelTester', 'prepare_config_and_inputs', 93)": {"mod": [98]}}}, {"path": "tests/roberta/test_modeling_tf_roberta.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [23]}, "('TFRobertaModelTester', 'prepare_config_and_inputs', 
70)": {"mod": [75]}}}, {"path": "tests/roformer/test_modeling_tf_roformer.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [23]}, "('TFRoFormerModelTester', 'prepare_config_and_inputs', 93)": {"mod": [98]}}}, {"path": "tests/t5/test_modeling_tf_t5.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [23]}, "('TFT5ModelTester', 'prepare_config_and_inputs', 56)": {"mod": [61]}}}, {"path": "tests/tapas/test_modeling_tf_tapas.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [41]}, "('TFTapasModelTester', 'prepare_config_and_inputs', 156)": {"mod": [161]}}}, {"path": "tests/test_modeling_tf_common.py", "status": "modified", "Loc": {"(None, 'random_attention_mask', 1440)": {"mod": [1443]}}}, {"path": "tests/xlm/test_modeling_tf_xlm.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [23]}, "('TFXLMModelTester', 'prepare_config_and_inputs', 76)": {"mod": [78]}}}, {"path": "tests/xlnet/test_modeling_tf_xlnet.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [25]}, "('TFXLNetModelTester', 'prepare_config_and_inputs', 74)": {"mod": [78]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": [], "doc": [], "test": ["tests/test_modeling_tf_common.py", "templates/adding_a_new_model/cookiecutter-template-{{cookiecutter.modelname}}/test_modeling_tf_{{cookiecutter.lowercase_modelname}}.py", "tests/openai/test_modeling_tf_openai.py", "tests/funnel/test_modeling_tf_funnel.py", "tests/convbert/test_modeling_tf_convbert.py", "tests/bert/test_modeling_tf_bert.py", "tests/roformer/test_modeling_tf_roformer.py", "tests/t5/test_modeling_tf_t5.py", "tests/lxmert/test_modeling_tf_lxmert.py", "tests/mpnet/test_modeling_tf_mpnet.py", "tests/rembert/test_modeling_tf_rembert.py", "tests/layoutlm/test_modeling_tf_layoutlm.py", "tests/dpr/test_modeling_tf_dpr.py", "tests/gptj/test_modeling_tf_gptj.py", "tests/roberta/test_modeling_tf_roberta.py", "tests/flaubert/test_modeling_tf_flaubert.py", "tests/clip/test_modeling_tf_clip.py", "tests/tapas/test_modeling_tf_tapas.py", "tests/deberta/test_modeling_tf_deberta.py", "tests/electra/test_modeling_tf_electra.py", "tests/gpt2/test_modeling_tf_gpt2.py", "tests/xlm/test_modeling_tf_xlm.py", "tests/longformer/test_modeling_tf_longformer.py", "tests/deberta_v2/test_modeling_tf_deberta_v2.py", "tests/distilbert/test_modeling_tf_distilbert.py", "tests/albert/test_modeling_tf_albert.py", "tests/xlnet/test_modeling_tf_xlnet.py", "tests/mobilebert/test_modeling_tf_mobilebert.py", "tests/ctrl/test_modeling_tf_ctrl.py"], "config": [], "asset": []}}, {"organization": "pallets", "repo_name": "flask", "base_commit": "01081dbe6cdfa3fc43d8e1fff708d4ed95e1be7e", "iss_html_url": "https://github.com/pallets/flask/issues/1971", "iss_label": "", "title": "Implement RFC 7233", "body": "It would be great to support [RFC 7233 : Hypertext Transfer Protocol (HTTP/1.1): Range Requests](https://tools.ietf.org/html/rfc7233) for next major version, at least for non multipart/byteranges media type.\n\nI'm willing to implement this, so please share your thoughts about this.\n\nWhat must be done:\n- Modify `send_file` method to support Range Requests\n - Use existing `conditionnal` parameter to enable Range Requests support ?\n", "code": null, "pr_html_url": "https://github.com/pallets/flask/pull/2031", "commit_html_url": null, "file_loc": {"base_commit": 
"01081dbe6cdfa3fc43d8e1fff708d4ed95e1be7e", "files": [{"path": "CHANGES", "status": "modified", "Loc": {"(None, None, 20)": {"add": [20]}}}, {"path": "flask/helpers.py", "status": "modified", "Loc": {"(None, 'send_file', 430)": {"add": [448, 502], "mod": [538, 544, 578]}, "(None, None, None)": {"mod": [28, 29]}}}, {"path": "tests/test_helpers.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [18]}, "('TestSendfile', None, 356)": {"add": [464]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["flask/helpers.py"], "doc": ["CHANGES"], "test": ["tests/test_helpers.py"], "config": [], "asset": []}}, {"organization": "pallets", "repo_name": "flask", "base_commit": "673e5af658cf029e82d87047dcb7ebee3d343d10", "iss_html_url": "https://github.com/pallets/flask/issues/2823", "iss_label": "", "title": "Flask complains a .env file exists when not using python-dotenv, even though that .env is a directory", "body": "I place my virtualenvs in a `.env` directory in my project directory. Flask 1.x sees this directory and thinks it might be a \"dotenv\" file (even though it is a directory).\r\n\r\n### Expected Behavior\r\n\r\n`flask` should ignore a `.env` directory when `python-dotenv` is not installed.\r\n\r\n### Actual Behavior\r\n\r\n`flask` says:\r\n\r\n> * Tip: There are .env files present. Do \"pip install python-dotenv\" to use them.\r\n\r\n### Environment\r\n\r\n* Python version: 3.6.5\r\n* Flask version: 1.0.2\r\n* Werkzeug version: 0.14.1\r\n", "code": null, "pr_html_url": "https://github.com/pallets/flask/pull/2827", "commit_html_url": null, "file_loc": {"base_commit": "673e5af658cf029e82d87047dcb7ebee3d343d10", "files": [{"path": "flask/cli.py", "status": "modified", "Loc": {"(None, 'load_dotenv', 567)": {"mod": [587]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["flask/cli.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "pallets", "repo_name": "flask", "base_commit": "8e589daaf2cec6a10262b8ff88801127f2fa14fd", "iss_html_url": "https://github.com/pallets/flask/issues/4220", "iss_label": "", "title": "`template_filter` decorator typing does not support custom filters with multiple arguments", "body": "`template_filter` decorator typing does not support custom filters that take in multiple arguments. 
Consider:\r\n\r\n```py\r\nfrom flask import Flask\r\n\r\n\r\napp = Flask(__name__)\r\n\r\n\r\n@app.template_filter('foo_bar')\r\ndef foo_bar_filter(foo, bar):\r\n    return f'{foo} {bar}'\r\n```\r\n`mypy` will return the following error message:\r\n```\r\nerror: Argument 1 has incompatible type \"Callable[[Any, Any], Any]\"; expected \"Callable[[Any], str]\" [arg-type]\r\n```\r\nAs custom filters with multiple arguments are supported by Jinja (https://jinja.palletsprojects.com/en/3.0.x/api/#custom-filters), I think this typing error is a false positive.\r\n\r\nEnvironment:\r\n\r\n- Python version: 3.6.13\r\n- Flask version: 2.0.1\r\n- Mypy version: 0.812\r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/pallets/flask/commit/8e589daaf2cec6a10262b8ff88801127f2fa14fd", "file_loc": {"base_commit": "8e589daaf2cec6a10262b8ff88801127f2fa14fd", "files": [{"path": "CHANGES.rst", "status": "modified", "Loc": {"(None, None, 10)": {"add": [10]}}}, {"path": "src/flask/typing.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [43, 44, 45]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/flask/typing.py"], "doc": ["CHANGES.rst"], "test": [], "config": [], "asset": []}}, {"organization": "sherlock-project", "repo_name": "sherlock", "base_commit": "a4df5010f49044eb1f1713057e8914e6a5a104b3", "iss_html_url": "https://github.com/sherlock-project/sherlock/issues/1073", "iss_label": "false positive", "title": "producthunt.com false positive", "body": "\r\n\r\n## Checklist\r\n\r\n- [X] I'm reporting a website that is returning **false positive** results\r\n- [X] I've checked for similar site support requests including closed ones\r\n- [X] I've checked for pull requests attempting to fix this false positive\r\n- [X] I'm only reporting **one** site (create a separate issue for each site)\r\n\r\n## Description\r\n\r\n\r\nhttps://www.producthunt.com/@adasaaakzzzzzzzzsdsdsdasdadadasqe22aasd\r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/sherlock-project/sherlock/commit/a4df5010f49044eb1f1713057e8914e6a5a104b3", "file_loc": {"base_commit": "a4df5010f49044eb1f1713057e8914e6a5a104b3", "files": [{"path": "sherlock/resources/data.json", "status": "modified", "Loc": {"(None, None, 1159)": {"mod": [1159]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["sherlock/resources/data.json"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "keras-team", "repo_name": "keras", "base_commit": "d2803c0fb7d0ba9361dcba8eb9bcebbf2f774958", "iss_html_url": "https://github.com/keras-team/keras/issues/11023", "iss_label": "", "title": "Cannot load_model", "body": "Thank you!\r\n\r\n- [ ] Check that you are up-to-date with the master branch of Keras. You can update with:\r\npip install git+git://github.com/keras-team/keras.git --upgrade --no-deps\r\n\r\n- [x] If running on TensorFlow, check that you are up-to-date with the latest version. The installation instructions can be found [here](https://www.tensorflow.org/get_started/os_setup).\r\n\r\n- [ ] If running on Theano, check that you are up-to-date with the master branch of Theano. 
You can update with:\r\npip install git+git://github.com/Theano/Theano.git --upgrade --no-deps\r\n\r\n- [x] Provide a link to a GitHub Gist of a Python script that can reproduce your issue (or just copy the script here if it is short).\r\n\r\nI am using Google Colab to train a CNN and then save the entire model to a `.h5` file. The code is available here: [CNN-Colab](https://gist.github.com/abhisheksoni27/184c49ca703eb124e1b17eb8dd8af518)\r\n\r\nThe model gets saved but when I later try to load it back, I get the following error:\r\n\r\n```\r\nTypeError: float() argument must be a string or a number, not 'dict'\r\n```\r\n\r\nThe entire Output log is here: [CNN - Colab - Error](https://gist.github.com/abhisheksoni27/732bec240629d2dd721e80130cb2956b)\r\n", "code": null, "pr_html_url": "https://github.com/keras-team/keras/pull/10727", "commit_html_url": null, "file_loc": {"base_commit": "d2803c0fb7d0ba9361dcba8eb9bcebbf2f774958", "files": [{"path": "keras/engine/saving.py", "status": "modified", "Loc": {"(None, 'get_json_type', 61)": {"mod": [82, 83]}}}, {"path": "tests/test_model_saving.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [14, 643]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["keras/engine/saving.py"], "doc": [], "test": ["tests/test_model_saving.py"], "config": [], "asset": []}}, {"organization": "nvbn", "repo_name": "thefuck", "base_commit": "b28ece0f34e54d1c980e31223451f3b2f0f20ff9", "iss_html_url": "https://github.com/nvbn/thefuck/issues/1021", "iss_label": "", "title": "Git checkout should provide multiple corrections", "body": "When correcting git checkout, the default is to use the 'closest branch'. We have a lot of branches with similar names, but quite often, what I actually meant to do was supply the '-b' flag.\r\n\r\nCan the git checkout rule be updated to return all of the possible options, rather than trying to guess, based on some arbitrary priority?\r\n", "code": null, "pr_html_url": "https://github.com/nvbn/thefuck/pull/1022", "commit_html_url": null, "file_loc": {"base_commit": "b28ece0f34e54d1c980e31223451f3b2f0f20ff9", "files": [{"path": "tests/rules/test_git_checkout.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [59, 62, 66, 70]}}}, {"path": "thefuck/rules/git_checkout.py", "status": "modified", "Loc": {"(None, 'get_new_command', 31)": {"add": [36], "mod": [38, 39, 40, 41, 42, 43]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["thefuck/rules/git_checkout.py"], "doc": [], "test": ["tests/rules/test_git_checkout.py"], "config": [], "asset": []}}, {"organization": "nvbn", "repo_name": "thefuck", "base_commit": "2d81166213c403dce5c04d1fb73ba5d3e57d6676", "iss_html_url": "https://github.com/nvbn/thefuck/issues/660", "iss_label": "", "title": "Slow execution time", "body": "The command output is very slow on macOS w/ fish shell. 
Reproduction rate is ~80% for me.\r\n\r\nVersion: The Fuck 3.18 using Python 2.7.10\r\nShell: fish, version 2.6.0\r\nOS: macOS 10.12.5\r\nDebug Output:\r\n```\r\n\u276f fuck 333ms\r\nDEBUG: Run with settings: {'alter_history': True,\r\n 'debug': True,\r\n 'env': {'GIT_TRACE': '1', 'LANG': 'C', 'LC_ALL': 'C'},\r\n 'exclude_rules': [],\r\n 'history_limit': None,\r\n 'no_colors': False,\r\n 'priority': {},\r\n 'repeat': False,\r\n 'require_confirmation': True,\r\n 'rules': [],\r\n 'slow_commands': ['lein', 'react-native', 'gradle', './gradlew', 'vagrant'],\r\n 'user_dir': PosixPath('/Users/sbennett/.config/thefuck'),\r\n 'wait_command': 3,\r\n 'wait_slow_command': 15}\r\nDEBUG: Execution timed out!\r\nDEBUG: Call: fish -ic \"fuck\"; with env: {'PYTHONIOENCODING': 'utf-8', 'VERSIONER_PYTHON_PREFER_32_BIT': 'no', 'TERM_PROGRAM_VERSION': '3.0.15', 'LOGNAME': 'sbennett', 'USER': 'sbennett', 'HOME': '/Users/sbennett', 'PATH': '/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin', 'TERM_PROGRAM': 'iTerm.app', 'LANG': 'C', 'THEFUCK_DEBUG': 'true', 'TERM': 'xterm-256color', 'Apple_PubSub_Socket_Render': '/private/tmp/com.apple.launchd.1eq3gwtm7Y/Render', 'COLORFGBG': '15;0', 'VERSIONER_PYTHON_VERSION': '2.7', 'SHLVL': '2', 'XPC_FLAGS': '0x0', 'ITERM_SESSION_ID': 'w1t1p0:E781FA41-C385-4CCE-A9E0-EBF190B3D246', 'TERM_SESSION_ID': 'w1t1p0:E781FA41-C385-4CCE-A9E0-EBF190B3D246', 'SSH_AUTH_SOCK': '/private/tmp/com.apple.launchd.leMomVKppy/Listeners', 'TF_ALIAS': 'fuck', 'XPC_SERVICE_NAME': '0', 'SHELL': '/usr/local/bin/fish', 'ITERM_PROFILE': 'Default', 'LC_ALL': 'C', 'TMPDIR': '/var/folders/0s/c0f2hl495352w24847p7ybwm35h1r_/T/', 'GIT_TRACE': '1', '__CF_USER_TEXT_ENCODING': '0x658070A:0x0:0x0', 'PWD': '/Users/sbennett'}; is slow: took: 0:00:03.018166\r\nDEBUG: Importing rule: ag_literal; took: 0:00:00.000511\r\nDEBUG: Importing rule: apt_get; took: 0:00:00.000571\r\nDEBUG: Importing rule: apt_get_search; took: 0:00:00.000224\r\nDEBUG: Importing rule: apt_invalid_operation; took: 0:00:00.000715\r\nDEBUG: Importing rule: aws_cli; took: 0:00:00.000235\r\nDEBUG: Importing rule: brew_install; took: 0:00:00.000279\r\nDEBUG: Importing rule: brew_link; took: 0:00:00.000217\r\nDEBUG: Importing rule: brew_uninstall; took: 0:00:00.000276\r\nDEBUG: Importing rule: brew_unknown_command; took: 0:00:00.000105\r\nDEBUG: Importing rule: brew_update_formula; took: 0:00:00.000222\r\nDEBUG: Importing rule: brew_upgrade; took: 0:00:00.000061\r\nDEBUG: Importing rule: cargo; took: 0:00:00.000049\r\nDEBUG: Importing rule: cargo_no_command; took: 0:00:00.000223\r\nDEBUG: Importing rule: cd_correction; took: 0:00:00.000950\r\nDEBUG: Importing rule: cd_mkdir; took: 0:00:00.000342\r\nDEBUG: Importing rule: cd_parent; took: 0:00:00.000050\r\nDEBUG: Importing rule: chmod_x; took: 0:00:00.000058\r\nDEBUG: Importing rule: composer_not_command; took: 0:00:00.001520\r\nDEBUG: Importing rule: cp_omitting_directory; took: 0:00:00.000677\r\nDEBUG: Importing rule: cpp11; took: 0:00:00.000324\r\nDEBUG: Importing rule: dirty_untar; took: 0:00:00.001812\r\nDEBUG: Importing rule: dirty_unzip; took: 0:00:00.000257\r\nDEBUG: Importing rule: django_south_ghost; took: 0:00:00.000066\r\nDEBUG: Importing rule: django_south_merge; took: 0:00:00.000113\r\nDEBUG: Importing rule: docker_not_command; took: 0:00:00.000528\r\nDEBUG: Importing rule: dry; took: 0:00:00.000068\r\nDEBUG: Importing rule: fab_command_not_found; took: 0:00:00.000396\r\nDEBUG: Importing rule: fix_alt_space; took: 0:00:00.000337\r\nDEBUG: Importing rule: fix_file; took: 
0:00:00.003110\r\nDEBUG: Importing rule: gem_unknown_command; took: 0:00:00.000506\r\nDEBUG: Importing rule: git_add; took: 0:00:00.000520\r\nDEBUG: Importing rule: git_add_force; took: 0:00:00.000252\r\nDEBUG: Importing rule: git_bisect_usage; took: 0:00:00.000249\r\nDEBUG: Importing rule: git_branch_delete; took: 0:00:00.000232\r\nDEBUG: Importing rule: git_branch_exists; took: 0:00:00.000309\r\nDEBUG: Importing rule: git_branch_list; took: 0:00:00.000236\r\nDEBUG: Importing rule: git_checkout; took: 0:00:00.000254\r\nDEBUG: Importing rule: git_diff_no_index; took: 0:00:00.000238\r\nDEBUG: Importing rule: git_diff_staged; took: 0:00:00.000228\r\nDEBUG: Importing rule: git_fix_stash; took: 0:00:00.000252\r\nDEBUG: Importing rule: git_flag_after_filename; took: 0:00:00.000231\r\nDEBUG: Importing rule: git_help_aliased; took: 0:00:00.000231\r\nDEBUG: Importing rule: git_not_command; took: 0:00:00.000363\r\nDEBUG: Importing rule: git_pull; took: 0:00:00.000242\r\nDEBUG: Importing rule: git_pull_clone; took: 0:00:00.000239\r\nDEBUG: Importing rule: git_pull_uncommitted_changes; took: 0:00:00.000244\r\nDEBUG: Importing rule: git_push; took: 0:00:00.000246\r\nDEBUG: Importing rule: git_push_force; took: 0:00:00.000238\r\nDEBUG: Importing rule: git_push_pull; took: 0:00:00.000221\r\nDEBUG: Importing rule: git_push_without_commits; took: 0:00:00.000343\r\nDEBUG: Importing rule: git_rebase_merge_dir; took: 0:00:00.000250\r\nDEBUG: Importing rule: git_rebase_no_changes; took: 0:00:00.000164\r\nDEBUG: Importing rule: git_remote_seturl_add; took: 0:00:00.000159\r\nDEBUG: Importing rule: git_rm_local_modifications; took: 0:00:00.000241\r\nDEBUG: Importing rule: git_rm_recursive; took: 0:00:00.000493\r\nDEBUG: Importing rule: git_rm_staged; took: 0:00:00.000347\r\nDEBUG: Importing rule: git_stash; took: 0:00:00.000286\r\nDEBUG: Importing rule: git_stash_pop; took: 0:00:00.000281\r\nDEBUG: Importing rule: git_tag_force; took: 0:00:00.000268\r\nDEBUG: Importing rule: git_two_dashes; took: 0:00:00.000239\r\nDEBUG: Importing rule: go_run; took: 0:00:00.000217\r\nDEBUG: Importing rule: gradle_no_task; took: 0:00:00.000566\r\nDEBUG: Importing rule: gradle_wrapper; took: 0:00:00.000227\r\nDEBUG: Importing rule: grep_arguments_order; took: 0:00:00.000235\r\nDEBUG: Importing rule: grep_recursive; took: 0:00:00.000222\r\nDEBUG: Importing rule: grunt_task_not_found; took: 0:00:00.000479\r\nDEBUG: Importing rule: gulp_not_task; took: 0:00:00.000227\r\nDEBUG: Importing rule: has_exists_script; took: 0:00:00.000240\r\nDEBUG: Importing rule: heroku_not_command; took: 0:00:00.000310\r\nDEBUG: Importing rule: history; took: 0:00:00.000067\r\nDEBUG: Importing rule: hostscli; took: 0:00:00.000383\r\nDEBUG: Importing rule: ifconfig_device_not_found; took: 0:00:00.000296\r\nDEBUG: Importing rule: java; took: 0:00:00.000226\r\nDEBUG: Importing rule: javac; took: 0:00:00.000216\r\nDEBUG: Importing rule: lein_not_task; took: 0:00:00.000370\r\nDEBUG: Importing rule: ln_no_hard_link; took: 0:00:00.000237\r\nDEBUG: Importing rule: ln_s_order; took: 0:00:00.000241\r\nDEBUG: Importing rule: ls_all; took: 0:00:00.000208\r\nDEBUG: Importing rule: ls_lah; took: 0:00:00.000347\r\nDEBUG: Importing rule: man; took: 0:00:00.000241\r\nDEBUG: Importing rule: man_no_space; took: 0:00:00.000062\r\nDEBUG: Importing rule: mercurial; took: 0:00:00.000234\r\nDEBUG: Importing rule: missing_space_before_subcommand; took: 0:00:00.000085\r\nDEBUG: Importing rule: mkdir_p; took: 0:00:00.000252\r\nDEBUG: Importing rule: mvn_no_command; took: 
0:00:00.000213\r\nDEBUG: Importing rule: mvn_unknown_lifecycle_phase; took: 0:00:00.000260\r\nDEBUG: Importing rule: no_command; took: 0:00:00.000261\r\nDEBUG: Importing rule: no_such_file; took: 0:00:00.000066\r\nDEBUG: Importing rule: npm_missing_script; took: 0:00:00.000593\r\nDEBUG: Importing rule: npm_run_script; took: 0:00:00.000235\r\nDEBUG: Importing rule: npm_wrong_command; took: 0:00:00.000378\r\nDEBUG: Importing rule: open; took: 0:00:00.000605\r\nDEBUG: Importing rule: pacman; took: 0:00:00.000366\r\nDEBUG: Importing rule: pacman_not_found; took: 0:00:00.000111\r\nDEBUG: Importing rule: path_from_history; took: 0:00:00.000099\r\nDEBUG: Importing rule: pip_unknown_command; took: 0:00:00.000315\r\nDEBUG: Importing rule: port_already_in_use; took: 0:00:00.000183\r\nDEBUG: Importing rule: python_command; took: 0:00:00.000261\r\nDEBUG: Importing rule: python_execute; took: 0:00:00.000232\r\nDEBUG: Importing rule: quotation_marks; took: 0:00:00.000052\r\nDEBUG: Importing rule: react_native_command_unrecognized; took: 0:00:00.000224\r\nDEBUG: Importing rule: remove_trailing_cedilla; took: 0:00:00.000051\r\nDEBUG: Importing rule: rm_dir; took: 0:00:00.000242\r\nDEBUG: Importing rule: rm_root; took: 0:00:00.000235\r\nDEBUG: Importing rule: scm_correction; took: 0:00:00.000254\r\nDEBUG: Importing rule: sed_unterminated_s; took: 0:00:00.000222\r\nDEBUG: Importing rule: sl_ls; took: 0:00:00.000052\r\nDEBUG: Importing rule: ssh_known_hosts; took: 0:00:00.000239\r\nDEBUG: Importing rule: sudo; took: 0:00:00.000059\r\nDEBUG: Importing rule: sudo_command_from_user_path; took: 0:00:00.000231\r\nDEBUG: Importing rule: switch_lang; took: 0:00:00.000091\r\nDEBUG: Importing rule: systemctl; took: 0:00:00.000378\r\nDEBUG: Importing rule: test.py; took: 0:00:00.000051\r\nDEBUG: Importing rule: tmux; took: 0:00:00.000212\r\nDEBUG: Importing rule: touch; took: 0:00:00.000223\r\nDEBUG: Importing rule: tsuru_login; took: 0:00:00.000281\r\nDEBUG: Importing rule: tsuru_not_command; took: 0:00:00.000223\r\nDEBUG: Importing rule: unknown_command; took: 0:00:00.000062\r\nDEBUG: Importing rule: vagrant_up; took: 0:00:00.000308\r\nDEBUG: Importing rule: whois; took: 0:00:00.000282\r\nDEBUG: Importing rule: workon_doesnt_exists; took: 0:00:00.000309\r\nDEBUG: Importing rule: yarn_alias; took: 0:00:00.000219\r\nDEBUG: Importing rule: yarn_command_not_found; took: 0:00:00.000494\r\nDEBUG: Importing rule: yarn_command_replaced; took: 0:00:00.000357\r\nDEBUG: Importing rule: yarn_help; took: 0:00:00.000232\r\nDEBUG: Trying rule: dirty_unzip; took: 0:00:00.000568\r\nNo fucks given\r\nDEBUG: Total took: 0:00:03.282835\r\n```", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/nvbn/thefuck/commit/2d81166213c403dce5c04d1fb73ba5d3e57d6676", "file_loc": {"base_commit": "2d81166213c403dce5c04d1fb73ba5d3e57d6676", "files": [{"path": "tests/shells/test_fish.py", "status": "modified", "Loc": {"('TestFish', 'test_get_overridden_aliases', 29)": {"mod": [31, 32]}}}, {"path": "thefuck/shells/fish.py", "status": "modified", "Loc": {"('Fish', '_get_overridden_aliases', 40)": {"mod": [46]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["thefuck/shells/fish.py"], "doc": [], "test": ["tests/shells/test_fish.py"], "config": [], "asset": []}}, {"organization": "nvbn", "repo_name": "thefuck", "base_commit": "6da0bc557f0fd94ea1397d3a7f508be896cc98d8", "iss_html_url": 
"https://github.com/nvbn/thefuck/issues/1120", "iss_label": "", "title": "Trying rule missing_space_before_subcommand taking so long", "body": "\r\n\r\n\r\n\r\nThe output of `thefuck --version` (something like `The Fuck 3.1 using Python\r\n3.5.0 and Bash 4.4.12(1)-release`):\r\n\r\n The Fuck 3.29 using Python 3.8.2 and ZSH 5.8\r\n\r\nYour system (Debian 7, ArchLinux, Windows, etc.):\r\n\r\n ubuntu 20.04 on wsl2\r\n\r\nHow to reproduce the bug:\r\n\r\n env THEFUCK_DEBUG=true thefuck test\r\n\r\nThe output of The Fuck with `THEFUCK_DEBUG=true` exported (typically execute `export THEFUCK_DEBUG=true` in your shell before The Fuck):\r\n\r\n DEBUG: Trying rule: missing_space_before_subcommand; took: 0:00:08.341279\r\n No fucks given\r\n\r\nAnything else you think is relevant:\r\n\r\nI have no idea why this taking so long. anyone else having this problem?\r\n\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/KiaraGrouwstra/thefuck/commit/6da0bc557f0fd94ea1397d3a7f508be896cc98d8", "file_loc": {"base_commit": "6da0bc557f0fd94ea1397d3a7f508be896cc98d8", "files": [{"path": "README.md", "status": "modified", "Loc": {"(None, None, 436)": {"add": [436]}, "(None, None, 468)": {"add": [468]}}}, {"path": "tests/test_conf.py", "status": "modified", "Loc": {"('TestSettingsFromEnv', 'test_from_env', 48)": {"add": [67], "mod": [57]}}}, {"path": "tests/test_utils.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [96]}}}, {"path": "thefuck/conf.py", "status": "modified", "Loc": {"('Settings', '_val_from_env', 91)": {"mod": [104]}}}, {"path": "thefuck/const.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [46, 61]}}}, {"path": "thefuck/utils.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [106]}, "(None, 'get_all_executables', 112)": {"add": [121]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["thefuck/conf.py", "thefuck/utils.py", "thefuck/const.py"], "doc": ["README.md"], "test": ["tests/test_conf.py", "tests/test_utils.py"], "config": [], "asset": []}}, {"organization": "nvbn", "repo_name": "thefuck", "base_commit": "a84671dd3b7505d4d73f11ee9c7d057429542e24", "iss_html_url": "https://github.com/nvbn/thefuck/issues/20", "iss_label": "", "title": "Some Unicode error in Ubuntu 14.10", "body": "``` bash\n$ apt-get update\nE: \u041d\u0435 \u0443\u0434\u0430\u043b\u043e\u0441\u044c \u043e\u0442\u043a\u0440\u044b\u0442\u044c \u0444\u0430\u0439\u043b \u0431\u043b\u043e\u043a\u0438\u0440\u043e\u0432\u043a\u0438 /var/lib/apt/lists/lock - open (13: \u041e\u0442\u043a\u0430\u0437\u0430\u043d\u043e \u0432 \u0434\u043e\u0441\u0442\u0443\u043f\u0435)\nE: \u041d\u0435\u0432\u043e\u0437\u043c\u043e\u0436\u043d\u043e \u0437\u0430\u0431\u043b\u043e\u043a\u0438\u0440\u043e\u0432\u0430\u0442\u044c \u043a\u0430\u0442\u0430\u043b\u043e\u0433 /var/lib/apt/lists/\nE: \u041d\u0435 \u0443\u0434\u0430\u043b\u043e\u0441\u044c \u043e\u0442\u043a\u0440\u044b\u0442\u044c \u0444\u0430\u0439\u043b \u0431\u043b\u043e\u043a\u0438\u0440\u043e\u0432\u043a\u0438 /var/lib/dpkg/lock - open (13: \u041e\u0442\u043a\u0430\u0437\u0430\u043d\u043e \u0432 \u0434\u043e\u0441\u0442\u0443\u043f\u0435)\nE: \u041d\u0435 \u0443\u0434\u0430\u043b\u043e\u0441\u044c \u0432\u044b\u043f\u043e\u043b\u043d\u0438\u0442\u044c \u0431\u043b\u043e\u043a\u0438\u0440\u043e\u0432\u043a\u0443 
\u0443\u043f\u0440\u0430\u0432\u043b\u044f\u044e\u0449\u0435\u0433\u043e \u043a\u0430\u0442\u0430\u043b\u043e\u0433\u0430 (/var/lib/dpkg/); \u0443 \u0432\u0430\u0441 \u0435\u0441\u0442\u044c \u043f\u0440\u0430\u0432\u0430 \u0441\u0443\u043f\u0435\u0440\u043f\u043e\u043b\u044c\u0437\u043e\u0432\u0430\u0442\u0435\u043b\u044f?\n$ fuck\nTraceback (most recent call last):\n File \"/usr/local/bin/thefuck\", line 9, in \n load_entry_point('thefuck==1.7', 'console_scripts', 'thefuck')()\n File \"/usr/local/lib/python2.7/dist-packages/thefuck/main.py\", line 91, in main\n matched_rule = get_matched_rule(command, rules, settings)\n File \"/usr/local/lib/python2.7/dist-packages/thefuck/main.py\", line 67, in get_matched_rule\n if rule.match(command, settings):\n File \"/usr/local/lib/python2.7/dist-packages/thefuck/utils.py\", line 41, in wrapper\n return fn(command, settings)\n File \"/usr/local/lib/python2.7/dist-packages/thefuck/rules/no_command.py\", line 19, in match\n output = _get_output(command, settings)\n File \"/usr/local/lib/python2.7/dist-packages/thefuck/rules/no_command.py\", line 13, in _get_output\n return result.stderr.read().decode()\nUnicodeDecodeError: 'ascii' codec can't decode byte 0xd0 in position 0: ordinal not in range(128)\n```\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/nvbn/thefuck/commit/a84671dd3b7505d4d73f11ee9c7d057429542e24", "file_loc": {"base_commit": "a84671dd3b7505d4d73f11ee9c7d057429542e24", "files": [{"path": "setup.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [5]}}}, {"path": "thefuck/rules/no_command.py", "status": "modified", "Loc": {"(None, '_get_output', 9)": {"mod": [13]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["thefuck/rules/no_command.py", "setup.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "nvbn", "repo_name": "thefuck", "base_commit": "622298549172754afff07a8ea1f55358062e17a7", "iss_html_url": "https://github.com/nvbn/thefuck/issues/330", "iss_label": "", "title": "Add command options (--version, --help, --update/--upgrade)", "body": "And perhaps a manpage too, even if it only says \"Please use fuck --help for documentation\"\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/nvbn/thefuck/commit/622298549172754afff07a8ea1f55358062e17a7", "file_loc": {"base_commit": "622298549172754afff07a8ea1f55358062e17a7", "files": [{"path": "README.md", "status": "modified", "Loc": {"(None, None, 110)": {"mod": [110]}, "(None, None, 112)": {"mod": [112]}}}, {"path": "thefuck/main.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0, 3], "mod": [83, 99, 100]}, "(None, 'print_alias', 100)": {"add": [101]}, "(None, 'fix_command', 86)": {"mod": [97]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["thefuck/main.py"], "doc": ["README.md"], "test": [], "config": [], "asset": []}}, {"organization": "nvbn", "repo_name": "thefuck", "base_commit": "284d49da8d0ab3252b5426423b608033d39c2669", "iss_html_url": "https://github.com/nvbn/thefuck/issues/786", "iss_label": "next release", "title": "\"TypeError: 'module' object is not callable\" On any invocation of thefuck", "body": "\r\n\r\n\r\n\r\nThe output of `thefuck --version` (something like `The Fuck 
3.1 using Python 3.5.0`):\r\n\r\n The Fuck 3.25 using Python 3.6.4+\r\n\r\nYour shell and its version (`bash`, `zsh`, *Windows PowerShell*, etc.):\r\n\r\n GNU bash, version 4.4.18(1)-release (x86_64-pc-linux-gnu)\r\n\r\nYour system (Debian 7, ArchLinux, Windows, etc.):\r\n\r\n Ubuntu 18.04, Bionic Beaver\r\n\r\nHow to reproduce the bug:\r\n\r\n Execute any bad command (I tested with `cd..` and `apt install whatever`. Then enter `fuck`.\r\n\r\nThe output of The Fuck with `THEFUCK_DEBUG=true` exported (typically execute `export THEFUCK_DEBUG=true` in your shell before The Fuck):\r\n\r\n```\r\nDEBUG: Run with settings: {'alter_history': True,\r\n 'debug': True,\r\n 'env': {'GIT_TRACE': '1', 'LANG': 'C', 'LC_ALL': 'C'},\r\n 'exclude_rules': [],\r\n 'history_limit': None,\r\n 'instant_mode': False,\r\n 'no_colors': False,\r\n 'priority': {},\r\n 'repeat': False,\r\n 'require_confirmation': True,\r\n 'rules': [],\r\n 'slow_commands': ['lein', 'react-native', 'gradle', './gradlew', 'vagrant'],\r\n 'user_dir': PosixPath('/home/thomasokeeffe/.config/thefuck'),\r\n 'wait_command': 3,\r\n 'wait_slow_command': 15}\r\nDEBUG: Received output: \r\nDEBUG: Call: export THEFUCK_DEBUG=true; with env: {'CLUTTER_IM_MODULE': 'xim', 'LS_COLORS': 'rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:', 'LESSCLOSE': '/usr/bin/lesspipe %s %s', 'XDG_MENU_PREFIX': 'gnome-', 'LANG': 'C', 'GDM_LANG': 'en_US', 'MANAGERPID': '1425', 'DISPLAY': ':0', 'INVOCATION_ID': '09b52cf5b26f4acf8d4fcf48e96663bb', 'UNITY_DEFAULT_PROFILE': 'unity', 'COMPIZ_CONFIG_PROFILE': 'ubuntu', 'GTK2_MODULES': 'overlay-scrollbar', 'DOOMWADDIR': '/opt/doom', 'GTK_CSD': '0', 'COLORTERM': 'truecolor', 'TF_SHELL_ALIASES': 'alias alert=\\'notify-send --urgency=low -i \"$([ $? 
= 0 ] && echo terminal || echo error)\" \"$(history|tail -n1|sed -e \\'\\\\\\'\\'s/^\\\\s*[0-9]\\\\+\\\\s*//;s/[;&|]\\\\s*alert$//\\'\\\\\\'\\')\"\\'\\nalias dfhack=\\'~/df_linux/dfhack\\'\\nalias dwarff=\\'/home/thomasokeeffe/df_linux/df\\'\\nalias egrep=\\'egrep --color=auto\\'\\nalias fgrep=\\'fgrep --color=auto\\'\\nalias grep=\\'grep --color=auto\\'\\nalias l=\\'ls -CF\\'\\nalias la=\\'ls -A\\'\\nalias ll=\\'ls -alF\\'\\nalias ls=\\'ls --color=auto\\'\\nalias pip=\\'pip3\\'\\nalias python=\\'python3\\'', 'JAVA_HOME': '/usr/lib/jvm/java-8-oracle/', 'J2SDKDIR': '/usr/lib/jvm/java-9-oracle', 'PYTHONIOENCODING': 'utf-8', 'SSH_AUTH_SOCK': '/run/user/1000/keyring/ssh', 'MANDATORY_PATH': '/usr/share/gconf/unity.mandatory.path', 'XDG_GREETER_DATA_DIR': '/var/lib/lightdm-data/thomasokeeffe', 'DERBY_HOME': '/usr/lib/jvm/java-9-oracle/db', 'USER': 'thomasokeeffe', 'DESKTOP_SESSION': 'unity', 'QT4_IM_MODULE': 'xim', 'TEXTDOMAINDIR': '/usr/share/locale/', 'DEFAULTS_PATH': '/usr/share/gconf/unity.default.path', 'PWD': '/home/thomasokeeffe', 'HOME': '/home/thomasokeeffe', 'JOURNAL_STREAM': '9:28556', 'TEXTDOMAIN': 'im-config', 'J2REDIR': '/usr/lib/jvm/java-9-oracle', 'QT_ACCESSIBILITY': '1', 'XDG_SESSION_TYPE': 'x11', 'COMPIZ_BIN_PATH': '/usr/bin/', 'XDG_DATA_DIRS': '/usr/share/unity:/usr/share/unity:/usr/local/share:/usr/share:/var/lib/snapd/desktop:/var/lib/snapd/desktop', 'XDG_SESSION_DESKTOP': 'unity', 'WINEDEBUG': '-all', 'SSH_AGENT_LAUNCHER': 'gnome-keyring', 'GTK_MODULES': 'gail:atk-bridge:unity-gtk-module', 'GNOME_SESSION_XDG_SESSION_PATH': '/org/freedesktop/DisplayManager/Session0', 'TERM': 'xterm-256color', 'VTE_VERSION': '5002', 'SHELL': '/bin/bash', 'XDG_SEAT_PATH': '/org/freedesktop/DisplayManager/Seat0', 'QT_IM_MODULE': 'ibus', 'XMODIFIERS': '@im=ibus', 'IM_CONFIG_PHASE': '2', 'XDG_CURRENT_DESKTOP': 'Unity:Unity7:ubuntu', 'GPG_AGENT_INFO': '/home/thomasokeeffe/.gnupg/S.gpg-agent:0:1:', 'TF_ALIAS': 'fuck', 'UNITY_HAS_3D_SUPPORT': 'true', 'SHLVL': '2', 'LANGUAGE': 'en_US', 'WINDOWID': '67108870', 'GDMSESSION': 'unity', 'GNOME_DESKTOP_SESSION_ID': 'this-is-deprecated', 'LOGNAME': 'thomasokeeffe', 'DBUS_SESSION_BUS_ADDRESS': 'unix:path=/run/user/1000/bus', 'XDG_RUNTIME_DIR': '/run/user/1000', 'XAUTHORITY': '/home/thomasokeeffe/.Xauthority', 'TF_HISTORY': '\\t python\\n\\t fuck\\n\\t source ~/.bashrc\\n\\t fuck\\n\\t apt install whatever\\n\\t fuck\\n\\t cd..\\n\\t fuck\\n\\t fuck --version\\n\\t export THEFUCK_DEBUG=true', 'XDG_SESSION_PATH': '/org/freedesktop/DisplayManager/Session0', 'XDG_CONFIG_DIRS': '/etc/xdg/xdg-unity:/etc/xdg/xdg-unity:/etc/xdg', 'PATH': '/usr/bin/ski:/home/thomasokeeffe/.local/bin:/opt/doom:/usr/bin/python3:/usr/bin/ski:/home/thomasokeeffe/.local/bin:/opt/doom:/usr/bin/python3:/home/thomasokeeffe/.local/share/umake/bin:/home/thomasokeeffe/bin:/home/thomasokeeffe/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/usr/lib/jvm/java-9-oracle/bin:/usr/lib/jvm/java-9-oracle/db/bin', 'THEFUCK_DEBUG': 'true', 'LD_PRELOAD': 'libgtk3-nocsd.so.0', 'SESSION_MANAGER': 'local/Wirecat:@/tmp/.ICE-unix/1738,unix/Wirecat:/tmp/.ICE-unix/1738', 'LESSOPEN': '| /usr/bin/lesspipe %s', 'GTK_IM_MODULE': 'ibus', '_': '/home/thomasokeeffe/.local/bin/thefuck', 'LC_ALL': 'C', 'GIT_TRACE': '1'}; is slow: took: 0:00:00.001356\r\nDEBUG: Importing rule: ag_literal; took: 0:00:00.000609\r\nDEBUG: Importing rule: apt_get; took: 0:00:00.001838\r\nDEBUG: Total took: 0:00:00.028332\r\nTraceback (most recent call last):\r\n File 
\"/home/thomasokeeffe/.local/bin/thefuck\", line 11, in \r\n sys.exit(main())\r\n File \"/home/thomasokeeffe/.local/lib/python3.6/site-packages/thefuck/entrypoints/main.py\", line 25, in main\r\n fix_command(known_args)\r\n File \"/home/thomasokeeffe/.local/lib/python3.6/site-packages/thefuck/entrypoints/fix_command.py\", line 41, in fix_command\r\n corrected_commands = get_corrected_commands(command)\r\n File \"/home/thomasokeeffe/.local/lib/python3.6/site-packages/thefuck/corrector.py\", line 89, in get_corrected_commands\r\n corrected for rule in get_rules()\r\n File \"/home/thomasokeeffe/.local/lib/python3.6/site-packages/thefuck/corrector.py\", line 49, in get_rules\r\n key=lambda rule: rule.priority)\r\n File \"/home/thomasokeeffe/.local/lib/python3.6/site-packages/thefuck/corrector.py\", line 17, in get_loaded_rules\r\n rule = Rule.from_path(path)\r\n File \"/home/thomasokeeffe/.local/lib/python3.6/site-packages/thefuck/types.py\", line 140, in from_path\r\n rule_module = load_source(name, str(path))\r\n File \"/usr/lib/python3.6/imp.py\", line 172, in load_source\r\n module = _load(spec)\r\n File \"\", line 696, in _load\r\n File \"\", line 677, in _load_unlocked\r\n File \"\", line 678, in exec_module\r\n File \"\", line 219, in _call_with_frames_removed\r\n File \"/home/thomasokeeffe/.local/lib/python3.6/site-packages/thefuck/rules/apt_get.py\", line 8, in \r\n command_not_found = CommandNotFound()\r\nTypeError: 'module' object is not callable\r\n```\r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/nvbn/thefuck/commit/fb39d0bbd349e916ae12a77f04efd151dd046e6b\n\nhttps://github.com/nvbn/thefuck/commit/284d49da8d0ab3252b5426423b608033d39c2669", "file_loc": {"base_commit": "284d49da8d0ab3252b5426423b608033d39c2669", "files": [{"path": "tests/rules/test_apt_get.py", "status": "modified", "Loc": {"(None, 'test_match', 13)": {"mod": [15, 16, 17]}, "(None, 'test_not_match', 30)": {"mod": [33, 34, 35]}, "(None, 'test_get_new_command', 49)": {"mod": [52, 53, 54]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": ["tests/rules/test_apt_get.py"], "config": [], "asset": []}}, {"organization": "home-assistant", "repo_name": "core", "base_commit": "2b5e7c26111e447c2714284151c2e7555abd11e4", "iss_html_url": "https://github.com/home-assistant/core/issues/27175", "iss_label": "integration: google_assistant", "title": "Google assistant: something went wrong when using alarm", "body": "\r\n\r\n**Home Assistant release with the issue:**\r\n0.100.0b0\r\n\r\n\r\n\r\n\r\n**Last working Home Assistant release (if known):**\r\n\r\n\r\n**Operating environment (Hass.io/Docker/Windows/etc.):**\r\nhassio\r\n\r\n**Integration:**\r\n\r\nnabu casa cloud\r\ngoogle assistant\r\nenvisalink\r\n\r\n**Description of problem:**\r\nUsing the google assistant to arm home/arm away/disarm causes the google assistant to indicate that \"something went wrong\" although it actually performed the action.\r\nI am using the envisalink component which allows you to specify the code so that it is sent with each service call. I tried with/without the code configuration and it made no difference. 
\r\n\r\n\r\n**Problem-relevant `configuration.yaml` entries and (fill out even if it seems unimportant):**\r\n```yaml\r\n\r\n```\r\n\r\n**Traceback (if applicable):**\r\n```\r\n\r\n```\r\n\r\n**Additional information:**\r\n", "code": null, "pr_html_url": "https://github.com/home-assistant/core/pull/36942", "commit_html_url": null, "file_loc": {"base_commit": "2b5e7c26111e447c2714284151c2e7555abd11e4", "files": [{"path": "homeassistant/components/google_assistant/trait.py", "status": "modified", "Loc": {"('ArmDisArmTrait', None, 974)": {"add": [990, 1000]}, "('ArmDisArmTrait', 'sync_attributes', 1001)": {"mod": [1005]}, "('ArmDisArmTrait', 'execute', 1031)": {"mod": [1034, 1038]}}}, {"path": "tests/components/google_assistant/test_trait.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [1033]}, "(None, 'test_arm_disarm_arm_away', 865)": {"mod": [876, 895, 896, 897, 898, 899, 900, 901, 902, 903, 904, 905, 906, 907, 908, 909, 910, 911, 912, 913]}, "(None, 'test_arm_disarm_disarm', 1035)": {"mod": [1046, 1053, 1054, 1055, 1056, 1057, 1058, 1059, 1060, 1061, 1062, 1063, 1064, 1065, 1066, 1067, 1068, 1069, 1070]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["homeassistant/components/google_assistant/trait.py"], "doc": [], "test": ["tests/components/google_assistant/test_trait.py"], "config": [], "asset": []}}, {"organization": "home-assistant", "repo_name": "core", "base_commit": "fb7fb0ea78ee335cd23f3647223a675718ccf048", "iss_html_url": "https://github.com/home-assistant/core/issues/40316", "iss_label": "integration: knx", "title": "KNX problem with 0.115.0 and 0.115.1", "body": "## The problem\r\nKNX integration has changed behavior and don't work fine:\r\n1) it is possible to read the status of a scene only if it is launched from the KNX bus but not if it is launched from the HA\r\n2) KNX climate don't read operation_mode_state_address correctly, when the operation mode is changed it reads the correct state then it is changed to \"standby\"\r\n\r\n## Environment\r\nHome Assistant 0.115.1\r\nFrontend: 20200917.1 - latest\r\nRaspberry 3\r\narch | armv7l\r\nchassis | embedded\r\ndev | false\r\ndocker | true\r\ndocker_version | 19.03.11\r\nhassio | true\r\nhost_os | HassOS 4.13\r\ninstallation_type | Home Assistant OS\r\nos_name | Linux\r\nos_version | 4.19.127-v7\r\npython_version | 3.8.5\r\nsupervisor | 245\r\ntimezone | Europe/Rome\r\nversion | 0.115.1\r\nvirtualenv | false\r\n\r\n- Home Assistant Core release with the issue: 0.115.1\r\n- Last working Home Assistant Core release (if known): 0.113.3\r\n- Operating environment (OS/Container/Supervised/Core): 4.12 \r\n- Integration causing this issue: KNX\r\n- Link to integration documentation on our website: https://www.home-assistant.io/integrations/knx/\r\n", "code": null, "pr_html_url": "https://github.com/home-assistant/core/pull/40472", "commit_html_url": null, "file_loc": {"base_commit": "fb7fb0ea78ee335cd23f3647223a675718ccf048", "files": [{"path": "homeassistant/components/knx/manifest.json", "status": "modified", "Loc": {"(None, None, 5)": {"mod": [5]}}}, {"path": "requirements_all.txt", "status": "modified", "Loc": {"(None, None, 2268)": {"mod": [2268]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Doc\nJson"}, "loctype": {"code": 
["homeassistant/components/knx/manifest.json"], "doc": [], "test": [], "config": ["requirements_all.txt"], "asset": []}}, {"organization": "home-assistant", "repo_name": "core", "base_commit": "faedba04079d2c999a479118b5189ef4c0bff060", "iss_html_url": "https://github.com/home-assistant/core/issues/77928", "iss_label": "integration: velux\nstale", "title": "Somfy blind motors cannot be assigned to a room", "body": "### The problem\n\nSomfy motors will return `None` as serial number via the Velux KLF-200:\r\n[Handle devices without serial numbers.](https://github.com/Julius2342/pyvlx/pull/42/commits/d409d66db8732553e928f5dd9d00d458ba638dea)\r\n\r\nThis serial is usesd as unique id here:\r\n[core/homeassistant/components/velux/__init__.py#L114](https://github.com/home-assistant/core/blob/dev/homeassistant/components/velux/__init__.py#L114)\r\n\r\nCould it be reasonable to return the node name instead of `None`?\r\n```python\r\n if self.node.serial_number:\r\n return self.node.serial_number\r\n elif self.node.name:\r\n return self.node.name\r\n else:\r\n return \"velux_#\" + str(self.node.node_id)\r\n```\n\n### What version of Home Assistant Core has the issue?\n\n2022.8.7\n\n### What was the last working version of Home Assistant Core?\n\n_No response_\n\n### What type of installation are you running?\n\nHome Assistant OS\n\n### Integration causing the issue\n\nVelux\n\n### Link to integration documentation on our website\n\nhttps://www.home-assistant.io/integrations/velux/\n\n### Diagnostics information\n\n_No response_\n\n### Example YAML snippet\n\n_No response_\n\n### Anything in the logs that might be useful for us?\n\n_No response_\n\n### Additional information\n\nRelated issues:\r\n[66262](https://github.com/home-assistant/core/issues/66262)\r\n[35935](https://github.com/home-assistant/core/issues/35935)\r\n[74009](https://github.com/home-assistant/core/issues/74009)\r\n", "code": null, "pr_html_url": "https://github.com/home-assistant/core/pull/117508", "commit_html_url": null, "file_loc": {"base_commit": "faedba04079d2c999a479118b5189ef4c0bff060", "files": [{"path": "homeassistant/components/velux/__init__.py", "status": "modified", "Loc": {"('VeluxEntity', None, 106)": {"mod": [111]}, "('VeluxEntity', '__init__', 111)": {"mod": [114]}}}, {"path": "homeassistant/components/velux/cover.py", "status": "modified", "Loc": {"(None, 'async_setup_entry', 26)": {"mod": [32]}, "('VeluxCover', None, 38)": {"mod": [44]}, "('VeluxCover', '__init__', 44)": {"mod": [46]}}}, {"path": "homeassistant/components/velux/light.py", "status": "modified", "Loc": {"(None, 'async_setup_entry', 19)": {"mod": [26]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["homeassistant/components/velux/light.py", "homeassistant/components/velux/cover.py", "homeassistant/components/velux/__init__.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "home-assistant", "repo_name": "core", "base_commit": "551a584ca69771804b6f094eceb67dcb25a2f627", "iss_html_url": "https://github.com/home-assistant/core/issues/68620", "iss_label": "needs-more-information\nintegration: overkiz", "title": "Polling interval for stateless (e.g. Somfy (Oceania)) is not applied in Overkiz", "body": "### The problem\n\nEvery day I get a \"Gateway ID\" error in Overkiz error that reads as below. 
Same problem as [#66606](https://github.com/home-assistant/core/issues/66606) \r\n\r\n\"Translation Error: The intl string context variable \"gateway id\" was not provided to the string \"Gateway: {gateway id}\" Overkiz (by Somfy)\". \r\n\r\nWhen I click \"Reconfigure\" and reenter my password, the problem is corrected. But then it reoccurs in the next day or so.\r\n\r\nLooking at the log, it seems like there's some really aggressive polling going on? \r\n\r\n\n\n### What version of Home Assistant Core has the issue?\n\ncore-2022.3.5\n\n### What was the last working version of Home Assistant Core?\n\n_No response_\n\n### What type of installation are you running?\n\nHome Assistant Supervised\n\n### Integration causing the issue\n\nOverkiz (by Somfy)\n\n### Link to integration documentation on our website\n\nhttps://www.home-assistant.io/integrations/overkiz\n\n### Diagnostics information\n\n[config_entry-overkiz-0bf20335f9aeaa86644cb071861f6ef1.json.txt](https://github.com/home-assistant/core/files/8341651/config_entry-overkiz-0bf20335f9aeaa86644cb071861f6ef1.json.txt)\r\n\n\n### Example YAML snippet\n\n_No response_\n\n### Anything in the logs that might be useful for us?\n\n_No response_\n\n### Additional information\n\n_No response_", "code": null, "pr_html_url": "https://github.com/home-assistant/core/pull/133617", "commit_html_url": null, "file_loc": {"base_commit": "551a584ca69771804b6f094eceb67dcb25a2f627", "files": [{"path": "homeassistant/components/overkiz/__init__.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [43]}, "(None, 'async_setup_entry', 57)": {"mod": [116, 117, 118, 119, 122]}}}, {"path": "homeassistant/components/overkiz/const.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [46]}}}, {"path": "homeassistant/components/overkiz/coordinator.py", "status": "modified", "Loc": {"('OverkizDataUpdateCoordinator', None, 36)": {"add": [38]}, "('OverkizDataUpdateCoordinator', '__init__', 39)": {"add": [67], "mod": [48, 62, 63, 64]}, "('OverkizDataUpdateCoordinator', '_async_update_data', 69)": {"add": [104], "mod": [106]}, "(None, None, None)": {"add": [126], "mod": [29]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["homeassistant/components/overkiz/const.py", "homeassistant/components/overkiz/__init__.py", "homeassistant/components/overkiz/coordinator.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "f7590d47641cedbf630b909aa8f53930c4a9ce5c", "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/983", "iss_label": "site-bug", "title": "VRV - NoneType object is not iterable", "body": "\r\n\r\n\r\n## Checklist\r\n\r\n\r\n\r\n- [X] I'm reporting a bug unrelated to a specific site\r\n- [X] I've verified that I'm running yt-dlp version **2021.09.02**\r\n- [X] I've checked that all provided URLs are alive and playable in a browser\r\n- [X] The provided URLs do not contain any DRM to the best of my knowledge\r\n- [X] I've checked that all URLs and arguments with special characters are properly quoted or escaped\r\n- [X] I've searched the bugtracker for similar bug reports including closed ones\r\n- [X] I've read bugs section in FAQ\r\n\r\n\r\n## Verbose log\r\n\r\n\r\n\r\n```\r\nytdl -F -u PRIVATE -p PRIVATE \"https://vrv.co/watch/GRP5G39JR/The-Seven-Heavenly-Virtues:The-Angels-Descend\" --verbose\r\n[debug] Command-line config: 
['-F', '-u', 'PRIVATE', '-p', 'PRIVATE', 'https://vrv.co/watch/GRP5G39JR/The-Seven-Heavenly-Virtues:The-Angels-Descend', '--verbose']\r\n[debug] Encodings: locale cp1252, fs utf-8, out utf-8, pref cp1252\r\n[debug] yt-dlp version 2021.09.02 (exe)\r\n[debug] Python version 3.8.10 (CPython 64bit) - Windows-10-10.0.18363-SP0\r\n[debug] exe versions: ffmpeg 4.4-full_build-www.gyan.dev, ffprobe 4.4-full_build-www.gyan.dev\r\n[debug] Optional libraries: mutagen, pycryptodome, sqlite, websockets\r\n[debug] Proxy map: {}\r\n[vrv] None: Downloading webpage\r\n[vrv] Downloading Token Credentials JSON metadata\r\n[debug] [vrv] Extracting URL: https://vrv.co/watch/GRP5G39JR/The-Seven-Heavenly-Virtues:The-Angels-Descend\r\n[vrv] GRP5G39JR: Downloading resource path JSON metadata\r\n[vrv] GRP5G39JR: Downloading CMS Signing JSON metadata\r\n[vrv] GRP5G39JR: Downloading object JSON metadata\r\n[vrv] GRP5G39JR: Downloading video JSON metadata\r\n[vrv] GRP5G39JR: Downloading streams JSON metadata\r\n[vrv] GRP5G39JR: Downloading dash-audio-en-US information\r\n[vrv] GRP5G39JR: Downloading hls-audio-en-US information\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, vcodec:vp9.2(10), acodec, filesize, fs_approx, tbr, vbr, abr, asr, proto, vext, aext, hasaud, source, id\r\nERROR: 'NoneType' object is not iterable\r\nTraceback (most recent call last):\r\n File \"yt_dlp\\YoutubeDL.py\", line 1214, in wrapper\r\n File \"yt_dlp\\YoutubeDL.py\", line 1239, in __extract_info\r\n File \"yt_dlp\\extractor\\common.py\", line 584, in extract\r\n File \"yt_dlp\\extractor\\vrv.py\", line 221, in _real_extract\r\nTypeError: 'NoneType' object is not iterable\r\n```\r\n\r\n\r\n\r\n## Description\r\n\r\n\r\n\r\nI've noticed this is happening a little more often but it seems that the entire series for this one does this but then it works just fine on other series. So haven't really noticed where this is hanging up but i used `--write-pages` and got an extra dump file for this one vs. one that actually downloads, which looks like this.\r\n\r\n```\r\n\r\n \r\n \r\n \r\n VRV - Home of Your Favorite Channels\r\n \r\n
      [rest of the dump omitted: it is the generic VRV homepage, with account sign-up prompts, featured-series blurbs (Miss Kobayashi's Dragon Maid, TSUKIMICHI -Moonlit Fantasy-), VRV Premium marketing copy, and an "Ancient browser detected!" upgrade notice listing supported browser versions, rather than the video page that was requested]
      \r\n \r\n ```\r\n\r\nNot sure itf it is helpful but that's all I got for now.", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/yt-dlp/yt-dlp/commit/f7590d47641cedbf630b909aa8f53930c4a9ce5c", "file_loc": {"base_commit": "f7590d47641cedbf630b909aa8f53930c4a9ce5c", "files": [{"path": "yt_dlp/extractor/vrv.py", "status": "modified", "Loc": {"('VRVIE', '_real_extract', 168)": {"mod": [221]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["yt_dlp/extractor/vrv.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "50e93e03a7ca6ae35a319ea310104f7d6d91eee3", "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/3183", "iss_label": "geo-blocked\nsite-bug", "title": "Tele5 has an extraction error", "body": "### Checklist\n\n- [X] I'm reporting a broken site\n- [X] I've verified that I'm running yt-dlp version **2022.03.08.1**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))\n- [X] I've checked that all provided URLs are alive and playable in a browser\n- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)\n- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required\n\n### Region\n\nGermany\n\n### Description\n\ntrying to download the curernt andromenda series:\r\n`yt-dlp -F https://tele5.de/mediathek/gene-roddenberrys-andromeda/`\r\n`[Tele5] gene-roddenberrys-andromeda: Downloading webpage`\r\n`ERROR: gene-roddenberrys-andromeda: An extractor error has occurred. (caused by KeyError('assetid')); please report this issue on https://github.com/yt-dlp/yt-dlp , filling out the \"Broken site\" issue template properly. Confirm you are on the latest ver`\r\n\r\n\n\n### Verbose log\n\n```shell\nERROR: gene-roddenberrys-andromeda: An extractor error has occurred. (caused by KeyError('assetid')); please report this issue on https://github.com/yt-dlp/yt-dlp , filling out the \"Broken site\" issue template properly. 
Confirm you are on the latest version using yt-dlp -U\r\n File \"/usr/bin/yt-dlp/yt_dlp/extractor/common.py\", line 617, in extract\r\n ie_result = self._real_extract(url)\r\n File \"/usr/bin/yt-dlp/yt_dlp/extractor/tele5.py\", line 81, in _real_extract\r\n asset_id, country, realm = (player_info[x] for x in ('assetid', 'locale', 'realm', ))\r\n File \"/usr/bin/yt-dlp/yt_dlp/extractor/tele5.py\", line 81, in \r\n asset_id, country, realm = (player_info[x] for x in ('assetid', 'locale', 'realm', ))\r\nKeyError: 'assetid'\n```\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/yt-dlp/yt-dlp/commit/50e93e03a7ca6ae35a319ea310104f7d6d91eee3", "file_loc": {"base_commit": "50e93e03a7ca6ae35a319ea310104f7d6d91eee3", "files": [{"path": "yt_dlp/YoutubeDL.py", "status": "modified", "Loc": {}}, {"path": "yt_dlp/extractor/aliexpress.py", "status": "modified", "Loc": {"('AliExpressLiveIE', None, 12)": {"mod": [21]}}}, {"path": "yt_dlp/extractor/applepodcasts.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [5, 6]}, "('ApplePodcastsIE', None, 15)": {"add": [26], "mod": [17, 22, 24, 25, 42, 43, 44, 45, 46]}, "('ApplePodcastsIE', '_real_extract', 42)": {"add": [52, 61], "mod": [50, 56]}}}, {"path": "yt_dlp/extractor/arte.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [14]}, "('ArteTVPlaylistIE', '_real_extract', 230)": {"add": [255]}}}, {"path": "yt_dlp/extractor/audiomack.py", "status": "modified", "Loc": {"('AudiomackIE', None, 16)": {"add": [31]}}}, {"path": "yt_dlp/extractor/bbc.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [13]}, "('BBCIE', None, 604)": {"add": [786, 791], "mod": [796, 797, 799, 800, 801]}, "('BBCCoUkIE', None, 39)": {"mod": [41]}, "('BBCCoUkIE', '_process_media_selector', 363)": {"mod": [397, 398, 399]}, "('BBCIE', '_real_extract', 906)": {"mod": [1174, 1175, 1176]}, "('BBCIE', 'parse_media', 1206)": {"mod": [1217]}}}, {"path": "yt_dlp/extractor/bigo.py", "status": "modified", "Loc": {"('BigoIE', '_real_extract', 30)": {"add": [36], "mod": [39, 47]}}}, {"path": "yt_dlp/extractor/extractors.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [70, 93, 308]}}}, {"path": "yt_dlp/extractor/nuvid.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [2], "mod": [8]}, "('NuvidIE', None, 15)": {"add": [22, 25, 30, 48], "mod": [29]}, "('NuvidIE', '_real_extract', 53)": {"mod": [58, 59, 60, 61, 62, 63, 64, 70]}}}, {"path": "yt_dlp/extractor/rutv.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [9]}, "('RUTVIE', '_real_extract', 126)": {"mod": [182]}}}, {"path": "yt_dlp/extractor/streamcz.py", "status": "modified", "Loc": {"('StreamCZIE', None, 14)": {"add": [24], "mod": [34]}}}, {"path": "yt_dlp/extractor/tele5.py", "status": "modified", "Loc": {"('Tele5IE', None, 12)": {"add": [30], "mod": [16, 45, 67, 68, 70, 71, 73, 74, 75, 76, 78, 80, 81, 82, 83, 84, 86, 87, 88, 90, 91, 92, 93, 94, 95, 97, 98, 99, 101, 102, 104, 105, 106, 107, 108]}, "(None, None, None)": {"mod": [4, 6, 7, 8, 10, 11, 12]}}}, {"path": "yt_dlp/extractor/tv2dk.py", "status": "modified", "Loc": {"('TV2DKIE', '_real_extract', 79)": {"add": [98], "mod": [94]}, "('TV2DKIE', None, 16)": {"mod": [44, 45]}}}, {"path": "yt_dlp/extractor/uol.py", "status": "modified", "Loc": {"('UOLIE', '_real_extract', 67)": {"mod": [98]}}}, {"path": "yt_dlp/extractor/urplay.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [6, 7]}, "('URPlayIE', None, 16)": {"add": [28], "mod": [26, 53, 54, 55, 56, 57]}, 
"('URPlayIE', '_real_extract', 54)": {"add": [113], "mod": [75, 76, 77, 78, 79, 101]}}}, {"path": "yt_dlp/extractor/videa.py", "status": "modified", "Loc": {"('VideaIE', '_real_extract', 112)": {"mod": [149, 166, 167, 168]}}}, {"path": "yt_dlp/extractor/vimeo.py", "status": "modified", "Loc": {"('VimeoIE', None, 297)": {"add": [638]}}}, {"path": "yt_dlp/extractor/wdr.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [12, 24]}, "('WDRIE', None, 25)": {"add": [43], "mod": [31, 39]}, "('WDRPageIE', None, 139)": {"add": [209, 234], "mod": [173, 175, 177, 186, 194, 197, 248]}, "('WDRPageIE', '_real_extract', 258)": {"add": [273], "mod": [293, 295, 296, 299, 300, 301, 302]}, "('WDRElefantIE', '_real_extract', 324)": {"add": [336]}, "('WDRIE', '_real_extract', 47)": {"mod": [129, 130, 132]}}}, {"path": "yt_dlp/extractor/zdf.py", "status": "modified", "Loc": {"('ZDFIE', None, 136)": {"add": [138], "mod": [198, 199, 200, 201, 202, 203, 204]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "", "info_type": "Code"}, "loctype": {"code": ["yt_dlp/extractor/extractors.py", "yt_dlp/extractor/streamcz.py", "yt_dlp/extractor/bbc.py", "yt_dlp/extractor/zdf.py", "yt_dlp/extractor/tv2dk.py", "yt_dlp/extractor/rutv.py", "yt_dlp/extractor/aliexpress.py", "yt_dlp/extractor/wdr.py", "yt_dlp/extractor/videa.py", "yt_dlp/extractor/nuvid.py", "yt_dlp/extractor/arte.py", "yt_dlp/extractor/vimeo.py", "yt_dlp/extractor/urplay.py", "yt_dlp/extractor/bigo.py", "yt_dlp/YoutubeDL.py", "yt_dlp/extractor/applepodcasts.py", "yt_dlp/extractor/tele5.py", "yt_dlp/extractor/uol.py", "yt_dlp/extractor/audiomack.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "80e8493ee7c3083f4e215794e4a67ba5265f24f7", "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/2885", "iss_label": "site-request\npatch-available", "title": "Add Filmarkivet.se as a Supported Site", "body": "### Checklist\n\n- [X] I'm reporting a new site support request\n- [X] I've verified that I'm running yt-dlp version **2022.02.04**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))\n- [X] I've checked that all provided URLs are alive and playable in a browser\n- [X] I've checked that none of provided URLs [violate any copyrights](https://github.com/ytdl-org/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge\n- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and am willing to share it if required\n\n### Region\n\nUnited States\n\n### Example URLs\n\nhttps://www.filmarkivet.se/movies/paris-d-moll/\n\n### Description\n\nPlease add Filmarkivet.se as a supported site. I already watched the YouTube video \"The Secret Logos Of SF Studios (1919 - 1999)\" by CCGFilms, which has some SF Studios logos. 
I need to capture its logos.\n\n### Verbose log\n\n```shell\n[debug] Command-line config: ['-v', 'https://www.filmarkivet.se/movies/paris-d-moll/']\r\n[debug] Encodings: locale cp1252, fs utf-8, out utf-8 (No ANSI), err utf-8 (No ANSI), pref cp1252\r\n[debug] yt-dlp version 2022.02.04 [c1653e9] (win_exe)\r\n[debug] Python version 3.8.10 (CPython 64bit) - Windows-7-6.1.7601-SP1\r\n[debug] exe versions: ffmpeg N-105662-ge534d98af3-20220217 (setts), ffprobe N-105038-g30322ebe3c-sherpya\r\n[debug] Optional libraries: Cryptodome, mutagen, sqlite, websockets\r\n[debug] Proxy map: {}\r\n[debug] [generic] Extracting URL: https://www.filmarkivet.se/movies/paris-d-moll/\r\n[generic] paris-d-moll: Requesting header\r\nWARNING: [generic] Falling back on generic information extractor.\r\n[generic] paris-d-moll: Downloading webpage\r\nWARNING: [generic] URL could be a direct video link, returning it as such.\r\n[debug] Default format spec: bestvideo*+bestaudio/best\r\n[info] paris-d-moll: Downloading 1 format(s): 0\r\n[debug] Invoking downloader on \"https://www.filmarkivet.se/movies/paris-d-moll/\"\r\n[download] Destination: paris-d-moll [paris-d-moll].unknown_video\r\n[download] 100% of 373.64KiB in 00:01\n```\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/yt-dlp/yt-dlp/commit/80e8493ee7c3083f4e215794e4a67ba5265f24f7", "file_loc": {"base_commit": "80e8493ee7c3083f4e215794e4a67ba5265f24f7", "files": [{"path": "yt_dlp/extractor/generic.py", "status": "modified", "Loc": {"('GenericIE', None, 143)": {"add": [2529]}}}, {"path": "yt_dlp/utils.py", "status": "modified", "Loc": {"(None, 'is_html', 3283)": {"add": [3292], "mod": [3294, 3295, 3296, 3297, 3298]}, "(None, None, None)": {"mod": [3300]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["yt_dlp/utils.py", "yt_dlp/extractor/generic.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "5da08bde9e073987d1aae2683235721e4813f9c6", "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/5424", "iss_label": "site-enhancement", "title": "[VLIVE.TV] Extract release timestamp", "body": "### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE\n\n- [X] I understand that I will be **blocked** if I remove or skip any mandatory\\* field\n\n### Checklist\n\n- [X] I'm asking a question and **not** reporting a bug or requesting a feature\n- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)\n- [X] I've verified that I'm running yt-dlp version **2022.10.04** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)\n- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n\n### Please make sure the question is worded well enough to be understood\n\nIs there a way to change `upload_date` from UTC to a specific GMT? This video (https://www.vlive.tv/post/1-18318601) was posted on Nov. 
28, 2018 KST (Korean Standard Time) but yt-dlp downloads it as 20181127.\r\n\r\nI know you can prefer not to use UTC for YouTube videos but don't know how for other sites.\r\n\r\nHere is my command:\r\n`!yt-dlp -vU --embed-metadata --embed-thumbnail --merge-output-format \"mkv/mp4\" --write-subs --sub-langs all,-live_chat --embed-subs --compat-options no-keep-subs \"https://www.vlive.tv/post/1-18318601\" -o \"%(upload_date)s - %(creator)s - %(title)s.%(ext)s\" -P \"/content/drive/Shareddrives/VLIVE\" -P temp:\"/content/drive/Shareddrives/VLIVE/!temp\"`\r\n\r\nThanks!\n\n### Provide verbose output that clearly demonstrates the problem\n\n- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU `)\n- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below\n\n### Complete Verbose Output\n\n```shell\n[debug] Command-line config: ['-vU', '--embed-metadata', '--embed-thumbnail', '--merge-output-format', 'mkv/mp4', '--write-subs', '--sub-langs', 'all,-live_chat', '--embed-subs', '--compat-options', 'no-keep-subs', 'https://www.vlive.tv/post/1-18318601', '-o', '%(upload_date)s - %(creator)s - %(title)s.%(ext)s', '-P', '/content/drive/Shareddrives/VLIVE', '-P', 'temp:/content/drive/Shareddrives/VLIVE/!temp']\r\n[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out UTF-8, error UTF-8, screen UTF-8\r\n[debug] yt-dlp version 2022.10.04 [4e0511f27]\r\n[debug] Lazy loading extractors is disabled\r\n[debug] Compatibility options: no-keep-subs\r\n[debug] Python 3.7.15 (CPython 64bit) - Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic (glibc 2.26)\r\n[debug] Checking exe version: ffmpeg -bsfs\r\n[debug] Checking exe version: ffprobe -bsfs\r\n[debug] exe versions: ffmpeg 3.4.11, ffprobe 3.4.11\r\n[debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.09.24, mutagen-1.46.0, sqlite3-2.6.0, websockets-10.4\r\n[debug] Proxy map: {}\r\n[debug] Loaded 1706 extractors\r\n[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest\r\nLatest version: 2022.10.04, Current version: 2022.10.04\r\nyt-dlp is up to date (2022.10.04)\r\n[debug] [vlive:post] Extracting URL: https://www.vlive.tv/post/1-18318601\r\n[vlive:post] 1-18318601: Downloading post JSON metadata\r\n[debug] [vlive] Extracting URL: http://www.vlive.tv/video/101216\r\n[vlive] 101216: Downloading officialVideoPost JSON metadata\r\n[vlive] 101216: Downloading inkey JSON metadata\r\n[vlive] 101216: Downloading JSON metadata\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, filesize, fs_approx, tbr, vbr, abr, asr, proto, vext, aext, hasaud, source, id\r\n[info] 101216: Downloading subtitles: en_US, es_PA, es_ES, fr_FR, in_ID, pt_PT, vi_VN, jp, zh_CN, ko_KR\r\n[debug] Default format spec: bestvideo*+bestaudio/best\r\n[info] 101216: Downloading 1 format(s): avc1_720P\r\n[info] Writing video subtitles to: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.en_US.vtt\r\n[debug] Invoking http downloader on \"http://resources-rmcnmv.pstatic.net/globalv/c/read/v2/VOD_ALPHA/global_v_2018_12_06_3/aa339b4a-f89e-11e8-bc80-3ca82a21f531-1544022097899_en_US_cp.vtt\"\r\n[download] Destination: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! 
with \ub3c4\uc601.en_US.vtt\r\n[download] 100% of 55.30KiB in 00:00:00 at 154.65KiB/s\r\n[info] Writing video subtitles to: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.es_PA.vtt\r\n[debug] Invoking http downloader on \"http://resources-rmcnmv.pstatic.net/globalv/c/read/v2/VOD_ALPHA/global_v_2018_11_28_2/3b1a6fdd-f30e-11e8-8111-3ca82a220799-1543410308156_es_PA_cp.vtt\"\r\n[download] Destination: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.es_PA.vtt\r\n[download] 100% of 24.65KiB in 00:00:00 at 317.51KiB/s\r\n[info] Writing video subtitles to: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.es_ES.vtt\r\n[debug] Invoking http downloader on \"http://resources-rmcnmv.pstatic.net/globalv/c/read/v2/VOD_ALPHA/2020_11_16/a3003ab1-27e4-11eb-9a2e-0050569c085d-1605514850566_es_ES_fan.vtt\"\r\n[download] Destination: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.es_ES.vtt\r\n[download] 100% of 59.17KiB in 00:00:00 at 197.86KiB/s\r\n[info] Writing video subtitles to: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.fr_FR.vtt\r\n[debug] Invoking http downloader on \"http://resources-rmcnmv.pstatic.net/globalv/c/read/v2/VOD_ALPHA/2020_12_14/8d2ea1db-3dcf-11eb-9b2a-0050569c085d-1607924720110_fr_FR_fan.vtt\"\r\n[download] Destination: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.fr_FR.vtt\r\n[download] 100% of 55.22KiB in 00:00:00 at 528.98KiB/s\r\n[info] Writing video subtitles to: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.in_ID.vtt\r\n[debug] Invoking http downloader on \"http://resources-rmcnmv.pstatic.net/globalv/c/read/v2/VOD_ALPHA/global_v_2018_11_28_2/3ace9972-f30e-11e8-8606-3ca82a22c1e9-1543410307659_in_ID_cp.vtt\"\r\n[download] Destination: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.in_ID.vtt\r\n[download] 100% of 24.42KiB in 00:00:00 at 157.16KiB/s\r\n[info] Writing video subtitles to: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.pt_PT.vtt\r\n[debug] Invoking http downloader on \"http://resources-rmcnmv.pstatic.net/globalv/c/read/v2/VOD_ALPHA/global_v_2018_11_28_2/3b090ad1-f30e-11e8-9c04-3ca82a225339-1543410308041_pt_PT_cp.vtt\"\r\n[download] Destination: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! 
with \ub3c4\uc601.pt_PT.vtt\r\n[download] 100% of 25.00KiB in 00:00:00 at 267.88KiB/s\r\n[info] Writing video subtitles to: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.vi_VN.vtt\r\n[debug] Invoking http downloader on \"http://resources-rmcnmv.pstatic.net/globalv/c/read/v2/VOD_ALPHA/global_v_2018_12_07_4/dac66573-f9fc-11e8-98b0-3ca82a22d7a5-1544172503245_vi_VN_cp.vtt\"\r\n[download] Destination: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.vi_VN.vtt\r\n[download] 100% of 64.32KiB in 00:00:00 at 555.77KiB/s\r\n[info] Writing video subtitles to: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.jp.vtt\r\n[debug] Invoking http downloader on \"http://resources-rmcnmv.pstatic.net/globalv/c/read/v2/VOD_ALPHA/global_v_2018_11_28_2/3aee2f6b-f30e-11e8-bb16-3ca82a21e509-1543410307868_jp_cp.vtt\"\r\n[download] Destination: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.jp.vtt\r\n[download] 100% of 23.29KiB in 00:00:00 at 258.58KiB/s\r\n[info] Writing video subtitles to: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.zh_CN.vtt\r\n[debug] Invoking http downloader on \"http://resources-rmcnmv.pstatic.net/globalv/c/read/v2/VOD_ALPHA/global_v_2018_12_10_4/077db2f1-fc58-11e8-8818-3ca82a22c1e9-1544431564794_zh_CN_cp.vtt\"\r\n[download] Destination: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.zh_CN.vtt\r\n[download] 100% of 52.85KiB in 00:00:00 at 581.89KiB/s\r\n[info] Writing video subtitles to: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.ko_KR.vtt\r\n[debug] Invoking http downloader on \"http://resources-rmcnmv.pstatic.net/globalv/c/read/v2/VOD_ALPHA/global_v_2018_12_06_2/cc2e3feb-f921-11e8-8285-3ca82a2243c9-1544078418977_ko_KR_cp.vtt\"\r\n[download] Destination: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.ko_KR.vtt\r\n[download] 100% of 64.29KiB in 00:00:00 at 450.71KiB/s\r\n[info] Downloading video thumbnail 1 ...\r\n[info] Writing video thumbnail 1 to: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.png\r\n[download] /content/drive/Shareddrives/VLIVE/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.mp4 has already been downloaded\r\n[EmbedSubtitle] Embedding subtitles in \"/content/drive/Shareddrives/VLIVE/20181127 - NCT\uc758 night night! 
- \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.mp4\"\r\n[debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -i 'file:/content/drive/Shareddrives/VLIVE/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.mp4' -i 'file:/content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.en_US.vtt' -i 'file:/content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.es_PA.vtt' -i 'file:/content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.es_ES.vtt' -i 'file:/content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.fr_FR.vtt' -i 'file:/content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.in_ID.vtt' -i 'file:/content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.pt_PT.vtt' -i 'file:/content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.vi_VN.vtt' -i 'file:/content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.jp.vtt' -i 'file:/content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.zh_CN.vtt' -i 'file:/content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.ko_KR.vtt' -map 0 -dn -ignore_unknown -c copy -c:s mov_text -map -0:s -map 1:0 -metadata:s:s:0 language=eng -map 2:0 -metadata:s:s:1 language=spa -map 3:0 -metadata:s:s:2 language=spa -map 4:0 -metadata:s:s:3 language=fra -map 5:0 -metadata:s:s:4 language=ind -map 6:0 -metadata:s:s:5 language=por -map 7:0 -metadata:s:s:6 language=vie -map 8:0 -metadata:s:s:7 language=jp -map 9:0 -metadata:s:s:8 language=zho -map 10:0 -metadata:s:s:9 language=kor -movflags +faststart 'file:/content/drive/Shareddrives/VLIVE/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.temp.mp4'\r\nDeleting original file /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.en_US.vtt (pass -k to keep)\r\nDeleting original file /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! 
with \ub3c4\uc601.fr_FR.vtt (pass -k to keep)\r\nDeleting original file /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.es_PA.vtt (pass -k to keep)\r\nDeleting original file /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.jp.vtt (pass -k to keep)\r\nDeleting original file /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.ko_KR.vtt (pass -k to keep)\r\nDeleting original file /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.zh_CN.vtt (pass -k to keep)\r\nDeleting original file /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.in_ID.vtt (pass -k to keep)\r\nDeleting original file /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.pt_PT.vtt (pass -k to keep)\r\nDeleting original file /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.vi_VN.vtt (pass -k to keep)\r\nDeleting original file /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.es_ES.vtt (pass -k to keep)\r\n[Metadata] Adding metadata to \"/content/drive/Shareddrives/VLIVE/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.mp4\"\r\n[debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -i 'file:/content/drive/Shareddrives/VLIVE/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.mp4' -map 0 -dn -ignore_unknown -c copy -write_id3v1 1 -metadata 'title=\u2665\ub3c4\uc694\uc77c\u2665 12/1 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601' -metadata date=20181127 -metadata purl=http://www.vlive.tv/video/101216 -metadata comment=http://www.vlive.tv/video/101216 -metadata 'artist=NCT\uc758 night night!' -movflags +faststart 'file:/content/drive/Shareddrives/VLIVE/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.temp.mp4'\r\n[EmbedThumbnail] mutagen: Adding thumbnail to \"/content/drive/Shareddrives/VLIVE/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! 
with \ub3c4\uc601.mp4\"\n```\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/HHeroin/yt-dlp/commit/5da08bde9e073987d1aae2683235721e4813f9c6", "file_loc": {"base_commit": "5da08bde9e073987d1aae2683235721e4813f9c6", "files": [{"path": "yt_dlp/extractor/vlive.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [15]}, "('VLiveIE', None, 69)": {"add": [83, 100]}, "('VLiveIE', '_real_extract', 148)": {"add": [171]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["yt_dlp/extractor/vlive.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "51c22ef4e2af966d6100d0d97d9e8019022df8ad", "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/2996", "iss_label": "bug", "title": "'<' not supported between instances of 'float' and 'str' and --throttled-rate error after update?", "body": "### Checklist\n\n- [X] I'm reporting a bug unrelated to a specific site\n- [X] I've verified that I'm running yt-dlp version **2022.03.08.1**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))\n- [X] I've checked that all provided URLs are alive and playable in a browser\n- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)\n- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n\n### Description\n\nAfter update I get error\r\n\r\n'<' not supported between instances of 'float' and 'str'\r\n\r\nI found out that it is somewhat related to --throttled-rate setting? 
When I remove it I can download from YT no issues\r\nIf I leave it, I get the following message\r\n\r\n[download] 0.0% of 714.94MiB at 499.98KiB/s ETA 24:24ERROR: '<' not supported between instances of 'float' and 'str'\r\n\n\n### Verbose log\n\n```shell\nMicrosoft Windows [Version 6.1.7601]\r\n\r\n>yt-dlp https://www.youtube.com/watch?v=XUp9pe1T-UE --throttled-rate 999k\r\n[youtube] XUp9pe1T-UE: Downloading webpage\r\n[youtube] XUp9pe1T-UE: Downloading android player API JSON\r\n[info] XUp9pe1T-UE: Downloading 1 format(s): 571+251\r\nWARNING: Requested formats are incompatible for merge and will be merged into mkv\r\n[download] Destination: 8k VIDEOS _ Beauty of Nature 8K (60 FPS) HDR UltraHD _ Sony Demo [XUp9pe1T-UE].f571.mp4\r\n[download] 0.0% of 505.86MiB at 90.90KiB/s ETA 01:34:58ERROR: '<' not supported between instances of 'float' and 'str'\r\n\r\nyt>\n```\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/yt-dlp/yt-dlp/commit/51c22ef4e2af966d6100d0d97d9e8019022df8ad", "file_loc": {"base_commit": "51c22ef4e2af966d6100d0d97d9e8019022df8ad", "files": [{"path": "yt_dlp/__init__.py", "status": "modified", "Loc": {"(None, 'validate_options', 156)": {"mod": [258]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["yt_dlp/__init__.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "6f638d325e1878df304822c6bf4e231e06dae89a", "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/3467", "iss_label": "docs/meta/cleanup\nhigh-priority\nregression", "title": "Error since commit 43cc91a", "body": "### Checklist\n\n- [X] I'm reporting a bug unrelated to a specific site\n- [X] I've verified that I'm running yt-dlp version **2022.04.08** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)\n- [X] I've checked that all provided URLs are alive and playable in a browser\n- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)\n- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. 
DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n\n### Description\n\nAfter commit 43cc91a, I get the error shown in the verbose log.\n\n### Verbose log\n\n```shell\nyt-dlp -Uv\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.8/runpy.py\", line 194, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"/usr/lib/python3.8/runpy.py\", line 87, in _run_code\r\n exec(code, run_globals)\r\n File \"/usr/local/bin/yt-dlp/__main__.py\", line 13, in \r\n File \"\", line 259, in load_module\r\n File \"/usr/local/bin/yt-dlp/yt_dlp/__init__.py\", line 12, in \r\nModuleNotFoundError: No module named 'yt_dlp.compat'\n```\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/yt-dlp/yt-dlp/commit/6f638d325e1878df304822c6bf4e231e06dae89a", "file_loc": {"base_commit": "6f638d325e1878df304822c6bf4e231e06dae89a", "files": [{"path": "Makefile", "status": "modified", "Loc": {"(None, None, 61)": {"add": [61]}, "(None, None, 64)": {"mod": [64]}, "(None, None, 68)": {"mod": [68]}, "(None, None, 70)": {"mod": [70]}}}, {"path": "yt_dlp/extractor/anvato.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [7], "mod": [22, 23, 24, 25, 26, 27, 28, 29, 30]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["yt_dlp/extractor/anvato.py"], "doc": [], "test": [], "config": ["Makefile"], "asset": []}}, {"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "14a086058a30a0748b5b716e9b21481f993518f3", "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/1601", "iss_label": "site-bug", "title": "ARD:mediathek doesn't work anymore", "body": "### Checklist\n\n- [X] I'm reporting a broken site\n- [X] I've verified that I'm running yt-dlp version **2021.10.22**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))\n- [X] I've checked that all provided URLs are alive and playable in a browser\n- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)\n- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. 
DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required\n\n### Region\n\nGermany\n\n### Description\n\nDownloading from ARDmediathek dosen\u2019t work anymore\n\n### Verbose log\n\n```shell\n$ /repositories/yt-dlp/yt-dlp --no-config --verbose https://www.ardmediathek.de/video/tagesschau-oder-tagesschau-20-00-uhr/das-erste/Y3JpZDovL2Rhc2Vyc3RlLmRlL3RhZ2Vzc2NoYXUvZmM4ZDUxMjgtOTE0ZC00Y2MzLTgzNzAtNDZkNGNiZWJkOTll/\r\n[debug] Command-line config: ['--no-config', '--verbose', 'https://www.ardmediathek.de/video/tagesschau-oder-tagesschau-20-00-uhr/das-erste/Y3JpZDovL2Rhc2Vyc3RlLmRlL3RhZ2Vzc2NoYXUvZmM4ZDUxMjgtOTE0ZC00Y2MzLTgzNzAtNDZkNGNiZWJkOTll/']\r\n[debug] Encodings: locale UTF-8, fs utf-8, out utf-8, err utf-8, pref UTF-8\r\n[debug] yt-dlp version 2021.10.22 (zip)\r\n[debug] Lazy loading extractors is disabled\r\n[debug] Plugins: ['SamplePluginIE', 'SamplePluginPP']\r\n[debug] Python version 3.9.7 (CPython 64bit) - Linux-5.13.0-21-generic-x86_64-with-glibc2.34\r\n[debug] exe versions: ffmpeg 4.4 (setts), ffprobe 4.4, rtmpdump 2.4\r\n[debug] Optional libraries: Cryptodome, keyring, mutagen, sqlite\r\n[debug] Proxy map: {}\r\n[debug] Using fake IP 53.36.205.78 (DE) as X-Forwarded-For\r\n[debug] [ARD:mediathek] Extracting URL: https://www.ardmediathek.de/video/tagesschau-oder-tagesschau-20-00-uhr/das-erste/Y3JpZDovL2Rhc2Vyc3RlLmRlL3RhZ2Vzc2NoYXUvZmM4ZDUxMjgtOTE0ZC00Y2MzLTgzNzAtNDZkNGNiZWJkOTll/\r\n[ARD:mediathek] Y3JpZDovL2Rhc2Vyc3RlLmRlL3RhZ2Vzc2NoYXUvZmM4ZDUxMjgtOTE0ZC00Y2MzLTgzNzAtNDZkNGNiZWJkOTll: Downloading webpage\r\n[ARD:mediathek] 10049223: Downloading media JSON\r\nERROR: [ARD:mediathek] Unable to download JSON metadata: HTTP Error 404: Not Found (caused by ); please report this issue on https://github.com/yt-dlp/yt-dlp . Make sure you are using the latest version; type yt-dlp -U to update. 
Be sure to call yt-dlp with the --verbose flag and include its complete output.\r\n File \"/repositories/yt-dlp/yt-dlp/yt_dlp/extractor/common.py\", line 713, in _request_webpage\r\n return self._downloader.urlopen(url_or_request)\r\n File \"/repositories/yt-dlp/yt-dlp/yt_dlp/YoutubeDL.py\", line 3288, in urlopen\r\n return self._opener.open(req, timeout=self._socket_timeout)\r\n File \"/usr/lib/python3.9/urllib/request.py\", line 523, in open\r\n response = meth(req, response)\r\n File \"/usr/lib/python3.9/urllib/request.py\", line 632, in http_response\r\n response = self.parent.error(\r\n File \"/usr/lib/python3.9/urllib/request.py\", line 555, in error\r\n result = self._call_chain(*args)\r\n File \"/usr/lib/python3.9/urllib/request.py\", line 494, in _call_chain\r\n result = func(*args)\r\n File \"/usr/lib/python3.9/urllib/request.py\", line 747, in http_error_302\r\n return self.parent.open(new, timeout=req.timeout)\r\n File \"/usr/lib/python3.9/urllib/request.py\", line 523, in open\r\n response = meth(req, response)\r\n File \"/usr/lib/python3.9/urllib/request.py\", line 632, in http_response\r\n response = self.parent.error(\r\n File \"/usr/lib/python3.9/urllib/request.py\", line 561, in error\r\n return self._call_chain(*args)\r\n File \"/usr/lib/python3.9/urllib/request.py\", line 494, in _call_chain\r\n result = func(*args)\r\n File \"/usr/lib/python3.9/urllib/request.py\", line 641, in http_error_default\r\n raise HTTPError(req.full_url, code, msg, hdrs, fp)\n```\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/yt-dlp/yt-dlp/commit/14a086058a30a0748b5b716e9b21481f993518f3", "file_loc": {"base_commit": "14a086058a30a0748b5b716e9b21481f993518f3", "files": [{"path": "yt_dlp/extractor/ard.py", "status": "modified", "Loc": {"('ARDBetaMediathekIE', None, 390)": {"add": [405, 428], "mod": [391]}, "('ARDBetaMediathekIE', '_ARD_extract_playlist', 512)": {"mod": [528, 529, 530, 531, 532, 533, 534, 536, 537, 538, 539, 540, 541]}, "('ARDBetaMediathekIE', '_real_extract', 551)": {"mod": [577]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["yt_dlp/extractor/ard.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "comfyanonymous", "repo_name": "ComfyUI", "base_commit": "ab7d4f784892c275e888d71aa80a3a2ed59d9b83", "iss_html_url": "https://github.com/comfyanonymous/ComfyUI/issues/2019", "iss_label": "", "title": "[Bug] text of collapsed node still present ", "body": "On latest commit https://github.com/comfyanonymous/ComfyUI/commit/d66b631d74e6f6ac95c61c63d4a0da150bf74903.\r\nDragging the node also doesn't do anything until it's uncollapsed.\r\n\"Screenshot\r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/comfyanonymous/ComfyUI/commit/ab7d4f784892c275e888d71aa80a3a2ed59d9b83", "file_loc": {"base_commit": "ab7d4f784892c275e888d71aa80a3a2ed59d9b83", "files": [{"path": "web/scripts/domWidget.js", "status": "modified", "Loc": {"(None, None, None)": {"add": [235, 292]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["web/scripts/domWidget.js"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "3e589bf1356024fb471a9d17738e4626f21a953b", 
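A note on the ARD record above: the extractor died while "Downloading media JSON" because the site's API moved, so the request now 404s. The sketch below is illustrative only -- the endpoint template, helper name, and error wording are assumptions, not the actual fix that landed in yt_dlp/extractor/ard.py -- but it shows the failing step and why a 404 there is best surfaced as an "extractor is outdated" error rather than a bare HTTP error.

```python
# Hedged sketch of the "Downloading media JSON" step; the URL template below
# is an assumption for illustration, not yt-dlp's real endpoint.
import json
import urllib.error
import urllib.request

MEDIA_JSON_TEMPLATE = "https://api.ardmediathek.de/page-gateway/pages/ard/item/{id}"  # assumed

def download_media_json(display_id: str) -> dict:
    url = MEDIA_JSON_TEMPLATE.format(id=display_id)
    try:
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 404:
            # A 404 here usually means the site API changed, not a bad URL,
            # so report it as an extractor-maintenance problem.
            raise RuntimeError(
                f"media JSON for {display_id} not found; the site API has "
                "likely changed, so the extractor needs updating"
            ) from err
        raise
```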
"iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/1153", "iss_label": "bug\ntriage", "title": "Azure Deployment Name Bug", "body": "## Policy and info\r\n - Maintainers will close issues that have been stale for 14 days if they contain relevant answers.\r\n - Adding the label \"sweep\" will automatically turn the issue into a coded pull request. Works best for mechanical tasks. More info/syntax at: https://docs.sweep.dev/\r\n\r\n## Expected Behavior\r\n\r\nThere shouldn't be an error with the model name.\r\n\r\n## Current Behavior\r\n\r\n### Deployment name seems to mix with model name.\r\n\r\nEverything seems to work perfectly and code is being made:\r\n![image](https://github.com/gpt-engineer-org/gpt-engineer/assets/145611451/9fd4fcf5-d78e-4179-9406-a98867a9dfc1)\r\n\r\nBut then an error pops up telling me that the model doesn't exist and it takes my Azure OpenAI deployment name and says it's not a model.\r\n![image](https://github.com/gpt-engineer-org/gpt-engineer/assets/145611451/de5d275e-aa79-4d55-899e-ecf87d7a4261)\r\n\r\nHere is the command style I used following these instructions from here: https://gpt-engineer.readthedocs.io/en/latest/open_models.html\r\n![image](https://github.com/gpt-engineer-org/gpt-engineer/assets/145611451/987113ca-0616-4a38-9f35-ccec2cebda5d)\r\n\r\n`gpt-engineer --azure [redacted_endpoint_url] ./snake_game/ [redacted_deployment_name]`\r\n\r\n\r\n## Additional Failure Information\r\n\r\nUsing Azure OpenAI with gpt-4-turbo deployed with a different deployment name. Only installed gpt-engineer in a virtual environment.", "code": null, "pr_html_url": "https://github.com/AntonOsika/gpt-engineer/pull/1170", "commit_html_url": null, "file_loc": {"base_commit": "3e589bf1356024fb471a9d17738e4626f21a953b", "files": [{"path": ".github/CONTRIBUTING.md", "status": "modified", "Loc": {"(None, None, 114)": {"add": [114]}}}, {"path": "gpt_engineer/core/ai.py", "status": "modified", "Loc": {"('AI', '_create_chat_model', 330)": {"mod": [349]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["gpt_engineer/core/ai.py"], "doc": [".github/CONTRIBUTING.md"], "test": [], "config": [], "asset": []}}, {"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "c4c1203fc07b2e23c3e5a5e9277266a711ab9466", "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/35", "iss_label": "", "title": ".py files are not being created. I just get all_output.txt that I manually have to create from.", "body": "Hi, I absolutely love this script. This is the most accurate auto-GPT development script I have tried yet, it's so powerful!\r\n\r\nIn the demo video it shows the script creating each of the development files, in my case .py files within the workspace folder automatically. My build isn't doing this I just get an all_output.txt file with all .py files codes in one place and a single python file.\r\n\r\nHow do I ensure that GPT-Engineer automatically creates the .py files for me. 
Thanks", "code": null, "pr_html_url": "https://github.com/AntonOsika/gpt-engineer/pull/120", "commit_html_url": null, "file_loc": {"base_commit": "c4c1203fc07b2e23c3e5a5e9277266a711ab9466", "files": [{"path": "gpt_engineer/chat_to_files.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [4]}, "(None, 'parse_chat', 6)": {"add": [11], "mod": [6, 7, 8, 10, 13, 14, 15, 16, 17, 18, 19]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["gpt_engineer/chat_to_files.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "7b91676a0c2ccd4589a42f2cadbf1e69f93ad81b", "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/1128", "iss_label": "bug\ntriage", "title": "Applying diffs failing silently ", "body": "## Expected Behavior\r\n\r\nI would expect GPT engineer to either successfully apply all diffs sent by the AI or fail in a way that lets you know which diffs have been applied, which failed, and allows you to manually salvage the failed diff parts by copy and pasting \r\n\r\n## Current Behavior\r\n\r\nThe current behaviour seems to be that it applies the sections of the diff which it can and silently throws the rest of the code away. From a users perspective it seems like everything has gone well - but in reality its only applied a portion of the diff. \r\n\r\nThis is really bad from a usability perspective - for one, a partially applied diff is obviously never going to be working code so applying it is pointless. Also, the knowledge that this is the behaviour pf gpte means i need to manually check every single output to verify its applied the whole diff which is a complete waste of time for diffs which do apply succesfully. \r\n\r\nNot applying any of the diffs at all would actually be a better outcome for me, as at least i would have a consistent workflow of copy and pasting... 
however, a more sensible solution is to apply the diffs it can, and if it can't apply a diff for a file, not apply any change to that file at all, and instead provide an error output which is convenient for the user to copy and paste manually into the file \r\n\r\n### Failure Logs\r\nI can't upload failure logs as the code I'm working on is sensitive", "code": null, "pr_html_url": "https://github.com/AntonOsika/gpt-engineer/pull/1138", "commit_html_url": null, "file_loc": {"base_commit": "7b91676a0c2ccd4589a42f2cadbf1e69f93ad81b", "files": [{"path": "gpt_engineer/core/diff.py", "status": "modified", "Loc": {"('Diff', 'validate_and_correct', 340)": {"mod": [357]}}}, {"path": "tests/core/test_salvage_correct_hunks.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [82]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["gpt_engineer/core/diff.py"], "doc": [], "test": ["tests/core/test_salvage_correct_hunks.py"], "config": [], "asset": []}}, {"organization": "lllyasviel", "repo_name": "Fooocus", "base_commit": "f7bb578a1409b1f96aff534ff5ed2bd10502296f", "iss_html_url": "https://github.com/lllyasviel/Fooocus/issues/1527", "iss_label": "", "title": "Add copy to clipboard in plaintext for image details", "body": "Add copy to clipboard in plaintext for image details\r\n\r\nA button we can click to copy to clipboard all of the image details shown in the log output file. If not on the log page then on the app itself.\r\n\r\nThe quick copying of these settings enables us to share our work methods with others in the community more smoothly, thereby assisting them in a more efficient and effective way.\r\n\r\n![chrome_ybBu4Zoryf](https://github.com/lllyasviel/Fooocus/assets/57927413/a1ad7fa5-5a99-43e5-8420-e2c4aeb055de)\r\n\r\n![chrome_6zslF9Z3UD](https://github.com/lllyasviel/Fooocus/assets/57927413/ded36d98-377a-4130-b20f-01defbee1e6b)\r\n\r\nWhen I copy the text manually from the log file it looks like a garbled mess. See example below.\r\n\r\n```\r\nPrompt | Cute troll with fluffy long spiked hair wearing a ugly Christmas sweater. snow falling down and troll village in the background. full body\r\n-- | --\r\nNegative Prompt | \u00a0\r\nFooocus V2 Expansion | Cute troll with fluffy long spiked hair wearing a ugly Christmas sweater. snow falling down and troll village in the background. 
full body, intricate, elegant, highly detailed, sharp focus, illuminated, sunny, magical, scenic, artistic, true colors, deep aesthetic, very inspirational, cute, cozy, inspired, original, fine detail, professional, winning, enhanced, polished\r\nStyles | ['SAI Photographic', 'Fooocus V2', 'Artstyle Hyperrealism', 'MRE Artistic Vision']\r\nPerformance | Quality\r\nResolution | (1024, 1024)\r\nSharpness | 3\r\nGuidance Scale | 1.7\r\nADM Guidance | (1.5, 0.8, 0.3)\r\nBase Model | dreamshaperXL_turboDpmppSDEKarras.safetensors\r\nRefiner Model | None\r\nRefiner Switch | 0.5\r\nSampler | dpmpp_sde\r\nScheduler | karras\r\nSeed | 5044578018584347060\r\nVersion | v2.1.853\r\n```", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/lllyasviel/Fooocus/commit/f7bb578a1409b1f96aff534ff5ed2bd10502296f", "file_loc": {"base_commit": "f7bb578a1409b1f96aff534ff5ed2bd10502296f", "files": [{"path": "fooocus_version.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1]}}}, {"path": "modules/async_worker.py", "status": "modified", "Loc": {"(None, 'handler', 116)": {"mod": [400, 401, 780, 782]}}}, {"path": "modules/private_logger.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [3]}, "(None, 'log', 21)": {"add": [38, 61], "mod": [42, 60]}}}, {"path": "update_log.md", "status": "modified", "Loc": {"(None, None, 1)": {"add": [0]}}}, {"path": "webui.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [3, 14, 111, 512], "mod": [103]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["modules/private_logger.py", "webui.py", "modules/async_worker.py", "fooocus_version.py"], "doc": ["update_log.md"], "test": [], "config": [], "asset": []}}, {"organization": "lllyasviel", "repo_name": "Fooocus", "base_commit": "3a55e7e3910b8ae58f82a5a0e4c11d7d4fa3143f", "iss_html_url": "https://github.com/lllyasviel/Fooocus/issues/2561", "iss_label": "enhancement", "title": "[Feature Request]: Prompt embedded LoRAs", "body": "### Is there an existing issue for this?\r\n\r\n- [x] I have searched the existing issues and checked the recent builds/commits\r\n\r\n### What would your feature do?\r\n\r\nSimilar to how A1111 handles LoRAs by default, I believe there should be an option to embed LoRAs in the prompt by using the following structure:\r\n```csharp\r\n<lora:filename:weight>\r\n```\r\n\r\nThe current workflow works well, but has a few limitations, namely being able to use wildcards and LoRAs together for more dynamic prompts. Additionally, this feature already exists for embeddings, so I reckon adding it for LoRAs should be trivial.\r\n\r\n### Proposed workflow\r\n\r\n1. Enter LoRAs in the prompt using the `<lora:filename:weight>` structure\r\n2. 
Generate images, and LoRAs are loaded for each iteration\r\n\r\n### Additional information\r\n\r\n_No response_", "code": null, "pr_html_url": "https://github.com/lllyasviel/Fooocus/pull/2323", "commit_html_url": null, "file_loc": {"base_commit": "3a55e7e3910b8ae58f82a5a0e4c11d7d4fa3143f", "files": [{"path": "modules/async_worker.py", "status": "modified", "Loc": {"(None, 'handler', 134)": {"add": [435], "mod": [155, 453, 454, 655, 865, 908, 912]}, "(None, 'worker', 19)": {"mod": [47, 50, 51, 72]}, "(None, 'callback', 806)": {"mod": [810]}}}, {"path": "modules/config.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [23], "mod": [11]}}}, {"path": "modules/sdxl_styles.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [5, 7, 12]}, "(None, 'apply_wildcards', 68)": {"mod": [68, 69, 70, 71, 72, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 91, 92, 95]}, "(None, 'get_words', 95)": {"mod": [104]}}}, {"path": "modules/util.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [8, 16], "mod": [1]}, "(None, 'get_files_from_folder', 166)": {"mod": [166, 167, 168, 170, 172, 173, 174, 175, 176, 177, 178, 179, 180, 182]}, "('PromptStyle', None, 358)": {"mod": [358]}, "(None, 'get_enabled_loras', 396)": {"mod": [397]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["modules/async_worker.py", "modules/sdxl_styles.py", "modules/config.py", "modules/util.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "lllyasviel", "repo_name": "Fooocus", "base_commit": "8e62a72a63b30a3067d1a1bc3f8d226824bd9283", "iss_html_url": "https://github.com/lllyasviel/Fooocus/issues/1671", "iss_label": "bug (AMD)", "title": "Cannot use image prompts", "body": "I am trying to use 2x images as an image prompt but when I press generate this is what I'm getting (I can generate just fine without image prompts):\r\n\r\nFull console log:\r\n\r\n[Parameters] Adaptive CFG = 7\r\n[Parameters] Sharpness = 3\r\n[Parameters] ADM Scale = 1.5 : 0.8 : 0.3\r\n[Parameters] CFG = 1.5\r\n[Parameters] Seed = 953753918774495193\r\n[Fooocus] Downloading control models ...\r\n[Fooocus] Loading control models ...\r\n[Parameters] Sampler = dpmpp_2m_sde_gpu - karras\r\n[Parameters] Steps = 6 - 30\r\n[Fooocus] Initializing ...\r\n[Fooocus] Loading models ...\r\nRefiner unloaded.\r\nmodel_type EPS\r\nUNet ADM Dimension 2816\r\nUsing split attention in VAE\r\nWorking with z of shape (1, 4, 32, 32) = 4096 dimensions.\r\nUsing split attention in VAE\r\nextra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.text_projection'}\r\nBase model loaded: H:\\Programs\\Fooocus_win64_2-1-831\\Fooocus\\models\\checkpoints\\realisticStockPhoto_v10.safetensors\r\nRequest to load LoRAs [['None', 0.25], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [H:\\Programs\\Fooocus_win64_2-1-831\\Fooocus\\models\\checkpoints\\realisticStockPhoto_v10.safetensors].\r\nRequested to load SDXLClipModel\r\nLoading 1 new model\r\n[Fooocus] Processing prompts ...\r\n[Fooocus] Encoding positive #1 ...\r\n[Fooocus] Encoding negative #1 ...\r\n[Fooocus] Image processing ...\r\nTraceback (most recent call last):\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\Fooocus\\modules\\async_worker.py\", line 806, in worker\r\n handler(task)\r\n File 
\"H:\\Programs\\Fooocus_win64_2-1-831\\python_embeded\\lib\\site-packages\\torch\\utils\\_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\python_embeded\\lib\\site-packages\\torch\\utils\\_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\Fooocus\\modules\\async_worker.py\", line 647, in handler\r\n task[0] = ip_adapter.preprocess(cn_img, ip_adapter_path=ip_adapter_path)\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\python_embeded\\lib\\site-packages\\torch\\utils\\_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\python_embeded\\lib\\site-packages\\torch\\utils\\_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\Fooocus\\extras\\ip_adapter.py\", line 185, in preprocess\r\n cond = image_proj_model.model(cond).to(device=ip_adapter.load_device, dtype=ip_adapter.dtype)\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\python_embeded\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\Fooocus\\extras\\resampler.py\", line 117, in forward\r\n latents = attn(x, latents) + latents\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\python_embeded\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\Fooocus\\extras\\resampler.py\", line 55, in forward\r\n latents = self.norm2(latents)\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\python_embeded\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\python_embeded\\lib\\site-packages\\torch\\nn\\modules\\normalization.py\", line 190, in forward\r\n return F.layer_norm(\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\python_embeded\\lib\\site-packages\\torch\\nn\\functional.py\", line 2515, in layer_norm\r\n return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)\r\nRuntimeError: Expected all tensors to be on the same device, but found at least two devices, privateuseone:0 and cpu!\r\nTotal time: 37.40 seconds\r\n\r\n", "code": null, "pr_html_url": "https://github.com/lllyasviel/Fooocus/pull/1678", "commit_html_url": null, "file_loc": {"base_commit": "8e62a72a63b30a3067d1a1bc3f8d226824bd9283", "files": [{"path": "extras/ip_adapter.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [10], "mod": [5]}, "(None, 'load_ip_adapter', 90)": {"mod": [119, 120, 121, 122, 123, 124, 125, 126]}}}, {"path": "fooocus_version.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["fooocus_version.py", "extras/ip_adapter.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "lllyasviel", "repo_name": "Fooocus", "base_commit": "d57afc88a48359bc1642c2ae30a091f0426eff43", "iss_html_url": "https://github.com/lllyasviel/Fooocus/issues/1063", "iss_label": "", "title": "Faceswap crashes ", "body": "**Describe the problem**\r\nThe program crashes when 
trying to use an image as prompt and selecting the faceswap advanced option\r\n\r\n**Full Console Log**\r\nRequirement already satisfied: pygit2==1.12.2 in /usr/local/lib/python3.10/dist-packages (1.12.2)\r\nRequirement already satisfied: cffi>=1.9.1 in /usr/local/lib/python3.10/dist-packages (from pygit2==1.12.2) (1.16.0)\r\nRequirement already satisfied: pycparser in /usr/local/lib/python3.10/dist-packages (from cffi>=1.9.1->pygit2==1.12.2) (2.21)\r\n/content\r\nfatal: destination path 'Fooocus' already exists and is not an empty directory.\r\n/content/Fooocus\r\nAlready up-to-date\r\nUpdate succeeded.\r\n[System ARGV] ['entry_with_update.py', '--preset', 'realistic', '--share']\r\nLoaded preset: /content/Fooocus/presets/realistic.json\r\nPython 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]\r\nFooocus version: 2.1.824\r\nRunning on local URL: http://127.0.0.1:7865/\r\nRunning on public URL: https://fb6371be5d9ced0c1d.gradio.live/\r\n\r\nThis share link expires in 72 hours. For free permanent hosting and GPU upgrades, run `gradio deploy` from Terminal to deploy to Spaces (https://huggingface.co/spaces)\r\nTotal VRAM 15102 MB, total RAM 12983 MB\r\n2023-11-29 21:03:50.202601: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:9342] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\r\n2023-11-29 21:03:50.202658: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\r\n2023-11-29 21:03:50.202708: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\r\n2023-11-29 21:03:52.244376: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT\r\nSet vram state to: NORMAL_VRAM\r\nDisabling smart memory management\r\nDevice: cuda:0 Tesla T4 : native\r\nVAE dtype: torch.float32\r\nUsing pytorch cross attention\r\nRefiner unloaded.\r\nmodel_type EPS\r\nadm 2816\r\nUsing pytorch attention in VAE\r\nWorking with z of shape (1, 4, 32, 32) = 4096 dimensions.\r\nUsing pytorch attention in VAE\r\nextra keys {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.logit_scale'}\r\nBase model loaded: /content/Fooocus/models/checkpoints/realisticStockPhoto_v10.safetensors\r\nRequest to load LoRAs [['SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors', 0.25], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [/content/Fooocus/models/checkpoints/realisticStockPhoto_v10.safetensors].\r\nLoaded LoRA [/content/Fooocus/models/loras/SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors] for UNet [/content/Fooocus/models/checkpoints/realisticStockPhoto_v10.safetensors] with 788 keys at weight 0.25.\r\nLoaded LoRA [/content/Fooocus/models/loras/SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors] for CLIP [/content/Fooocus/models/checkpoints/realisticStockPhoto_v10.safetensors] with 264 keys at weight 0.25.\r\nFooocus V2 Expansion: Vocab with 642 words.\r\nFooocus Expansion engine loaded for cuda:0, use_fp16 = True.\r\nRequested to load SDXLClipModel\r\nRequested to load GPT2LMHeadModel\r\nLoading 2 new models\r\n[Fooocus Model Management] Moving model(s) has taken 1.30 seconds\r\nApp started successful. 
Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865 or https://fb6371be5d9ced0c1d.gradio.live/\r\n[Parameters] Adaptive CFG = 7\r\n[Parameters] Sharpness = 2\r\n[Parameters] ADM Scale = 1.5 : 0.8 : 0.3\r\n[Parameters] CFG = 3.0\r\n[Parameters] Seed = 604471590939558783\r\n[Parameters] Sampler = dpmpp_2m_sde_gpu - karras\r\n[Parameters] Steps = 60 - 30\r\n[Fooocus] Initializing ...\r\n[Fooocus] Loading models ...\r\nRefiner unloaded.\r\n[Fooocus] Processing prompts ...\r\n[Fooocus] Preparing Fooocus text #1 ...\r\n[Prompt Expansion] Portrait of a young man on the beach, full light, gorgeous, amazing, elegant, intricate, highly detailed, dynamic, rich deep vivid colors, beautiful, very inspirational, inspiring, thought, fancy, sharp focus, colorful, epic, professional, artistic, new, charismatic, cool, brilliant, awesome, attractive, shiny, fine detail, pretty, focused, creative\r\n[Fooocus] Preparing Fooocus text #2 ...\r\n[Prompt Expansion] Portrait of a young man on the beach, full pretty, attractive, fine detail, intricate, elegant, luxury, elite, dramatic light, highly detailed, cinematic, complex, sharp focus, illuminated, amazing, marvelous, thought, epic, fabulous, colorful, shiny, brilliant, symmetry, great, excellent composition, ambient, dynamic, vibrant colors, relaxed, beautiful\r\n[Fooocus] Encoding positive #1 ...\r\n[Fooocus Model Management] Moving model(s) has taken 0.11 seconds\r\n[Fooocus] Encoding positive #2 ...\r\n[Fooocus] Encoding negative #1 ...\r\n[Fooocus] Encoding negative #2 ...\r\n[Parameters] Denoising Strength = 1.0\r\n[Parameters] Initial Latent shape: Image Space (1152, 896)\r\nPreparation time: 3.60 seconds\r\n[Sampler] refiner_swap_method = joint\r\n[Sampler] sigma_min = 0.0291671771556139, sigma_max = 14.614643096923828\r\nRequested to load SDXL\r\nLoading 1 new model\r\n[Fooocus Model Management] Moving model(s) has taken 2.40 seconds\r\n100% 60/60 [00:55<00:00, 1.09it/s]\r\nImage generated with private log at: /content/Fooocus/outputs/2023-11-29/log.html\r\nGenerating and saving time: 60.73 seconds\r\n[Sampler] refiner_swap_method = joint\r\n[Sampler] sigma_min = 0.0291671771556139, sigma_max = 14.614643096923828\r\nRequested to load SDXL\r\nLoading 1 new model\r\n[Fooocus Model Management] Moving model(s) has taken 2.01 seconds\r\n100% 60/60 [00:56<00:00, 1.06it/s]\r\nImage generated with private log at: /content/Fooocus/outputs/2023-11-29/log.html\r\nGenerating and saving time: 61.85 seconds\r\nRequested to load SDXLClipModel\r\nRequested to load GPT2LMHeadModel\r\nLoading 2 new models\r\n[Fooocus Model Management] Moving model(s) has taken 1.57 seconds\r\nTotal time: 131.21 seconds\r\n[Parameters] Adaptive CFG = 7\r\n[Parameters] Sharpness = 2\r\n[Parameters] ADM Scale = 1.5 : 0.8 : 0.3\r\n[Parameters] CFG = 3.0\r\n[Parameters] Seed = 7513856776859948774\r\n[Fooocus] Downloading control models ...\r\n[Fooocus] Loading control models ...\r\nextra keys clip vision: ['vision_model.embeddings.position_ids']\r\n", "code": null, "pr_html_url": "https://github.com/lllyasviel/Fooocus/pull/1710", "commit_html_url": null, "file_loc": {"base_commit": "d57afc88a48359bc1642c2ae30a091f0426eff43", "files": [{"path": "fooocus_colab.ipynb", "status": "modified", "Loc": {"(None, None, 15)": {"mod": [15]}}}, {"path": "readme.md", "status": "modified", "Loc": {"(None, None, 127)": {"add": [127]}, "(None, None, 118)": {"mod": [118]}, "(None, None, 124)": {"mod": [124]}}}, {"path": "ldm_patched/modules/args_parser.py", "Loc": {"(None, None, None)": {"mod": [99]}}, 
"base_commit": "cca0ca704a713ab153938e78de6787609c723cad"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["fooocus_colab.ipynb", "ldm_patched/modules/args_parser.py"], "doc": ["readme.md"], "test": [], "config": [], "asset": []}}, {"organization": "odoo", "repo_name": "odoo", "base_commit": "72ec0050b442214c9be93907fc01a48832243c15", "iss_html_url": "https://github.com/odoo/odoo/issues/7306", "iss_label": "", "title": "[v8.0] Bank statement : Customer Import invoice wizard do not auto-fill the right field", "body": "Step to reproduce:\n\ncreate a customer invoice\ncreate a new bank statement and import this invoice\nclick on 'Reconcile'\nProblem: No match proposition between the bank statement line and the invoice move line can be found since the communication field is '/'. (The invoice number is in the field 'Reference' instead)\n\nSo please the ref must go to communication\n\nThanks\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/odoo/odoo/commit/72ec0050b442214c9be93907fc01a48832243c15", "file_loc": {"base_commit": "72ec0050b442214c9be93907fc01a48832243c15", "files": [{"path": "addons/account/account_bank_statement.py", "status": "modified", "Loc": {"('account_bank_statement_line', 'get_reconciliation_proposition', 537)": {"mod": [575]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["addons/account/account_bank_statement.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "197287fc303119bf71caf9b3f72280cab08da749", "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/1147", "iss_label": "", "title": "[Bug]: \u7ffb\u8bd1arxiv\u6587\u6863\u62a5\u9519\uff0c\u65e0\u8bba\u672c\u5730\u81ea\u5df1\u642d\u5efa\u8fd8\u662f\u5b98\u65b9\u5728\u7ebf\u5747\u62a5\u9519", "body": "### Installation Method | \u5b89\u88c5\u65b9\u6cd5\u4e0e\u5e73\u53f0\n\nOneKeyInstall (\u4e00\u952e\u5b89\u88c5\u811a\u672c-windows)\n\n### Version | \u7248\u672c\n\nLatest | \u6700\u65b0\u7248\n\n### OS | \u64cd\u4f5c\u7cfb\u7edf\n\nWindows\n\n### Describe the bug | \u7b80\u8ff0\n\n\u5b98\u65b9\u5728\u7ebf\u7248\u62a5\u9519\u4ee3\u7801\u5982\u4e0b\uff1a\r\n\r\n> Local Message] \u5b9e\u9a8c\u6027\u51fd\u6570\u8c03\u7528\u51fa\u9519:\r\n> \r\n> Traceback (most recent call last):\r\n> File \"./toolbox.py\", line 165, in decorated\r\n> yield from f(main_input, llm_kwargs, plugin_kwargs, chatbot_with_cookie, history, *args, **kwargs)\r\n> File \"./crazy_functions/Latex\u8f93\u51faPDF\u7ed3\u679c.py\", line 249, in Latex\u7ffb\u8bd1\u4e2d\u6587\u5e76\u91cd\u65b0\u7f16\u8bd1PDF\r\n> txt, arxiv_id = yield from arxiv_download(chatbot, history, txt)\r\n> File \"./crazy_functions/Latex\u8f93\u51faPDF\u7ed3\u679c.py\", line 141, in arxiv_download\r\n> extract_archive(file_path=dst, dest_dir=extract_dst)\r\n> File \"./toolbox.py\", line 507, in extract_archive\r\n> with tarfile.open(file_path, 'r:*') as tarobj:\r\n> File \"/usr/lib/python3.8/tarfile.py\", line 1608, in open\r\n> raise ReadError(\"file could not be opened successfully\")\r\n> tarfile.ReadError: file could not be opened successfully\r\n> \r\n> \u5f53\u524d\u4ee3\u7406\u53ef\u7528\u6027:\r\n> \r\n> \u4ee3\u7406\u914d\u7f6e socks5h://localhost:7890, 
proxy location: Japan\r\n\r\nThe locally built version reports the following error:\r\n\r\n> [Local Message] Experimental function call failed:\r\n> \r\n> Traceback (most recent call last):\r\n> File \".\\toolbox.py\", line 150, in decorated\r\n> yield from f(main_input, llm_kwargs, plugin_kwargs, chatbot_with_cookie, history, *args, **kwargs)\r\n> File \".\\crazy_functions\\Latex\u8f93\u51faPDF\u7ed3\u679c.py\", line 250, in Latex\u7ffb\u8bd1\u4e2d\u6587\u5e76\u91cd\u65b0\u7f16\u8bd1PDF\r\n> txt, arxiv_id = yield from arxiv_download(chatbot, history, txt, allow_cache)\r\n> File \".\\crazy_functions\\Latex\u8f93\u51faPDF\u7ed3\u679c.py\", line 139, in arxiv_download\r\n> extract_archive(file_path=dst, dest_dir=extract_dst)\r\n> File \".\\toolbox.py\", line 461, in extract_archive\r\n> with tarfile.open(file_path, 'r:*') as tarobj:\r\n> File \"D:\\academic-gpt\\installer_files\\env\\lib\\tarfile.py\", line 1811, in open\r\n> raise ReadError(f\"file could not be opened successfully:\\n{error_msgs_summary}\")\r\n> tarfile.ReadError: file could not be opened successfully:\r\n> - method gz: ReadError('invalid header')\r\n> - method bz2: ReadError('not a bzip2 file')\r\n> - method xz: ReadError('not an lzma file')\r\n> - method tar: ReadError('invalid header')\r\n> \r\n> Current proxy availability:\r\n> \r\n> Proxy config socks5h://127.0.0.1:12341, proxy location: Hong Kong - Cloudflare, Inc.\r\n\r\nThe address of the arxiv document being translated is: https://arxiv.org/abs/2112.10551\n\n### Screen Shot\n\n![msedge_HmO7O9M6OT](https://github.com/binary-husky/gpt_academic/assets/10786234/51e6ff95-9b95-47cd-b671-322aa1808389)\r\n\n\n### Terminal Traceback & Material to Help Reproduce Bugs\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/binary-husky/gpt_academic/commit/197287fc303119bf71caf9b3f72280cab08da749", "file_loc": {"base_commit": "197287fc303119bf71caf9b3f72280cab08da749", "files": [{"path": "shared_utils/handle_upload.py", "status": "modified", "Loc": {"(None, 'extract_archive', 91)": {"mod": [107, 108, 109, 110, 111, 112, 113, 114, 116, 117]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["shared_utils/handle_upload.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "65317e33af87640b68c84c9f6ee67188b76c6d7a", "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/558", "iss_label": "", "title": "Could EdgeGPT be used to support calling the Microsoft Bing API?", "body": "Devs, please take a look at this project: https://github.com/acheong08/EdgeGPT\r\nIf the Bing API, or future third-party APIs from Baidu, Alibaba and the like, could be called conveniently, it would be a blessing for those who have no openAI-key and also cannot deploy GLM locally", "code": null, "pr_html_url": null, "commit_html_url": 
"https://github.com/binary-husky/gpt_academic/commit/65317e33af87640b68c84c9f6ee67188b76c6d7a", "file_loc": {"base_commit": "65317e33af87640b68c84c9f6ee67188b76c6d7a", "files": [{"path": "config.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [65], "mod": [47, 48]}}}, {"path": "request_llm/bridge_all.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [21, 119]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["request_llm/bridge_all.py", "config.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "e359fff0405c4cb865b809b4ecfc0a95a54d2512", "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/1554", "iss_label": "", "title": "[Bug]: docker\u5b89\u88c5\u7248\u672c\u9002\u914dspark api\u62a5\u9519", "body": "### Installation Method | \u5b89\u88c5\u65b9\u6cd5\u4e0e\u5e73\u53f0\n\nDocker-Compose\uff08Windows/Mac\uff09\n\n### Version | \u7248\u672c\n\nLatest | \u6700\u65b0\u7248\n\n### OS | \u64cd\u4f5c\u7cfb\u7edf\n\nMac\n\n### Describe the bug | \u7b80\u8ff0\n\n\u5728mac\u672c\u5730\u4f7f\u7528conda\u5b89\u88c5\u65b9\u5f0f\uff0c\u9002\u914dspark api\u53ef\u4ee5\u6b63\u5e38\u8fd0\u884c\u3002\u4f46\u662f\u901a\u8fc7docker compose\u65b9\u5f0f\u5b89\u88c5\u4e4b\u540e\u901a\u8fc7spark api\u4f1a\u51fa\u73b0\u62a5\u9519\uff0c\u4e0d\u8fc7\u5343\u5e06api\u5219\u53ef\u4ee5\u6b63\u5e38\u4f7f\u7528\n\n### Screen Shot | \u6709\u5e2e\u52a9\u7684\u622a\u56fe\n\n\r\n\"Snipaste_2024-02-14_21-12-27\"\r\n\r\n\n\n### Terminal Traceback & Material to Help Reproduce Bugs | \u7ec8\u7aeftraceback\uff08\u5982\u6709\uff09 + \u5e2e\u52a9\u6211\u4eec\u590d\u73b0\u7684\u6d4b\u8bd5\u6750\u6599\u6837\u672c\uff08\u5982\u6709\uff09\n\ngpt_academic_nolocalllms-1 | error: Connection to remote host was lost.\r\ngpt_academic_nolocalllms-1 | Exception ignored in thread started by: .run at 0x2aaaf7fdfa60>\r\ngpt_academic_nolocalllms-1 | Traceback (most recent call last):\r\ngpt_academic_nolocalllms-1 | File \"/gpt/request_llms/com_sparkapi.py\", line 113, in run\r\ngpt_academic_nolocalllms-1 | ws.send(data)\r\ngpt_academic_nolocalllms-1 | File \"/usr/local/lib/python3.11/site-packages/websocket/_app.py\", line 284, in send\r\ngpt_academic_nolocalllms-1 | raise WebSocketConnectionClosedException(\"Connection is already closed.\")\r\ngpt_academic_nolocalllms-1 | websocket._exceptions.WebSocketConnectionClosedException: Connection is already closed.\r\ngpt_academic_nolocalllms-1 | Traceback (most recent call last):\r\ngpt_academic_nolocalllms-1 | File \"/usr/local/lib/python3.11/site-packages/gradio/routes.py\", line 422, in run_predict\r\ngpt_academic_nolocalllms-1 | output = await app.get_blocks().process_api(\r\ngpt_academic_nolocalllms-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\ngpt_academic_nolocalllms-1 | File \"/usr/local/lib/python3.11/site-packages/gradio/blocks.py\", line 1323, in process_api\r\ngpt_academic_nolocalllms-1 | result = await self.call_function(\r\ngpt_academic_nolocalllms-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^\r\ngpt_academic_nolocalllms-1 | File \"/usr/local/lib/python3.11/site-packages/gradio/blocks.py\", line 1067, in call_function\r\ngpt_academic_nolocalllms-1 | prediction = await utils.async_iteration(iterator)\r\ngpt_academic_nolocalllms-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\ngpt_academic_nolocalllms-1 | File 
\"/usr/local/lib/python3.11/site-packages/gradio/utils.py\", line 336, in async_iteration\r\ngpt_academic_nolocalllms-1 | return await iterator.__anext__()\r\ngpt_academic_nolocalllms-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^\r\ngpt_academic_nolocalllms-1 | File \"/usr/local/lib/python3.11/site-packages/gradio/utils.py\", line 329, in __anext__\r\ngpt_academic_nolocalllms-1 | return await anyio.to_thread.run_sync(\r\ngpt_academic_nolocalllms-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\ngpt_academic_nolocalllms-1 | File \"/usr/local/lib/python3.11/site-packages/anyio/to_thread.py\", line 56, in run_sync\r\ngpt_academic_nolocalllms-1 | return await get_async_backend().run_sync_in_worker_thread(\r\ngpt_academic_nolocalllms-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\ngpt_academic_nolocalllms-1 | File \"/usr/local/lib/python3.11/site-packages/anyio/_backends/_asyncio.py\", line 2134, in run_sync_in_worker_thread\r\ngpt_academic_nolocalllms-1 | return await future\r\ngpt_academic_nolocalllms-1 | ^^^^^^^^^^^^\r\ngpt_academic_nolocalllms-1 | File \"/usr/local/lib/python3.11/site-packages/anyio/_backends/_asyncio.py\", line 851, in run\r\ngpt_academic_nolocalllms-1 | result = context.run(func, *args)\r\ngpt_academic_nolocalllms-1 | ^^^^^^^^^^^^^^^^^^^^^^^^\r\ngpt_academic_nolocalllms-1 | File \"/usr/local/lib/python3.11/site-packages/gradio/utils.py\", line 312, in run_sync_iterator_async\r\ngpt_academic_nolocalllms-1 | return next(iterator)\r\ngpt_academic_nolocalllms-1 | ^^^^^^^^^^^^^^\r\ngpt_academic_nolocalllms-1 | File \"/gpt/toolbox.py\", line 115, in decorated\r\ngpt_academic_nolocalllms-1 | yield from f(txt_passon, llm_kwargs, plugin_kwargs, chatbot_with_cookie, history, system_prompt, *args)\r\ngpt_academic_nolocalllms-1 | File \"/gpt/request_llms/bridge_all.py\", line 765, in predict\r\ngpt_academic_nolocalllms-1 | yield from method(inputs, llm_kwargs, *args, **kwargs)\r\ngpt_academic_nolocalllms-1 | File \"/gpt/request_llms/bridge_spark.py\", line 60, in predict\r\ngpt_academic_nolocalllms-1 | if response == f\"[Local Message] \u7b49\u5f85{model_name}\u54cd\u5e94\u4e2d ...\":\r\ngpt_academic_nolocalllms-1 | ^^^^^^^^\r\ngpt_academic_nolocalllms-1 | UnboundLocalError: cannot access local variable 'response' where it is not associated with a value\r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/binary-husky/gpt_academic/commit/e359fff0405c4cb865b809b4ecfc0a95a54d2512", "file_loc": {"base_commit": "e359fff0405c4cb865b809b4ecfc0a95a54d2512", "files": [{"path": "request_llms/bridge_qianfan.py", "status": "modified", "Loc": {"(None, 'predict', 135)": {"add": [148, 151], "mod": [161, 162, 163, 164, 165, 166]}}}, {"path": "request_llms/bridge_qwen.py", "status": "modified", "Loc": {"(None, 'predict', 25)": {"add": [53]}}}, {"path": "request_llms/bridge_skylark2.py", "status": "modified", "Loc": {"(None, 'predict', 32)": {"add": [58]}}}, {"path": "request_llms/bridge_spark.py", "status": "modified", "Loc": {"(None, 'predict', 36)": {"add": [54]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["request_llms/bridge_qwen.py", "request_llms/bridge_qianfan.py", "request_llms/bridge_skylark2.py", "request_llms/bridge_spark.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "c17fc2a9b55b1c7447718a06a3eac4378828bb22", "iss_html_url": 
"https://github.com/binary-husky/gpt_academic/issues/1021", "iss_label": "waiting feedback", "title": "[Feature]: \u901a\u4e49\u5343\u95ee\u7684\u6a21\u578b\u5f00\u6e90\u4e86,\u5efa\u8bae\u52a0\u5165.", "body": "### Class | \u7c7b\u578b\n\nNone\n\n### Feature Request | \u529f\u80fd\u8bf7\u6c42\n\n\u9644\uff1a\u5f00\u6e90\u5730\u5740\r\n\r\n\u9b54\u642dModelScope\uff1a\r\n\r\nhttps://modelscope.cn/models/qwen/Qwen-7B/summary\r\n\r\nhttps://modelscope.cn/models/qwen/Qwen-7B-Chat/summary\r\n\r\nHugging Face\uff1ahttps://huggingface.co/Qwen\r\n\r\nGitHub\uff1ahttps://github.com/QwenLM/Qwen-7B", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/binary-husky/gpt_academic/commit/c17fc2a9b55b1c7447718a06a3eac4378828bb22", "file_loc": {"base_commit": "c17fc2a9b55b1c7447718a06a3eac4378828bb22", "files": [{"path": "config.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [74]}}}, {"path": "request_llm/bridge_all.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [337]}}}, {"path": "request_llm/bridge_qwen.py", "status": "modified", "Loc": {"('GetONNXGLMHandle', 'load_model_and_tokenizer', 26)": {"mod": [35, 37, 38, 39, 40]}, "('GetONNXGLMHandle', None, 19)": {"mod": [43, 57, 58]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["request_llm/bridge_all.py", "request_llm/bridge_qwen.py", "config.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "19bd0c35ed05e6f99c8e3c0a8c994b1385341cae", "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/1053", "iss_label": "ToDo", "title": "[Bug]: \u672c\u5730\u7ffb\u8bd1Latex\u51fa\u9519", "body": "### Installation Method | \u5b89\u88c5\u65b9\u6cd5\u4e0e\u5e73\u53f0\n\nPip Install (I used latest requirements.txt)\n\n### Version | \u7248\u672c\n\nLatest | \u6700\u65b0\u7248\n\n### OS | \u64cd\u4f5c\u7cfb\u7edf\n\nWindows\n\n### Describe the bug | \u7b80\u8ff0\n\n* \u95ee\u9898\uff1a\u627e\u4e0d\u5230\u6240\u8c13\u7684\u201cfp\u201d\uff08\u6587\u4ef6\u6307\u9488\uff09\r\n![image](https://github.com/binary-husky/gpt_academic/assets/62052010/40949aca-4ebf-4b24-ade6-a8423654b228)\r\n\r\n\r\n* stack\uff1a\u8fd9\u91cc\u662f\u5c06\u5728**tex\u6587\u4ef6\u5408\u5e76\uff08merge_tex_files\uff09** \u51fd\u6570\u4e2d\u7684\u4e00\u4e2a\u5b50\u51fd\u6570\u7684\u8c03\u7528\uff08merge_tex_files_\uff09\uff0c\u4e3b\u8981\u4f5c\u7528\u5c31\u662f\u5c06\u539f\u59cbtex\u4e2d\u7684\\input\u547d\u4ee4\u5185\u5bb9\u8fdb\u884c\u5408\u5e76\uff0c\u4f46\u5b9e\u9645\u8fc7\u7a0b\u4e2d\u5b58\u5728\u4e00\u4e2a\u95ee\u9898\uff0c\u901a\u8fc7debug\u627e\u5230\uff0c\u5177\u4f53\u7684debug\u4ee3\u7801\uff08\u4e5f\u5c31\u52a0\u4e86\u70b9print\uff09\u548c\u7ed3\u679c\u56fe\u9644\u5728\u4e86\u4e0b\u9762\r\n\r\n```python\r\ndef merge_tex_files_(project_foler, main_file, mode):\r\n \"\"\"\r\n Merge Tex project recrusively\r\n \"\"\"\r\n main_file = rm_comments(main_file)\r\n for s in reversed([q for q in re.finditer(r\"\\\\input\\{(.*?)\\}\", main_file, re.M)]):\r\n f = s.group(1)\r\n fp = os.path.join(project_foler, f)\r\n fp = find_tex_file_ignore_case(fp)\r\n if fp:\r\n with open(fp, 'r', encoding='utf-8', errors='replace') as fx: c = fx.read()\r\n else:\r\n raise RuntimeError(f'\u627e\u4e0d\u5230{fp}\uff0cTex\u6e90\u6587\u4ef6\u7f3a\u5931\uff01')\r\n c = merge_tex_files_(project_foler, c, mode)\r\n main_file 
= main_file[:s.span()[0]] + c + main_file[s.span()[1]:]\r\n return main_file\r\n```\r\n\r\n**Debug code**\r\n```python\r\ndef merge_tex_files_(project_foler, main_file, mode):\r\n \"\"\"\r\n Merge Tex project recrusively\r\n \"\"\"\r\n\r\n ## === AAS ADDED FOR TEST ===\r\n print('======== IN merge_tex_files_(SUB FUN) Function ===========')\r\n # print('project_foler:{}\\nmain_file:{}\\nmode:{}'.format(project_foler,main_file,mode))\r\n ## ===\r\n\r\n main_file = rm_comments(main_file)\r\n for s in reversed([q for q in re.finditer(r\"\\\\input\\{(.*?)\\}\", main_file, re.M)]):\r\n ## === AAS ADDED FOR TEST ===\r\n print('======== IN LOOP of merge_tex_files_(SUB FUN)===========')\r\n print(\"s:\",s)\r\n ## === AAS ADDED FOR TEST ===\r\n\r\n f = s.group(1)\r\n\r\n ## === AAS ADDED FOR TEST ===\r\n print(\"f:\",f)\r\n ## === AAS ADDED FOR TEST ===\r\n\r\n fp = os.path.join(project_foler, f)\r\n\r\n ## === AAS ADDED FOR TEST ===\r\n print(\"fp1:\",fp)\r\n ## === AAS ADDED FOR TEST ===\r\n\r\n fp = find_tex_file_ignore_case(fp)\r\n\r\n ## === AAS ADDED FOR TEST ===\r\n print(\"fp2:\",fp)\r\n ## === AAS ADDED FOR TEST === \r\n\r\n if fp:\r\n with open(fp, 'r', encoding='utf-8', errors='replace') as fx: c = fx.read()\r\n else:\r\n raise RuntimeError(f'\u627e\u4e0d\u5230{fp}\uff0cTex\u6e90\u6587\u4ef6\u7f3a\u5931\uff01')\r\n c = merge_tex_files_(project_foler, c, mode)\r\n main_file = main_file[:s.span()[0]] + c + main_file[s.span()[1]:]\r\n return main_file\r\n```\r\n\r\n**Result screenshot**\r\n![image](https://github.com/binary-husky/gpt_academic/assets/62052010/a0e9b9c7-e416-4396-8ff8-5321807d23f7)\r\n\r\n**Code of the failing part**\r\n```python\r\ndef find_tex_file_ignore_case(fp):\r\n dir_name = os.path.dirname(fp)\r\n base_name = os.path.basename(fp)\r\n\r\n ## === AAS ADDED FOR TEST ===\r\n print('============ IN find_tex_file_ignore_case Fun ==========')\r\n print('dir_name:',dir_name)\r\n print('base_name',base_name)\r\n ## === AAS ADDED FOR TEST ===\r\n\r\n ## The problem is that a bbl file is being imported, not a tex file; try removing the tex restriction\r\n if not base_name.endswith('.tex'): base_name+='.tex'\r\n ## === AAS ADDED FOR TEST ===\r\n \r\n if os.path.exists(pj(dir_name, base_name)): return pj(dir_name, base_name)\r\n # go case in-sensitive\r\n import glob\r\n for f in glob.glob(dir_name+'/*.tex'):\r\n base_name_s = os.path.basename(fp)\r\n if base_name_s.lower() == base_name.lower(): return f\r\n return None\r\n```\r\n\r\n* Actual cause of the error: simply put, this spot **only considered tex merging** (the `find_tex_file_ignore_case` function); cases **such as `bbl`** (another citation format that can be inlined directly into the tex for compilation, fairly primitive but compact, slightly different from an ordinary `.bib` used via `references`) **were not considered**, which caused tex files to go missing during the merge\r\n\r\n* 
Improvement suggestion: for \input, it is usually indeed only tex, and using the find_tex_file_ignore_case function is fine, but other cases could be considered, such as plain text (.txt) or other code (`.c, .cpp, .py`, etc.),\r\n\r\n- Approach: **drop the tex restriction entirely**; just insert whatever is referenced and hand it to the tex compiler. That is what actually happens anyway, so there is no need to restrict \input insertion to tex only at this point. If there really is an error, letting tex report it in its output is enough. Code-wise, commenting out the following line is sufficient\r\n```python\r\nif not base_name.endswith('.tex'): base_name+='.tex'\r\n```\r\n\r\n\r\n* P.S. Tracking this down was quite a bit of work, heh; at first glance it was unclear what was going on, but it is actually a small problem\r\n\r\nAfter commenting it out, it works normally for now\n\n### Screen Shot\n\nSee the Describe the bug part\n\n### Terminal Traceback & Material to Help Reproduce Bugs\n\nSee the Describe the bug part", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/binary-husky/gpt_academic/commit/19bd0c35ed05e6f99c8e3c0a8c994b1385341cae", "file_loc": {"base_commit": "19bd0c35ed05e6f99c8e3c0a8c994b1385341cae", "files": [{"path": "crazy_functions/latex_fns/latex_toolbox.py", "status": "modified", "Loc": {"(None, 'find_tex_file_ignore_case', 281)": {"add": [283], "mod": [286]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["crazy_functions/latex_fns/latex_toolbox.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "e24f077b68e38b679e5ca25853ea2c402f074ea3", "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/1120", "iss_label": "", "title": "[Feature]: Hope an azure openai gpt4 model option can be added", "body": "### Class\n\nMain program\n\n### Feature Request\n\nAs titled", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/binary-husky/gpt_academic/commit/e24f077b68e38b679e5ca25853ea2c402f074ea3", "file_loc": {"base_commit": "e24f077b68e38b679e5ca25853ea2c402f074ea3", "files": [{"path": "config.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [83]}}}, {"path": "request_llm/bridge_all.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [147]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["request_llm/bridge_all.py", "config.py"], "doc": [], "test": [], "config": [], "asset": []}}, 
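The latex_toolbox record above traces the failure to find_tex_file_ignore_case forcing a `.tex` suffix onto every \input target, so an \input{refs.bbl} can never resolve. Below is a minimal sketch of the reporter's suggested fix using a simplified stand-in -- the function name and the glob fallback are illustrative, not the project's exact code:

```python
# Sketch: resolve an \input target without forcing a .tex suffix, so
# \input{refs.bbl} (or .txt, .c, .py files) is found as written.
# Simplified stand-in, not gpt_academic's actual find_tex_file_ignore_case.
import glob
import os

def find_input_file_ignore_case(fp):
    dir_name = os.path.dirname(fp)
    base_name = os.path.basename(fp)
    # First try the path exactly as written (covers .bbl and other extensions).
    candidate = os.path.join(dir_name, base_name)
    if os.path.exists(candidate):
        return candidate
    # Fall back to a case-insensitive .tex match, as the original code did.
    if not base_name.endswith(".tex"):
        base_name += ".tex"
    for f in glob.glob(os.path.join(dir_name, "*.tex")):
        if os.path.basename(f).lower() == base_name.lower():
            return f
    return None
```

Trying the literal path first keeps non-tex includes working while preserving the old case-insensitive .tex fallback, which matches the reporter's "drop the restriction, let tex complain" reasoning.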
{"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "a799f769e4c48908c3efd64792384403392f2e82", "iss_html_url": "https://github.com/deepfakes/faceswap/issues/67", "iss_label": "", "title": "Cluster faces during extract using dlib.chinese_whispers_clustering", "body": "I have had some success hacking together a pre-processing script to run over my training images. It uses [dlib.chinese_whispers_clustering](http://dlib.net/python/index.html#dlib.chinese_whispers_clustering) to group the found faces in the training data based on likeness. I think one of the keys to good results is good training sets, and this helps to prevent polluting the training data with other peoples faces as tends to be the case with Google image search sets or images with multiple faces.\r\n\r\nThere are a couple of ways I think this could be integrated into the project:\r\n\r\n1) during extract when generating face chips, discard non target faces (all faces not in the largest cluster)\r\n2) during convert where frames have multiple faces, identifying only the target face for replacement.\r\n\r\nHere's [the script](https://gist.github.com/badluckwiththinking/92dd6f155bc8babca6422b08b642d35d), sorry its a bit hacky, I just wanted something that worked and haven't cleaned it up. I'm not sure where I would begin to integrate it into the project, perhaps as an alternative plugin?\r\n\r\n", "code": null, "pr_html_url": "https://github.com/deepfakes/faceswap/pull/61", "commit_html_url": null, "file_loc": {"base_commit": "a799f769e4c48908c3efd64792384403392f2e82", "files": [{"path": "Dockerfile", "status": "modified", "Loc": {"(None, None, 14)": {"add": [14]}, "(None, None, 10)": {"mod": [10, 11, 12]}, "(None, None, 16)": {"mod": [16]}, "(None, None, 18)": {"mod": [18]}}}, {"path": "faceswap.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [2, 8, 17, 18, 19, 20]}}}, {"path": "lib/DetectedFace.py", "status": "removed", "Loc": {}}, {"path": "lib/aligner.py", "status": "modified", "Loc": {"(None, 'get_align_mat', 25)": {"mod": [26]}}}, {"path": "lib/cli.py", "status": "modified", "Loc": {"('DirectoryProcessor', 'process_arguments', 34)": {"add": [47], "mod": [49, 51]}, "(None, None, None)": {"mod": [5]}, "('DirectoryProcessor', 'process_directory', 51)": {"mod": [56, 59]}, "('DirectoryProcessor', None, 14)": {"mod": [62]}}}, {"path": "lib/faces_detect.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0], "mod": [3, 4, 28]}, "(None, 'detect_faces', 6)": {"mod": [9, 11, 12, 13, 14, 15, 16]}}}, {"path": "lib/model.py", "status": "removed", "Loc": {}}, {"path": "lib/training_data.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [2, 3, 5], "mod": [45]}, "(None, 'get_training_data', 13)": {"mod": [13, 14, 15, 16, 17, 18, 20, 21, 22, 23, 24, 26, 27, 29]}, "(None, 'random_warp', 47)": {"mod": [49]}}}, {"path": "lib/utils.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [16], "mod": [1, 2]}, "(None, 'get_folder', 8)": {"mod": [10]}, "(None, 'load_images', 18)": {"mod": [18, 19, 20, 21, 22, 23, 24, 25, 26]}}}, {"path": "plugins/Convert_Adjust.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0]}, "('Convert', None, 5)": {"mod": [6, 7]}, "('Convert', 'patch_image', 12)": {"mod": [21]}}}, {"path": "plugins/Convert_Masked.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0], "mod": [6]}, "('Convert', None, 8)": {"mod": [9, 10]}, "('Convert', 'get_new_face', 51)": {"mod": [54]}, "('Convert', 'get_image_mask', 58)": {"mod": 
[67]}}}, {"path": "plugins/Extract_Align.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0]}, "('Extract', 'extract', 6)": {"add": [7]}}}, {"path": "plugins/Extract_Crop.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0]}}}, {"path": "plugins/PluginLoader.py", "status": "modified", "Loc": {"('PluginLoader', None, 2)": {"mod": [4, 5, 6, 9, 10, 11, 14, 15]}}}, {"path": "scripts/convert.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [4, 64], "mod": [7, 8, 9]}, "('ConvertImage', 'process_image', 38)": {"add": [48], "mod": [42, 43, 44, 45, 47, 50, 51, 52, 53, 54, 57]}, "('ConvertImage', None, 13)": {"mod": [38, 39, 40]}}}, {"path": "scripts/extract.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [5]}, "('ExtractTrainingData', None, 8)": {"mod": [18, 19]}, "('ExtractTrainingData', 'process_image', 18)": {"mod": [22, 23, 24, 25, 26, 28, 29, 30, 31]}}}, {"path": "scripts/train.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [10], "mod": [5, 6, 8, 9]}, "('TrainingProcessor', 'process_arguments', 18)": {"mod": [24, 25, 26, 27, 28, 29, 30]}, "('TrainingProcessor', None, 12)": {"mod": [89, 90, 91, 92, 93, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 107, 108, 109, 111, 113, 114, 115, 116]}, "('TrainingProcessor', 'process', 118)": {"mod": [119, 122, 123, 125, 127, 129, 131, 132, 133, 134, 135, 136, 138, 139, 140, 142, 143, 144, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["lib/aligner.py", "lib/model.py", "lib/training_data.py", "plugins/Convert_Adjust.py", "plugins/Extract_Align.py", "plugins/Extract_Crop.py", "scripts/train.py", "faceswap.py", "plugins/PluginLoader.py", "plugins/Convert_Masked.py", "lib/DetectedFace.py", "lib/faces_detect.py", "lib/utils.py", "lib/cli.py", "scripts/convert.py", "scripts/extract.py"], "doc": [], "test": [], "config": ["Dockerfile"], "asset": []}}, {"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "f5dd18352c6640bc5c39a01642c7ac7356c0dea1", "iss_html_url": "https://github.com/deepfakes/faceswap/issues/718", "iss_label": "bug", "title": "[Windows] cuda_path was not set if success on first check.", "body": "**Describe the bug**\r\nsetup.py file:\r\ncuDNN was not detected if `cuda_check` success in first check using \"nvcc -V\" because of `self.env.cuda_path` not set\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1, run `python setup.py` on windows 10 environment\r\n\r\n**Expected behavior**\r\ndetect cuDNN lib\r\n\r\n**Screenshots**\r\nIf applicable, add screenshots to help explain your problem.\r\n\r\n**Desktop (please complete the following information):**\r\n - OS: Windows 10\r\n\r\n**Additional context**\r\nI temporary disable first method to check CUDA so it working for now.\r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/deepfakes/faceswap/commit/f5dd18352c6640bc5c39a01642c7ac7356c0dea1", "file_loc": {"base_commit": "f5dd18352c6640bc5c39a01642c7ac7356c0dea1", "files": [{"path": "lib/gpu_stats.py", "status": "modified", "Loc": {"('GPUStats', 'initialize', 64)": {"mod": [92]}}}, {"path": "setup.py", "status": "modified", "Loc": {"('Checks', None, 314)": {"add": [353]}, "('Checks', 'cudnn_check', 458)": {"add": [459]}, "('Install', 'ask_continue', 542)": {"add": [543]}, "('Checks', 'cuda_check_linux', 423)": 
{"mod": [442, 443, 444]}, "('Checks', 'cuda_check_windows', 445)": {"mod": [451]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["setup.py", "lib/gpu_stats.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "dea984efc1c720832d7c32513c806b4b67cc6560", "iss_html_url": "https://github.com/deepfakes/faceswap/issues/590", "iss_label": "", "title": "Disable logging", "body": "In previous commits before the logging implementation, multiple GPUS were able to run different tasks simultaneously ( extract/train/convert ).\r\n\r\nAfter the logging commit, only 1 task can be run due to the log file being in use by the first process.\r\n\r\nIs there an option to disable logging or specify a log file instead?", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/deepfakes/faceswap/commit/dea984efc1c720832d7c32513c806b4b67cc6560", "file_loc": {"base_commit": "dea984efc1c720832d7c32513c806b4b67cc6560", "files": [{"path": "lib/cli.py", "status": "modified", "Loc": {"('ScriptExecutor', 'execute_script', 83)": {"mod": [85]}, "('DirOrFileFullPaths', None, 150)": {"mod": [150]}, "('FaceSwapArgs', 'get_global_arguments', 265)": {"mod": [274, 275, 276, 277]}}}, {"path": "lib/gui/utils.py", "status": "modified", "Loc": {"('FileHandler', '__init__', 36)": {"mod": [48, 49, 50, 51, 57, 58]}, "('ContextMenu', None, 332)": {"mod": [334]}}}, {"path": "lib/logger.py", "status": "modified", "Loc": {"(None, 'log_setup', 71)": {"mod": [71, 79]}, "(None, 'file_handler', 89)": {"mod": [89, 91, 92, 93]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["lib/cli.py", "lib/gui/utils.py", "lib/logger.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "3b1b", "repo_name": "manim", "base_commit": "b4b4d39ec51cbfce7fafdc5ff0f9f4ddfd26b181", "iss_html_url": "https://github.com/3b1b/manim/issues/1436", "iss_label": "bug", "title": "PNG images have a black background (no transparency)", "body": "### Description\r\nWhen trying do display a png image(with transparent background), it shows the background as black, didn't encouter the issue when trying with the cairo renderer.\r\n\r\n**Code**:\r\n```python\r\n img = ImageMobject(\"./dice.png\")\r\n self.play(FadeIn(img))\r\n```\r\n\r\n\r\n### Results\r\n\"result\"\r\n\r\n# Original image\r\n![dice](https://user-images.githubusercontent.com/38077008/110259246-8fdb3400-7f9e-11eb-992f-b658762c5830.png)\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/3b1b/manim/commit/b4b4d39ec51cbfce7fafdc5ff0f9f4ddfd26b181", "file_loc": {"base_commit": "b4b4d39ec51cbfce7fafdc5ff0f9f4ddfd26b181", "files": [{"path": "manimlib/shaders/image/frag.glsl", "status": "modified", "Loc": {"(None, None, 12)": {"mod": [12]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["manimlib/shaders/image/frag.glsl"]}}, {"organization": "3b1b", "repo_name": "manim", "base_commit": "e1c049bece420bc1190eb3ed4d5d9878c431aa5e", "iss_html_url": "https://github.com/3b1b/manim/issues/394", 
"iss_label": "", "title": "import readline is failing", "body": "I am trying to run examples_scenes.py and it threw a ModuleNotFoundError when it tried to import readline. This should be easy to resolve - just pip install readline right? Nope. readline apparently doesn't work on Windows, and I got this strange follow-up error below. I don't know what to do at this point. Help?\r\n\r\n\r\nc:\\Tensorexperiments\\manim>python -m manim example_scenes.py SquareToCircle -pl\r\nTraceback (most recent call last):\r\n File \"C:\\Program Files\\Python36\\lib\\runpy.py\", line 193, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"C:\\Program Files\\Python36\\lib\\runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"c:\\Tensorexperiments\\manim\\manim.py\", line 4, in \r\n import manimlib.stream_starter\r\n File \"c:\\Tensorexperiments\\manim\\manimlib\\stream_starter.py\", line 4, in \r\n import readline\r\nModuleNotFoundError: No module named 'readline'\r\n\r\nc:\\Tensorexperiments\\manim>pip install readline\r\nCollecting readline\r\n Downloading https://files.pythonhosted.org/packages/f4/01/2cf081af8d880b44939a5f1b446551a7f8d59eae414277fd0c303757ff1b/readline-6.2.4.1.tar.gz (2.3MB)\r\n 100% |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 2.3MB 8.5MB/s\r\n Complete output from command python setup.py egg_info:\r\n error: this module is not meant to work on Windows\r\nCommand \"python setup.py egg_info\" failed with error code 1 in C:\\Users\\SAMERN~1\\AppData\\Local\\Temp\\pip-install-z8maklzo\\readline\\", "code": null, "pr_html_url": "https://github.com/3b1b/manim/pull/672", "commit_html_url": null, "file_loc": {"base_commit": "e1c049bece420bc1190eb3ed4d5d9878c431aa5e", "files": [{"path": "requirements.txt", "status": "modified", "Loc": {"(None, None, 11)": {"add": [11]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}}, {"organization": "All-Hands-AI", "repo_name": "OpenHands", "base_commit": "660d1d1e64c5e28e96bf9b8172cd87d1d809fd07", "iss_html_url": "https://github.com/All-Hands-AI/OpenHands/issues/5876", "iss_label": "bug\nseverity:medium", "title": "[Bug]: \"The model produces invalid content\"", "body": "### Is there an existing issue for the same bug?\r\n\r\n- [X] I have checked the existing issues.\r\n\r\n### Describe the bug and reproduction steps\r\n\r\nhttps://www.all-hands.dev/share?share_id=dab4a77e7d64e7a4dc6124dc672d3f4beb2d411a33155977425b821e292d4f4c\r\nThe LLM is `gpt-4o`\r\nIn the logs I got\r\n```yaml\r\n{'error': {'message': 'The model produced invalid content. 
Consider modifying your prompt if you are seeing this error persistently.', 'type': 'model_error', 'param': None, 'code': None}}\r\n```\r\n\r\n### OpenHands Installation\r\n\r\nDocker command in README\r\n\r\n### OpenHands Version\r\n\r\n0.17\r\n\r\n### Operating System\r\n\r\nWindows\r\n\r\n### Logs, Errors, Screenshots, and Additional Context\r\n\r\n_No response_", "code": null, "pr_html_url": "https://github.com/All-Hands-AI/OpenHands/pull/7045", "commit_html_url": null, "file_loc": {"base_commit": "660d1d1e64c5e28e96bf9b8172cd87d1d809fd07", "files": [{"path": "openhands/llm/llm.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [78]}, "('LLM', 'wrapper', 180)": {"mod": [220, 221, 222]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["openhands/llm/llm.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "7e8453cf1ec992e5df5cebfeda08552c58e7c9bc", "iss_html_url": "https://github.com/scrapy/scrapy/issues/2656", "iss_label": "", "title": "sos filepipelines 302", "body": "hi\r\n\r\n when i setting file_urls \"http://m.baidu.com/api?action=redirect&token=kpyysd&from=1014090y&type=app&dltype=new&refid=2650327114&tj=soft_5845028_88031597_%E8%AF%AD%E9%9F%B3%E6%90%9C%E7%B4%A2&refp=action_search&blink=da5b687474703a2f2f7265736765742e39312e636f6d2f536f66742f436f6e74726f6c6c65722e617368783f616374696f6e3d646f776e6c6f61642674706c3d312669643d34313034393931c658&crversion=1\"\r\n \r\n this url redirect 3 times so when i use scrap download it the scrapy retrun 302 how can i setting it can working ? please help me!\r\n![qq 20170316182328](https://cloud.githubusercontent.com/assets/3350372/23991950/1795e90a-0a76-11e7-9b19-4128bfdb3914.png)\r\n\r\n \r\n ", "code": null, "pr_html_url": "https://github.com/scrapy/scrapy/pull/2616", "commit_html_url": null, "file_loc": {"base_commit": "7e8453cf1ec992e5df5cebfeda08552c58e7c9bc", "files": [{"path": "docs/topics/media-pipeline.rst", "status": "modified", "Loc": {"(None, None, 324)": {"add": [324]}}}, {"path": "scrapy/pipelines/files.py", "status": "modified", "Loc": {"('FilesPipeline', '__init__', 226)": {"mod": [252]}}}, {"path": "scrapy/pipelines/media.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [2, 7]}, "('MediaPipeline', None, 16)": {"add": [29, 95], "mod": [27]}, "('MediaPipeline', '_check_media_to_download', 96)": {"mod": [106]}}}, {"path": "tests/mockserver.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [7, 10, 14, 122]}, "('Root', '__init__', 152)": {"add": [162]}}}, {"path": "tests/test_pipeline_media.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [8, 84]}, "('BaseMediaPipelineTestCase', None, 22)": {"add": [24]}, "('MediaPipelineTestCase', 'test_use_media_to_download_result', 245)": {"add": [251]}, "('BaseMediaPipelineTestCase', 'setUp', 26)": {"mod": [28]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["scrapy/pipelines/media.py", "scrapy/pipelines/files.py", "tests/mockserver.py"], "doc": ["docs/topics/media-pipeline.rst"], "test": ["tests/test_pipeline_media.py"], "config": [], "asset": []}}, {"organization": "ansible", "repo_name": "ansible", "base_commit": "cc00f21a358923c03e334e245d58df0853d10661", "iss_html_url": 
"https://github.com/ansible/ansible/issues/57069", "iss_label": "networking\nmodule\nsupport:network\nnxos\nbug\naffects_2.7\ncisco", "title": "nxos_vpc breaks using default vrf", "body": "##### SUMMARY\r\nWhen using pkl_vrf\": \"default\" command is missing vrf value\r\n\r\n##### ISSUE TYPE\r\n- Bug Report\r\n\r\n##### COMPONENT NAME\r\nModule: nxos_vpc\r\n\r\n##### ANSIBLE VERSION\r\n```\r\nansible 2.7.2\r\n config file = /etc/ansible/ansible.cfg\r\n configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']\r\n ansible python module location = /usr/lib/python2.7/dist-packages/ansible\r\n executable location = /usr/bin/ansible\r\n python version = 2.7.15rc1 (default, Nov 12 2018, 14:31:15) [GCC 7.3.0]\r\n```\r\n\r\n##### CONFIGURATION\r\nasterisk due privacy\r\n```\r\nCACHE_PLUGIN(/etc/ansible/ansible.cfg) = jsonfile\r\nCACHE_PLUGIN_CONNECTION(/etc/ansible/ansible.cfg) = /**/config/ansible/facts\r\nDEFAULT_HOST_LIST(/etc/ansible/ansible.cfg) = [u'/**/config/ansible/**/hosts.yml']\r\nDISPLAY_SKIPPED_HOSTS(/etc/ansible/ansible.cfg) = True\r\nHOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False\r\nRETRY_FILES_ENABLED(/etc/ansible/ansible.cfg) = False\r\n```\r\n\r\n##### OS / ENVIRONMENT\r\n\r\n\r\n\r\n##### STEPS TO REPRODUCE\r\n```\r\n nxos_vpc:\r\n domain: 10\r\n pkl_src: 1.1.1.2\r\n pkl_dest: 1.1.1.1\r\n pkl_vrf: default\r\n```\r\n\r\n##### EXPECTED RESULTS\r\n```\r\n \"commands\": [\r\n \"vpc domain 10\",\r\n \"peer-keepalive destination 1.1.1.1 source 1.1.1.2 vrf default\",\r\n```\r\n##### ACTUAL RESULTS\r\n```\r\n \"commands\": [\r\n \"vpc domain 10\",\r\n \"peer-keepalive destination 1.1.1.1 source 1.1.1.2\",\r\n```\r\n\r\n", "code": null, "pr_html_url": "https://github.com/ansible/ansible/pull/57370", "commit_html_url": null, "file_loc": {"base_commit": "cc00f21a358923c03e334e245d58df0853d10661", "files": [{"path": "lib/ansible/modules/network/nxos/nxos_vpc.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [60, 63, 277]}, "(None, 'main', 317)": {"add": [396], "mod": [392]}, "(None, 'get_vpc', 222)": {"mod": [265, 266, 267, 268, 269, 270, 271, 272, 273, 274]}, "(None, 'get_commands_to_config_vpc', 278)": {"mod": [288]}}}, {"path": "test/units/modules/network/nxos/test_nxos_vpc.py", "status": "modified", "Loc": {"('TestNxosVpcModule', 'setUp', 31)": {"add": [33]}, "('TestNxosVpcModule', 'tearDown', 40)": {"add": [41]}, "('TestNxosVpcModule', 'load_fixtures', 45)": {"add": [54], "mod": [56]}, "('TestNxosVpcModule', 'test_nxos_vpc_present', 58)": {"add": [66]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["lib/ansible/modules/network/nxos/nxos_vpc.py"], "doc": [], "test": ["test/units/modules/network/nxos/test_nxos_vpc.py"], "config": [], "asset": []}}, {"organization": "ansible", "repo_name": "ansible", "base_commit": "44b53141748d29220441e0799b54ea3130ac6753", "iss_html_url": "https://github.com/ansible/ansible/issues/78076", "iss_label": "support:core\nhas_pr\ndocs\naffects_2.12", "title": "Minor change to the getting started diagram", "body": "### Summary\n\nI was looking through the new Ansible getting started guide and noticed one of the nodes in the diagram has a duplicate label. 
s/node 2/node 3\n\n### Issue Type\n\nDocumentation Report\n\n### Component Name\n\nhttps://github.com/ansible/ansible/blob/devel/docs/docsite/rst/images/ansible_basic.svg\n\n### Ansible Version\n\n```console\n$ ansible --version\r\nansible [core 2.12.6]\r\n config file = /etc/ansible/ansible.cfg\r\n configured module search path = ['/home/dnaro/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']\r\n ansible python module location = /usr/lib/python3.10/site-packages/ansible\r\n ansible collection location = /home/dnaro/.ansible/collections:/usr/share/ansible/collections\r\n executable location = /usr/bin/ansible\r\n python version = 3.10.4 (main, Mar 25 2022, 00:00:00) [GCC 12.0.1 20220308 (Red Hat 12.0.1-0)]\r\n jinja version = 3.0.3\r\n libyaml = True\n```\n\n\n### Configuration\n\n```console\n$ ansible-config dump --only-changed -t all\r\n\r\nBECOME:\r\n======\r\n\r\nCACHE:\r\n=====\r\n\r\nCALLBACK:\r\n========\r\n\r\nCLICONF:\r\n=======\r\n\r\nCONNECTION:\r\n==========\r\n\r\nHTTPAPI:\r\n=======\r\n\r\nINVENTORY:\r\n=========\r\n\r\nLOOKUP:\r\n======\r\n\r\n:...skipping...\r\n\r\nBECOME:\r\n======\r\n\r\nCACHE:\r\n=====\r\n\r\nCALLBACK:\r\n========\r\n\r\nCLICONF:\r\n=======\r\n\r\nCONNECTION:\r\n==========\r\n\r\nHTTPAPI:\r\n=======\r\n\r\nINVENTORY:\r\n=========\r\n\r\nLOOKUP:\r\n======\r\n\r\nNETCONF:\r\n:...skipping...\r\n\r\nBECOME:\r\n======\r\n\r\nCACHE:\r\n=====\r\n\r\nCALLBACK:\r\n========\r\n\r\nCLICONF:\r\n=======\r\n\r\nCONNECTION:\r\n==========\r\n\r\nHTTPAPI:\r\n=======\r\n\r\nINVENTORY:\r\n=========\r\n\r\nLOOKUP:\r\n======\r\n\r\nNETCONF:\r\n=======\r\n\r\n:...skipping...\r\n\r\nBECOME:\r\n======\r\n\r\nCACHE:\r\n=====\r\n\r\nCALLBACK:\r\n========\r\n\r\nCLICONF:\r\n=======\r\n\r\nCONNECTION:\r\n==========\r\n\r\nHTTPAPI:\r\n=======\r\n\r\nINVENTORY:\r\n=========\r\n\r\nLOOKUP:\r\n======\r\n\r\nNETCONF:\r\n=======\r\n\r\nSHELL:\r\n:...skipping...\r\n\r\nBECOME:\r\n======\r\n\r\nCACHE:\r\n=====\r\n\r\nCALLBACK:\r\n========\r\n\r\nCLICONF:\r\n=======\r\n\r\nCONNECTION:\r\n==========\r\n\r\nHTTPAPI:\r\n=======\r\n\r\nINVENTORY:\r\n=========\r\n\r\nLOOKUP:\r\n======\r\n\r\nNETCONF:\r\n=======\r\n\r\nSHELL:\r\n=====\r\n\r\n:...skipping...\r\n\r\nBECOME:\r\n======\r\n\r\nCACHE:\r\n=====\r\n\r\nCALLBACK:\r\n========\r\n\r\nCLICONF:\r\n=======\r\n\r\nCONNECTION:\r\n==========\r\n\r\nHTTPAPI:\r\n=======\r\n\r\nINVENTORY:\r\n=========\r\n\r\nLOOKUP:\r\n======\r\n\r\nNETCONF:\r\n=======\r\n\r\nSHELL:\r\n=====\r\n\r\nVARS:\r\n:...skipping...\r\n\r\nBECOME:\r\n======\r\n\r\nCACHE:\r\n=====\r\n\r\nCALLBACK:\r\n========\r\n\r\nCLICONF:\r\n=======\r\n\r\nCONNECTION:\r\n==========\r\n\r\nHTTPAPI:\r\n=======\r\n\r\nINVENTORY:\r\n=========\r\n\r\nLOOKUP:\r\n======\r\n\r\nNETCONF:\r\n=======\r\n\r\nSHELL:\r\n=====\r\n\r\nVARS:\r\n====\r\n(END)...skipping...\r\n\r\nBECOME:\r\n======\r\n\r\nCACHE:\r\n=====\r\n\r\nCALLBACK:\r\n========\r\n\r\nCLICONF:\r\n=======\r\n\r\nCONNECTION:\r\n==========\r\n\r\nHTTPAPI:\r\n=======\r\n\r\nINVENTORY:\r\n=========\r\n\r\nLOOKUP:\r\n======\r\n\r\nNETCONF:\r\n=======\r\n\r\nSHELL:\r\n=====\r\n\r\nVARS:\r\n====\r\n~\r\n(END)...skipping...\r\n\r\nBECOME:\r\n======\r\n\r\nCACHE:\r\n=====\r\n\r\nCALLBACK:\r\n========\r\n\r\nCLICONF:\r\n=======\r\n\r\nCONNECTION:\r\n==========\r\n\r\nHTTPAPI:\r\n=======\r\n\r\nINVENTORY:\r\n=========\r\n\r\nLOOKUP:\r\n======\r\n\r\nNETCONF:\r\n=======\r\n\r\nSHELL:\r\n=====\r\n\r\nVARS:\r\n====\r\n~\r\n~\r\n(END)...skipping...\r\n\r\nBECOME:\r\n======\r\n\r\nCACHE:\r\n=====\r\n\r\nCALLBACK:\r\n========\r\n\r\nCLICONF:
\r\n=======\r\n\r\nCONNECTION:\r\n==========\r\n\r\nHTTPAPI:\r\n=======\r\n\r\nINVENTORY:\r\n=========\r\n\r\nLOOKUP:\r\n======\r\n\r\nNETCONF:\r\n=======\r\n\r\nSHELL:\r\n=====\r\n\r\nVARS:\r\n====\r\n~\r\n~\r\n~\r\n(END)...skipping...\r\n\r\nBECOME:\r\n======\r\n\r\nCACHE:\r\n=====\r\n\r\nCALLBACK:\r\n========\r\n\r\nCLICONF:\r\n=======\r\n\r\nCONNECTION:\r\n==========\r\n\r\nHTTPAPI:\r\n=======\r\n\r\nINVENTORY:\r\n=========\r\n\r\nLOOKUP:\r\n======\r\n\r\nNETCONF:\r\n=======\r\n\r\nSHELL:\r\n=====\r\n\r\nVARS:\r\n====\r\n~\r\n~\r\n~\r\n~\r\n(END)...skipping...\r\n\r\nBECOME:\r\n======\r\n\r\nCACHE:\r\n=====\r\n\r\nCALLBACK:\r\n========\r\n\r\nCLICONF:\r\n=======\r\n\r\nCONNECTION:\r\n==========\r\n\r\nHTTPAPI:\r\n=======\r\n\r\nINVENTORY:\r\n=========\r\n\r\nLOOKUP:\r\n======\r\n\r\nNETCONF:\r\n=======\r\n\r\nSHELL:\r\n=====\r\n\r\nVARS:\r\n====\r\n~\r\n~\r\n~\r\n~\n```\n\n\n### OS / Environment\n\nFedora 36\n\n### Additional Information\n\nIt corrects something that is wrong.\n\n### Code of Conduct\n\n- [X] I agree to follow the Ansible Code of Conduct", "code": null, "pr_html_url": "https://github.com/ansible/ansible/pull/78077", "commit_html_url": null, "file_loc": {"base_commit": "44b53141748d29220441e0799b54ea3130ac6753", "files": [{"path": "docs/docsite/rst/images/ansible_basic.svg", "status": "modified", "Loc": {"(None, None, 27)": {"mod": [27, 28, 29]}, "(None, None, 35)": {"mod": [35]}, "(None, None, 51)": {"mod": [51]}, "(None, None, 67)": {"mod": [67]}, "(None, None, 192)": {"mod": [192]}, "(None, None, 194)": {"mod": [194, 195, 196, 197, 198, 199, 200]}, "(None, None, 203)": {"mod": [203]}, "(None, None, 205)": {"mod": [205]}, "(None, None, 207)": {"mod": [207]}, "(None, None, 209)": {"mod": [209]}, "(None, None, 211)": {"mod": [211]}, "(None, None, 213)": {"mod": [213]}, "(None, None, 215)": {"mod": [215]}, "(None, None, 217)": {"mod": [217]}, "(None, None, 219)": {"mod": [219]}, "(None, None, 221)": {"mod": [221]}, "(None, None, 223)": {"mod": [223, 224, 225, 226]}, "(None, None, 230)": {"mod": [230]}, "(None, None, 233)": {"mod": [233]}, "(None, None, 236)": {"mod": [236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323]}, "(None, None, 326)": {"mod": [326, 327, 328, 329, 330, 331, 332]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["docs/docsite/rst/images/ansible_basic.svg"], "test": [], "config": [], "asset": []}}, {"organization": "ansible", "repo_name": "ansible", "base_commit": "0335d05f437eb59bcb77a58ef7819562f298ba79", "iss_html_url": "https://github.com/ansible/ansible/issues/3730", "iss_label": "", "title": "ansible stacktrace", "body": "simple ansible facts now stack trace:\n\n```\nansible -m setup -c local -i ~/hosts 127.0.0.1\n```\n\n127.0.0.1 | FAILED => Traceback (most recent call last):\n File \"/home/bcoca/work/ansible/lib/ansible/runner/**init**.py\", line 367, in _executor\n exec_rc = self._executor_internal(host, new_stdin)\n File \"/home/bcoca/work/ansible/lib/ansible/runner/__init__.py\", line 389, in 
_executor_internal\n host_variables = self.inventory.get_variables(host)\n File \"/home/bcoca/work/ansible/lib/ansible/inventory/__init__.py\", line 284, in get_variables\n self._vars_per_host[hostname] = self._get_variables(hostname)\n File \"/home/bcoca/work/ansible/lib/ansible/inventory/__init__.py\", line 294, in _get_variables\n vars_results = [ plugin.run(host) for plugin in self._vars_plugins ]\n File \"/home/bcoca/work/ansible/lib/ansible/inventory/vars_plugins/group_vars.py\", line 43, in run\n self.pb_basedir = os.path.abspath(inventory.playbook_basedir())\n File \"/usr/lib/python2.7/posixpath.py\", line 343, in abspath\n if not isabs(path):\n File \"/usr/lib/python2.7/posixpath.py\", line 53, in isabs\n return s.startswith('/')\nAttributeError: 'NoneType' object has no attribute 'startswith'\n\nbisect showed 16efb45735899737aacc106f89014ee9551fd625 as culprit\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/ansible/ansible/commit/0335d05f437eb59bcb77a58ef7819562f298ba79", "file_loc": {"base_commit": "0335d05f437eb59bcb77a58ef7819562f298ba79", "files": [{"path": "lib/ansible/inventory/vars_plugins/group_vars.py", "status": "modified", "Loc": {"('VarsModule', 'run', 38)": {"mod": [43]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["lib/ansible/inventory/vars_plugins/group_vars.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ansible", "repo_name": "ansible", "base_commit": "f841c2803a1e36bb6f392c466d36b669f9243464", "iss_html_url": "https://github.com/ansible/ansible/issues/77073", "iss_label": "module\nsupport:core\nfeature\nP3\naffects_2.13", "title": "Add support for deb822 apt sources with apt_repository", "body": "### Summary\n\nDebian has deprecated APT's original `sources.list` file format. As of Debian 11 (and Ubuntu 20.10), APT uses [the newer \"DEB822\" format](https://manpages.debian.org/unstable/apt/sources.list.5.en.html#DEB822-STYLE_FORMAT) by default. This format has been supported since APT 1.1, which goes back to Ubuntu 16.04 and Debian 9. 
\r\n\r\nAnsible should generate DEB822 `.sources` files instead of legacy `.list` files on supported systems.\n\n### Issue Type\n\nFeature Idea\n\n### Component Name\n\napt_repository\n\n### Additional Information\n\nHere's an example of the deb822 format:\r\n\r\n```\r\nTypes: deb\r\nURIs: http://deb.debian.org/debian\r\nSuites: bullseye\r\nComponents: main contrib non-free\r\n```\r\n\r\nThe `apt_repository` module can behave a lot more like the `yum_repository` one with this new format.\r\n\n\n### Code of Conduct\n\n- [X] I agree to follow the Ansible Code of Conduct", "code": null, "pr_html_url": "https://github.com/ansible/ansible/pull/80018", "commit_html_url": null, "file_loc": {"base_commit": "f841c2803a1e36bb6f392c466d36b669f9243464", "files": [{"path": "test/integration/targets/setup_deb_repo/tasks/main.yml", "status": "modified", "Loc": {"(None, None, 61)": {"add": [61]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["test/integration/targets/setup_deb_repo/tasks/main.yml"], "asset": []}}, {"organization": "ansible", "repo_name": "ansible", "base_commit": "6a71aef6c5b2f4c26d5f6522cd5b1a85cd78ee6b", "iss_html_url": "https://github.com/ansible/ansible/issues/58126", "iss_label": "networking\npython3\nmodule\nsupport:network\nbug\naffects_2.8\nios\ncisco", "title": "ios_facts module not enumerating ansible_net_model in Ansible 2.8", "body": "\r\n\r\n\r\n\r\n##### SUMMARY\r\n\r\nios_facts module not enumerating ansible_net_model in Ansible 2.8\r\n\r\n##### ISSUE TYPE\r\n- Bug Report\r\n\r\n##### COMPONENT NAME\r\n\r\nios_facts\r\n\r\n##### ANSIBLE VERSION\r\n\r\n```paste below\r\nansible 2.8.1\r\n config file = /home/ryan/test/ansible.cfg\r\n configured module search path = ['/home/ryan/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']\r\n ansible python module location = /usr/local/lib/python3.6/site-packages/ansible\r\n executable location = /usr/local/bin/ansible\r\n python version = 3.6.8 (default, Apr 25 2019, 21:02:35) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]\r\n```\r\n\r\n##### CONFIGURATION\r\n\r\n```paste below\r\n $ ansible-config dump --only-changed\r\nDEFAULT_GATHERING(/home/ryan/test/ansible.cfg) = explicit\r\nDEFAULT_HOST_LIST(/home/ryan/test/ansible.cfg) = ['/home/ryan/test/inventory']\r\nDEPRECATION_WARNINGS(/home/ryan/test/ansible.cfg) = False\r\nHOST_KEY_CHECKING(/home/ryan/test/ansible.cfg) = False\r\nPERSISTENT_COMMAND_TIMEOUT(/home/ryan/test/ansible.cfg) = 30\r\nPERSISTENT_CONNECT_TIMEOUT(/home/ryan/test/ansible.cfg) = 30\r\nRETRY_FILES_ENABLED(/home/ryan/test/ansible.cfg) = False\r\n\r\n```\r\n\r\n##### OS / ENVIRONMENT\r\n\r\nHost OS: CentOS 7 virtual machine (VMware player)\r\nPython versions: Reproducible on 2.7.5 and 3.6\r\n\r\nTested on:\r\nCSR1000v running IOS-XE 16.09.03\r\nISR4331 running IOS-XE 16.06.03\r\nCatalyst 3850 running IOS-XE 03.06.03E\r\n\r\n##### STEPS TO REPRODUCE\r\n\r\nRun playbook to gather ios_facts, ansible_net_model is not included in any subset. 
Should always be included, per: https://docs.ansible.com/ansible/latest/modules/ios_facts_module.html\r\n\r\n\r\n```yaml\r\n name: IOS Facts gathering\r\n hosts: CSRTEST\r\n connection: network_cli\r\n gather_facts: yes\r\n tasks:\r\n - name: Gather facts from device\r\n ios_facts:\r\n gather_subset: all\r\n```\r\n\r\n\r\n\r\n##### EXPECTED RESULTS\r\n\r\nExpecting ansible_net_model back as one of the facts gathered.\r\n\r\n##### ACTUAL RESULTS\r\n\r\n\r\n```paste below\r\nTASK [Gather facts from device] ****************************************************************************************************************************************************************************************************************************************\r\ntask path: /home/ryan/test/test_facts.yml:6\r\n attempting to start connection\r\n using connection plugin network_cli\r\n found existing local domain socket, using it!\r\n updating play_context for connection\r\n\r\n local domain socket path is /home/ryan/.ansible/pc/5485150d9c\r\n ESTABLISH LOCAL CONNECTION FOR USER: ryan\r\n EXEC /bin/sh -c '( umask 77 && mkdir -p \"` echo /home/ryan/.ansible/tmp/ansible-local-228682ct6pdp_/ansible-tmp-1561047456.179465-161563255687379 `\" && echo ansible-tmp-1561047456.179465-161563255687379=\"` echo /home/ryan/.ansible/tmp/ansible-local-228682ct6pdp_/ansible-tmp-1561047456.179465-161563255687379 `\" ) && sleep 0'\r\nUsing module file /usr/local/lib/python3.6/site-packages/ansible/modules/network/ios/ios_facts.py\r\n PUT /home/ryan/.ansible/tmp/ansible-local-228682ct6pdp_/tmp6gh5jigs TO /home/ryan/.ansible/tmp/ansible-local-228682ct6pdp_/ansible-tmp-1561047456.179465-161563255687379/AnsiballZ_ios_facts.py\r\n EXEC /bin/sh -c 'chmod u+x /home/ryan/.ansible/tmp/ansible-local-228682ct6pdp_/ansible-tmp-1561047456.179465-161563255687379/ /home/ryan/.ansible/tmp/ansible-local-228682ct6pdp_/ansible-tmp-1561047456.179465-161563255687379/AnsiballZ_ios_facts.py && sleep 0'\r\n EXEC /bin/sh -c '/usr/bin/python /home/ryan/.ansible/tmp/ansible-local-228682ct6pdp_/ansible-tmp-1561047456.179465-161563255687379/AnsiballZ_ios_facts.py && sleep 0'\r\n EXEC /bin/sh -c 'rm -f -r /home/ryan/.ansible/tmp/ansible-local-228682ct6pdp_/ansible-tmp-1561047456.179465-161563255687379/ > /dev/null 2>&1 && sleep 0'\r\nok: [CSRTEST] => {\r\n \"ansible_facts\": {\r\n \"ansible_net_all_ipv4_addresses\": [\r\n \"192.168.102.133\"\r\n ],\r\n \"ansible_net_all_ipv6_addresses\": [],\r\n \"ansible_net_api\": \"cliconf\",\r\n \"ansible_net_config\": \"!\\n! Last configuration change at 16:13:51 UTC Thu Jun 20 2019\\n!\\nversion 16.9\\nservice timestamps debug datetime msec\\nservice timestamps log datetime msec\\nplatform qfp utilization monitor load 80\\nno platform punt-keepalive disable-kernel-core\\nplatform console virtual\\n!\\nhostname CSRTEST\\n!\\nboot-start-marker\\nboot-end-marker\\n!\\n!\\n!\\nno aaa new-model\\n!\\n!\\n!\\n!\\n!\\n!\\n!\\n!\\n!\\n!\\nlogin on-success log\\n!\\n!\\n!\\n!\\n!\\n!\\n!\\nsubscriber templating\\n! \\n! \\n! \\n! 
\\n!\\nmultilink bundle-name authenticated\\n!\\n!\\n!\\n!\\n!\\ncrypto pki trustpoint TP-self-signed-3768273344\\n enrollment selfsigned\\n subject-name cn=IOS-Self-Signed-Certificate-3768273344\\n revocation-check none\\n rsakeypair TP-self-signed-3768273344\\n!\\n!\\ncrypto pki certificate chain TP-self-signed-3768273344\\n certificate self-signed 01\\n 30820330 30820218 A0030201 02020101 300D0609 2A864886 F70D0101 05050030 \\n 31312F30 2D060355 04031326 494F532D 53656C66 2D536967 6E65642D 43657274 \\n 69666963 6174652D 33373638 32373333 3434301E 170D3139 30363230 31363134 \\n 30395A17 0D333030 31303130 30303030 305A3031 312F302D 06035504 03132649 \\n 4F532D53 656C662D 5369676E 65642D43 65727469 66696361 74652D33 37363832 \\n 37333334 34308201 22300D06 092A8648 86F70D01 01010500 0382010F 00308201 \\n 0A028201 0100891F 68316AAF AF54176F 7D9C39F5 E34FB187 F4D88C88 8265FDE9 \\n B3A338A1 FADD5622 1A2887D2 1E655477 9EDEA72C 94EAB9C4 744C428C 83BC30A1 \\n E18B6EBC 69856EC8 4F5E8649 9D442076 3544F7D1 01AC0B0B 76E9CBE1 AEFA2C4A \\n 4EB0EE8B 29895287 97A9C7CC 586A0241 19DC79E9 35A415A5 7D976DAB 7E072350 \\n C2617E80 F8DB84D1 CFC0EBE5 3194A8C4 2E7AAC3C 7F97D423 2B016D97 C12164A6 \\n D75B73E8 A9EA96ED 079CAB76 2B8DEA2E BBB61836 C913E020 B0F7659D DA4CF838 \\n 7FCC72B5 522932D6 37196DD2 2897D197 BD6FD0C0 576CED54 85A7C94B 029BC4A3 \\n F0D7F7CC 4AAFC50A 297B6E6E ECF97699 2062D939 38DD585D E78A2794 40381513 \\n 75AEAA98 F8550203 010001A3 53305130 0F060355 1D130101 FF040530 030101FF \\n 301F0603 551D2304 18301680 147DF3A5 74A80322 7F0D4A33 C839CE1E 479BCFD0 \\n 8C301D06 03551D0E 04160414 7DF3A574 A803227F 0D4A33C8 39CE1E47 9BCFD08C \\n 300D0609 2A864886 F70D0101 05050003 82010100 87C47448 FAE908F7 47B564D7 \\n 992A8E16 24966357 D0B864AB B32BB538 6A5371F3 0BF093E8 D0E461AC 2ED99B84 \\n 768E700C A88464AA B8E0B774 2308D4A2 881495B7 AFE1F6D7 3D25AFEE 2A7D6653 \\n 6814B4AC E4189640 15C0003E 1E1EE9B1 6E3FF371 448CA017 DA622BCD 49EF07C5 \\n FB4D6859 208FF4FE 29AEB2F3 BB9BA26E 1D140B6A B2C4DADA 913D4846 84370AF0 \\n A67E3D78 F0E9CE1E 9D344542 2732C2A7 70A50162 B32BBE36 BF3382AD 641DB7A6 \\n 1AE1FD10 2CFEC3A6 1ACCD4FD 58E48276 9F2417F4 1871A9F7 11C61604 09E4BBEB \\n 2D821D14 815A48FC 7B14A7C2 8766F1B1 7C04112A 139DB760 EFF339D0 1BA82B52 \\n 5E85BBA9 3FC49134 4FEDD944 BA27F4A4 1317652C\\n \\tquit\\n!\\n!\\n!\\n!\\n!\\n!\\n!\\n!\\nlicense udi pid CSR1000V sn 9U4DE1R3P2Y\\nlicense boot level ax\\nno license smart enable\\ndiagnostic bootup level minimal\\n!\\nspanning-tree extend system-id\\n!\\n!\\n!\\nusername ansible privilege 15 secret 5 $1$Ax9o$F2JTz/1dXjNSB21muGqxU1\\n!\\nredundancy\\n!\\n!\\n!\\n!\\n!\\n!\\n! \\n!\\n!\\n!\\n!\\n!\\n!\\n!\\n!\\n!\\n!\\n!\\n!\\n! \\n! 
\\n!\\n!\\ninterface GigabitEthernet1\\n ip address dhcp\\n negotiation auto\\n no mop enabled\\n no mop sysid\\n!\\ninterface GigabitEthernet2\\n no ip address\\n shutdown\\n negotiation auto\\n no mop enabled\\n no mop sysid\\n!\\ninterface GigabitEthernet3\\n no ip address\\n shutdown\\n negotiation auto\\n no mop enabled\\n no mop sysid\\n!\\nip forward-protocol nd\\nip http server\\nip http authentication local\\nip http secure-server\\nip route 0.0.0.0 0.0.0.0 GigabitEthernet1 dhcp\\n!\\nip ssh version 2\\n!\\n!\\n!\\n!\\n!\\ncontrol-plane\\n!\\n!\\n!\\n!\\n!\\n!\\nline con 0\\n stopbits 1\\nline vty 0 4\\n login local\\nline vty 5 15\\n login local\\n!\\n!\\n!\\n!\\n!\\n!\\nend\",\r\n \"ansible_net_filesystems\": [\r\n \"bootflash:\"\r\n ],\r\n \"ansible_net_filesystems_info\": {\r\n \"bootflash:\": {\r\n \"spacefree_kb\": 6801160,\r\n \"spacetotal_kb\": 7712692\r\n }\r\n },\r\n \"ansible_net_gather_subset\": [\r\n \"hardware\",\r\n \"default\",\r\n \"interfaces\",\r\n \"config\"\r\n ],\r\n \"ansible_net_hostname\": \"CSRTEST\",\r\n \"ansible_net_image\": \"bootflash:packages.conf\",\r\n \"ansible_net_interfaces\": {\r\n \"GigabitEthernet1\": {\r\n \"bandwidth\": 1000000,\r\n \"description\": null,\r\n \"duplex\": \"Full\",\r\n \"ipv4\": [\r\n {\r\n \"address\": \"192.168.102.133\",\r\n \"subnet\": \"24\"\r\n }\r\n ],\r\n \"lineprotocol\": \"up\",\r\n \"macaddress\": \"000c.29a5.1122\",\r\n \"mediatype\": \"Virtual\",\r\n \"mtu\": 1500,\r\n \"operstatus\": \"up\",\r\n \"type\": \"CSR vNIC\"\r\n },\r\n \"GigabitEthernet2\": {\r\n \"bandwidth\": 1000000,\r\n \"description\": null,\r\n \"duplex\": \"Full\",\r\n \"ipv4\": [],\r\n \"lineprotocol\": \"down\",\r\n \"macaddress\": \"000c.29a5.112c\",\r\n \"mediatype\": \"Virtual\",\r\n \"mtu\": 1500,\r\n \"operstatus\": \"administratively down\",\r\n \"type\": \"CSR vNIC\"\r\n },\r\n \"GigabitEthernet3\": {\r\n \"bandwidth\": 1000000,\r\n \"description\": null,\r\n \"duplex\": \"Full\",\r\n \"ipv4\": [],\r\n \"lineprotocol\": \"down\",\r\n \"macaddress\": \"000c.29a5.1136\",\r\n \"mediatype\": \"Virtual\",\r\n \"mtu\": 1500,\r\n \"operstatus\": \"administratively down\",\r\n \"type\": \"CSR vNIC\"\r\n }\r\n },\r\n \"ansible_net_iostype\": \"IOS-XE\",\r\n \"ansible_net_memfree_mb\": 1863849,\r\n \"ansible_net_memtotal_mb\": 2182523,\r\n \"ansible_net_neighbors\": {},\r\n \"ansible_net_python_version\": \"2.7.5\",\r\n \"ansible_net_serialnum\": \"9U4DE1R3P2Y\",\r\n \"ansible_net_system\": \"ios\",\r\n \"ansible_net_version\": \"16.09.03\"\r\n },\r\n \"changed\": false,\r\n \"invocation\": {\r\n \"module_args\": {\r\n \"auth_pass\": null,\r\n \"authorize\": null,\r\n \"gather_subset\": [\r\n \"all\"\r\n ],\r\n \"host\": null,\r\n \"password\": null,\r\n \"port\": null,\r\n \"provider\": null,\r\n \"ssh_keyfile\": null,\r\n \"timeout\": null,\r\n \"username\": null\r\n }\r\n }\r\n}\r\nMETA: ran handlers\r\nMETA: ran handlers\r\n\r\nPLAY RECAP *************************************************************************************************************************************************************************************************************************************************************\r\nCSRTEST : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0\r\n\r\n```\r\n", "code": null, "pr_html_url": "https://github.com/ansible/ansible/pull/58174", "commit_html_url": null, "file_loc": {"base_commit": "6a71aef6c5b2f4c26d5f6522cd5b1a85cd78ee6b", "files": [{"path": "lib/ansible/plugins/cliconf/ios.py", "status": "modified", "Loc": 
{"('Cliconf', 'get_device_info', 199)": {"mod": [210, 211, 212]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["lib/ansible/plugins/cliconf/ios.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ansible", "repo_name": "ansible", "base_commit": "d1cd6ee56d492deef40f6f2f178832a1815730a5", "iss_html_url": "https://github.com/ansible/ansible/issues/37734", "iss_label": "cloud\nazure\nmodule\naffects_2.4\nsupport:certified\nfeature", "title": "Add network interface to Load Balancer Backend pool in azure_rm_networkinterface", "body": "##### ISSUE TYPE\r\n - Feature Idea\r\n\r\n##### COMPONENT NAME\r\nazure_rm_networkinterface\r\n\r\n##### ANSIBLE VERSION\r\n```\r\nansible --version\r\nansible 2.4.3.0\r\n config file = /etc/ansible/ansible.cfg\r\n configured module search path = [u'/home/dgermain/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']\r\n ansible python module location = /usr/lib/python2.7/dist-packages/ansible\r\n executable location = /usr/bin/ansible\r\n python version = 2.7.12 (default, Dec 4 2017, 14:50:18) [GCC 5.4.0 20160609]\r\n```\r\n\r\n##### CONFIGURATION\r\n```\r\nansible-config dump --only-changed\r\n#empty return\r\n```\r\n\r\n##### OS / ENVIRONMENT\r\nN/A\r\n\r\n##### SUMMARY\r\nIn current azure loadbalancer module, you can create Backend pools, but you don't have the possibility to add network interfaces in this Backend pool, neither in *azure_rm_networkinterface* nor in *azure_rm_loadbalancer*.\r\nAs an example, this feature is present in Powershell azure CLI, when handling network interfaces :\r\n```\r\n $nic = Get-AzurermNetworkInterface -name $virtualnetworkcardname\" -resourcegroupname $resourceGroup\r\n $nic.IpConfigurations[0].LoadBalancerBackendAddressPools=$backend\r\n Set-AzureRmNetworkInterface -NetworkInterface $nic\r\n```\r\n\r\n##### STEPS TO REPRODUCE\r\nAs far as I can tell, you don't have this option in the ansible module\r\n\r\n##### EXPECTED RESULTS\r\nHave an option to allow this\r\n\r\n##### ACTUAL RESULTS\r\nNo option to do so", "code": null, "pr_html_url": "github.com/ansible/ansible/pull/38643", "commit_html_url": null, "file_loc": {"base_commit": "d1cd6ee56d492deef40f6f2f178832a1815730a5", "files": [{"path": "lib/ansible/module_utils/azure_rm_common.py", "status": "modified", "Loc": {"('AzureRMModuleBase', None, 216)": {"add": [605]}, "(None, None, None)": {"mod": [131]}}}, {"path": "lib/ansible/modules/cloud/azure/azure_rm_networkinterface.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [66, 73, 153, 198, 210, 239, 351], "mod": [55, 57, 58, 59, 60, 61, 62, 63, 64, 65, 68, 127, 128, 160, 162, 163, 165, 186, 197, 207, 220, 222, 233, 234, 277, 286]}, "(None, 'nic_to_dict', 306)": {"add": [313]}, "('AzureRMNetworkInterface', 'exec_module', 411)": {"add": [525], "mod": [427, 428, 429, 431, 432, 435, 468, 469, 470, 473, 477, 514, 515, 516, 530, 532, 534]}, "('AzureRMNetworkInterface', None, 356)": {"add": [600], "mod": [594]}, "('AzureRMNetworkInterface', 'construct_ip_configuration_set', 601)": {"add": [606]}, "('AzureRMNetworkInterface', '__init__', 358)": {"mod": [364, 371, 372, 380, 386, 392, 393, 397]}, "('AzureRMNetworkInterface', 'get_security_group', 594)": {"mod": [597]}}}, {"path": "test/integration/targets/azure_rm_networkinterface/tasks/main.yml", "status": "modified", "Loc": {"(None, None, 19)": {"add": [19]}, "(None, None, 124)": 
{"add": [124]}, "(None, None, 131)": {"add": [131]}, "(None, None, 148)": {"add": [148]}, "(None, None, 164)": {"add": [164]}, "(None, None, 179)": {"add": [179]}, "(None, None, 189)": {"add": [189]}, "(None, None, 36)": {"mod": [36]}, "(None, None, 40)": {"mod": [40, 41]}, "(None, None, 43)": {"mod": [43]}, "(None, None, 48)": {"mod": [48]}, "(None, None, 52)": {"mod": [52, 53]}, "(None, None, 55)": {"mod": [55]}, "(None, None, 78)": {"mod": [78]}, "(None, None, 90)": {"mod": [90]}, "(None, None, 113)": {"mod": [113]}, "(None, None, 137)": {"mod": [137]}, "(None, None, 159)": {"mod": [159]}, "(None, None, 176)": {"mod": [176]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code\nConfig\nTest"}, "loctype": {"code": ["lib/ansible/modules/cloud/azure/azure_rm_networkinterface.py", "lib/ansible/module_utils/azure_rm_common.py"], "doc": [], "test": [], "config": ["test/integration/targets/azure_rm_networkinterface/tasks/main.yml"], "asset": []}}, {"organization": "ansible", "repo_name": "ansible", "base_commit": "6f8c1da0c805f334b8598fd2556f7ed92dc9348e", "iss_html_url": "https://github.com/ansible/ansible/issues/79277", "iss_label": "bug\ntraceback\naffects_2.13", "title": "ansible-test fails to report the proper error when validating ansible-doc", "body": "### Summary\n\nThe utility ansible-test sanity is fantastic and does its job. Unfortunately, when validating the ansible-doc, if the YAML is malformed, you'll get a parsing error instead of the actual YAML error.\n\n### Issue Type\n\nBug Report\n\n### Component Name\n\nansible-test\n\n### Ansible Version\n\n```console\n$ ansible --version\r\nansible [core 2.13.6rc1.post0] (stable-2.13 33852737fd) last updated 2022/10/31 21:51:24 (GMT +200)\r\n config file = None\r\n configured module search path = ['/home/warkdev/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']\r\n ansible python module location = /home/warkdev/ansible/lib/ansible\r\n ansible collection location = /home/warkdev/.ansible/collections:/usr/share/ansible/collections\r\n executable location = /home/warkdev/ansible/bin/ansible\r\n python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]\r\n jinja version = 3.1.2\r\n libyaml = False\n```\n\n\n### Configuration\n\n```console\n# if using a version older than ansible-core 2.12 you should omit the '-t all'\r\n$ ansible-config dump --only-changed -t all\n```\n\n\n### OS / Environment\n\nDebian 12\n\n### Steps to Reproduce\n\n* Generate an ansible module that you want to validate and introduce invalid YAML syntax in the ansible-doc\r\n* Run ansible-test sanity against that module\r\n* Verify that the error is happening\r\n\r\nI've tracked down the issue till this code: https://github.com/ansible/ansible/blob/stable-2.13/test/lib/ansible_test/_util/controller/sanity/validate-modules/validate_modules/utils.py#L157\n\n### Expected Results\n\nERROR: Found 2 yamllint issue(s) which need to be resolved:\r\nERROR: plugins/modules/axway_cft_about_info.py:36:15: error: RETURN: syntax error: mapping values are not allowed here (syntax)\r\nERROR: plugins/modules/axway_cft_about_info.py:36:15: unparsable-with-libyaml: None - mapping values are not allowed in this context\n\n### Actual Results\n\n```console\nTraceback (most recent call last):\r\n File \"/root/ansible/test/lib/ansible_test/_util/controller/sanity/validate-modules/validate_modules/utils.py\", line 153, in parse_yaml\r\n data = 
yaml_load(value, Loader=loader)\r\n File \"/root/.ansible/test/venv/sanity.validate-modules/3.10/487215fd/lib/python3.10/site-packages/yaml/__init__.py\", line 81, in load\r\n return loader.get_single_data()\r\n File \"/root/.ansible/test/venv/sanity.validate-modules/3.10/487215fd/lib/python3.10/site-packages/yaml/constructor.py\", line 49, in get_single_data\r\n node = self.get_single_node()\r\n File \"yaml/_yaml.pyx\", line 673, in yaml._yaml.CParser.get_single_node\r\n File \"yaml/_yaml.pyx\", line 687, in yaml._yaml.CParser._compose_document\r\n File \"yaml/_yaml.pyx\", line 731, in yaml._yaml.CParser._compose_node\r\n File \"yaml/_yaml.pyx\", line 845, in yaml._yaml.CParser._compose_mapping_node\r\n File \"yaml/_yaml.pyx\", line 731, in yaml._yaml.CParser._compose_node\r\n File \"yaml/_yaml.pyx\", line 845, in yaml._yaml.CParser._compose_mapping_node\r\n File \"yaml/_yaml.pyx\", line 731, in yaml._yaml.CParser._compose_node\r\n File \"yaml/_yaml.pyx\", line 847, in yaml._yaml.CParser._compose_mapping_node\r\n File \"yaml/_yaml.pyx\", line 860, in yaml._yaml.CParser._parse_next_event\r\nyaml.scanner.ScannerError: mapping values are not allowed in this context\r\n in \"\", line 9, column 15\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/root/ansible/test/lib/ansible_test/_util/controller/sanity/validate-modules/validate.py\", line 6, in \r\n main()\r\n File \"/root/ansible/test/lib/ansible_test/_util/controller/sanity/validate-modules/validate_modules/main.py\", line 2475, in main\r\n run()\r\n File \"/root/ansible/test/lib/ansible_test/_util/controller/sanity/validate-modules/validate_modules/main.py\", line 2363, in run\r\n mv1.validate()\r\n File \"/root/ansible/test/lib/ansible_test/_util/controller/sanity/validate-modules/validate_modules/main.py\", line 2156, in validate\r\n doc_info, docs = self._validate_docs()\r\n File \"/root/ansible/test/lib/ansible_test/_util/controller/sanity/validate-modules/validate_modules/main.py\", line 1080, in _validate_docs\r\n data, errors, traces = parse_yaml(doc_info['RETURN']['value'],\r\n File \"/root/ansible/test/lib/ansible_test/_util/controller/sanity/validate-modules/validate_modules/utils.py\", line 157, in parse_yaml\r\n e.problem_mark.line += lineno - 1\r\nAttributeError: attribute 'line' of 'yaml._yaml.Mark' objects is not writable\n```\n\n\n### Code of Conduct\n\n- [X] I agree to follow the Ansible Code of Conduct", "code": null, "pr_html_url": "https://github.com/ansible/ansible/pull/79682", "commit_html_url": null, "file_loc": {"base_commit": "6f8c1da0c805f334b8598fd2556f7ed92dc9348e", "files": [{"path": "test/integration/targets/ansible-test-sanity-validate-modules/runme.sh", "status": "modified", "Loc": {"(None, None, 7)": {"mod": [7]}}}, {"path": "test/lib/ansible_test/_util/controller/sanity/validate-modules/validate_modules/utils.py", "status": "modified", "Loc": {"(None, 'parse_yaml', 137)": {"mod": [157, 158, 161]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["test/lib/ansible_test/_util/controller/sanity/validate-modules/validate_modules/utils.py"], "doc": [], "test": [], "config": [], "asset": ["test/integration/targets/ansible-test-sanity-validate-modules/runme.sh"]}}, {"organization": "ansible", "repo_name": "ansible", "base_commit": "d97080174e9bbebd27a967368934ef91d1f28f64", "iss_html_url": 
"https://github.com/ansible/ansible/issues/32070", "iss_label": "networking\naffects_2.4\nsupport:core\nnxos\nbug\ncisco", "title": "Occasional failures with NXOS modules", "body": "##### ISSUE TYPE\r\n - Bug Report\r\n\r\n##### COMPONENT NAME\r\nnxos modules\r\n\r\n##### ANSIBLE VERSION\r\nansible 2.4.0.0\r\n config file = /project-mcuk/ap/ipp/jrosser/cloud/ansible-mist/ansible.cfg\r\n configured module search path = [u'/etc/ansible/roles/plugins/library', u'/usr/local/lib/python2.7/dist-packages/ara/plugins/modules']\r\n ansible python module location = /usr/lib/python2.7/dist-packages/ansible\r\n executable location = /usr/bin/ansible\r\n python version = 2.7.12 (default, Nov 19 2016, 06:48:10) [GCC 5.4.0 20160609]\r\n\r\n\r\n##### CONFIGURATION\r\nDEFAULT_ACTION_PLUGIN_PATH(env: ANSIBLE_ACTION_PLUGINS) = [u'/etc/ansible/roles/plugins/action', u'/usr/local/lib/python2.7/dist-packages/ara/plugins/actions']\r\nDEFAULT_CALLBACK_PLUGIN_PATH(env: ANSIBLE_CALLBACK_PLUGINS) = [u'/etc/ansible/roles/plugins/callback', u'/usr/local/lib/python2.7/dist-packages/ara/plugins/callbacks']\r\nDEFAULT_CALLBACK_WHITELIST(/project-mcuk/ap/ipp/jrosser/cloud/ansible-mist/ansible.cfg) = ['profile_tasks']\r\nDEFAULT_FILTER_PLUGIN_PATH(/project-mcuk/ap/ipp/jrosser/cloud/ansible-mist/ansible.cfg) = [u'/project-mcuk/ap/ipp/jrosser/cloud/ansible-mist/plugins/filter']\r\nDEFAULT_FORCE_HANDLERS(/project-mcuk/ap/ipp/jrosser/cloud/ansible-mist/ansible.cfg) = True\r\nDEFAULT_HOST_LIST(/project-mcuk/ap/ipp/jrosser/cloud/ansible-mist/ansible.cfg) = [u'/project-mcuk/ap/ipp/jrosser/cloud/ansible-mist/inventory']\r\nDEFAULT_MODULE_PATH(env: ANSIBLE_LIBRARY) = [u'/etc/ansible/roles/plugins/library', u'/usr/local/lib/python2.7/dist-packages/ara/plugins/modules']\r\nDEFAULT_ROLES_PATH(/project-mcuk/ap/ipp/jrosser/cloud/ansible-mist/ansible.cfg) = [u'/project-mcuk/ap/ipp/jrosser/cloud/ansible-mist/roles-dev', u'/project-mcuk/ap/ipp/jrosser/cloud/ansible-mist/roles', u'/project-mcuk/ap/ipp/jrosser/cloud/ansible-mist/roles-base-shell']\r\nHOST_KEY_CHECKING(/project-mcuk/ap/ipp/jrosser/cloud/ansible-mist/ansible.cfg) = False\r\n\r\n##### OS / ENVIRONMENT\r\nUbuntu 16.04\r\nNXOS: version 7.0(3)I7(1)\r\n\r\n##### SUMMARY\r\nI observe non deterministic failures with the nxos modules when configuring 9200 series switches, in this specific case a 92160.\r\n\r\n##### STEPS TO REPRODUCE\r\nSadly this is difficult to reproduce. I have a playbook which configures a couple of dozen ports on several switches, each taking a dozen or more tasks. This is a sufficient number of tasks to occasionally trigger a failure of a task. Running the playbook again most likely will result in no errors.\r\n\r\nPlaybook https://gist.github.com/jrosser/b4d88748f5b1323828a8f2f266596ead\r\n\r\n##### EXPECTED RESULTS\r\nAll tasks to run without error. 
Running with -vvvv gives no insight into the communication with the switch so doesn't provide any useful debug.\r\n\r\n##### ACTUAL RESULTS\r\nVery occasionally one or more tasks will fail.\r\n```\r\nTASK [Ensure all layer 2 interfaces are up] ***********************************************************************************************************\r\nTuesday 24 October 2017 10:54:15 +0000 (0:00:21.378) 0:01:00.450 ******* \r\nchanged: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/1', u'description': u'to infra0-1-b505-10'})\r\nchanged: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/1', u'description': u'to infra0-1-b505-10'})\r\nchanged: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/2', u'description': u'to infra0-2-b505-10'})\r\nchanged: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/2', u'description': u'to infra0-2-b505-10'})\r\nchanged: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/3', u'description': u'to infra0-3-b505-10'})\r\nchanged: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/3', u'description': u'to infra0-3-b505-10'})\r\nchanged: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/4', u'description': u'to infra0-4-b505-10'})\r\nchanged: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/4', u'description': u'to infra0-4-b505-10'})\r\nchanged: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/5', u'description': u'to infra0-5-b505-10'})\r\nchanged: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/5', u'description': u'to infra0-5-b505-10'})\r\nchanged: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/6', u'description': u'to infra0-6-b505-10'})\r\nchanged: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/6', u'description': u'to infra0-6-b505-10'})\r\nchanged: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/7', u'description': u'to infra0-7-b505-10'})\r\nchanged: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/7', u'description': u'to infra0-7-b505-10'})\r\nchanged: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/8', u'description': u'to infra0-8-b505-10'})\r\nchanged: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/8', u'description': u'to infra0-8-b505-10'})\r\nchanged: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/9', u'description': u'to infra0-1-b505-9'})\r\nchanged: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/9', u'description': u'to infra0-1-b505-9'})\r\nAn exception occurred during task execution. To see the full traceback, use -vvv. 
The error was: TypeError: string indices must be integers, not str\r\nfailed: [fbs0-b505-10] (item={u'interface': u'Ethernet1/10', u'description': u'to infra0-2-b505-9'}) => {\"changed\": false, \"failed\": true, \"item\": {\"description\": \"to infra0-2-b505-9\", \"interface\": \"Ethernet1/10\"}, \"module_stderr\": \"Traceback (most recent call last):\\n File \\\"/tmp/ansible_SPjZ0l/ansible_module_nxos_interface.py\\\", line 710, in \\n main()\\n File \\\"/tmp/ansible_SPjZ0l/ansible_module_nxos_interface.py\\\", line 701, in main\\n normalized_interface)\\n File \\\"/tmp/ansible_SPjZ0l/ansible_module_nxos_interface.py\\\", line 534, in smart_existing\\n existing = get_interface(normalized_interface, module)\\n File \\\"/tmp/ansible_SPjZ0l/ansible_module_nxos_interface.py\\\", line 281, in get_interface\\n interface_table = body['TABLE_interface']['ROW_interface']\\nTypeError: string indices must be integers, not str\\n\", \"module_stdout\": \"\", \"msg\": \"MODULE FAILURE\", \"rc\": 0}\r\nchanged: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/10', u'description': u'to infra0-2-b505-9'})\r\nchanged: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/11', u'description': u'to infra0-3-b505-9'})\r\nchanged: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/11', u'description': u'to infra0-3-b505-9'})\r\nchanged: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/12', u'description': u'to infra0-4-b505-9'})\r\nchanged: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/12', u'description': u'to infra0-4-b505-9'})\r\nchanged: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/13', u'description': u'to infra0-5-b505-9'})\r\nchanged: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/13', u'description': u'to infra0-5-b505-9'})\r\nchanged: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/14', u'description': u'to infra0-6-b505-9'})\r\nchanged: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/14', u'description': u'to infra0-6-b505-9'})\r\nchanged: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/15', u'description': u'to infra0-7-b505-9'})\r\nchanged: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/15', u'description': u'to infra0-7-b505-9'})\r\nchanged: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/16', u'description': u'to infra0-8-b505-9'})\r\nchanged: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/16', u'description': u'to infra0-8-b505-9'})\r\nok: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/47', u'stp_port_type': u'network', u'description': u'vpc peer link'})\r\nok: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/47', u'stp_port_type': u'network', u'description': u'vpc peer link'})\r\nok: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/48', u'stp_port_type': u'network', u'description': u'vpc peer link'})\r\nok: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/48', u'stp_port_type': u'network', u'description': u'vpc peer link'})\r\n\r\n\r\nTASK [Ensure vrrpv3 is applied for vlans that need it] ************************************************************************************************\r\nTuesday 24 October 2017 11:01:48 +0000 (0:00:11.191) 0:08:33.606 ******* \r\nskipping: [fbs0-b505-9] => (item={u'vrf': u'default', u'vlan_id': 999}) \r\nok: [fbs0-b505-9] => (item={u'vrrpv3': {u'priority': u'102', u'address_family': u'ipv4', u'group_id': 23, u'description': u'storage-clients', u'address': u'10.23.128.5'}, u'vrf': u'STORAGE', u'address': u'10.23.128.1/24', u'interface': u'Vlan1923', u'extra_lines': [u'mtu 9216'], u'vlan_id': 1923})\r\nok: [fbs0-b505-9] => 
(item={u'vrrpv3': {u'priority': u'102', u'address_family': u'ipv4', u'group_id': 21, u'description': u'storage-services', u'address': u'10.21.128.5'}, u'vrf': u'STORAGE', u'address': u'10.21.128.1/24', u'interface': u'Vlan1921', u'extra_lines': [u'mtu 9216'], u'vlan_id': 1921})\r\nok: [fbs0-b505-9] => (item={u'interface': u'Vlan1911', u'vrrpv3': {u'priority': u'102', u'address_family': u'ipv4', u'group_id': 11, u'description': u'osmgmt', u'address': u'10.11.128.5'}, u'vrf': u'OSMGMT', u'vlan_id': 1911, u'address': u'10.11.128.1/24'})\r\nok: [fbs0-b505-9] => (item={u'interface': u'Vlan1931', u'vrrpv3': {u'priority': u'102', u'address_family': u'ipv4', u'group_id': 31, u'description': u'metal', u'address': u'10.31.128.5'}, u'vrf': u'METAL', u'vlan_id': 1931, u'address': u'10.31.128.1/24'})\r\nok: [fbs0-b505-9] => (item={u'interface': u'Vlan1932', u'vrrpv3': {u'priority': u'102', u'address_family': u'ipv4', u'group_id': 32, u'description': u'metal', u'address': u'10.32.128.5'}, u'vrf': u'METAL', u'vlan_id': 1932, u'address': u'10.32.128.1/24'})\r\nfailed: [fbs0-b505-9] (item={u'interface': u'Vlan1941', u'vrrpv3': {u'priority': u'102', u'address_family': u'ipv4', u'group_id': 41, u'description': u'tunnels', u'address': u'10.41.128.5'}, u'vrf': u'TUNNEL', u'vlan_id': 1941, u'address': u'10.41.128.1/24'}) => {\"changed\": false, \"failed\": true, \"item\": {\"address\": \"10.41.128.1/24\", \"interface\": \"Vlan1941\", \"vlan_id\": 1941, \"vrf\": \"TUNNEL\", \"vrrpv3\": {\"address\": \"10.41.128.5\", \"address_family\": \"ipv4\", \"description\": \"tunnels\", \"group_id\": 41, \"priority\": \"102\"}}, \"msg\": \"interface Vlan1941\\r\\r\\n ^\\r\\n% Invalid command at '^' marker.\\r\\n\\rfbs0-b505-9# \"}\r\n```", "code": null, "pr_html_url": "https://github.com/ansible/ansible/pull/32114", "commit_html_url": null, "file_loc": {"base_commit": "d97080174e9bbebd27a967368934ef91d1f28f64", "files": [{"path": "lib/ansible/module_utils/nxos.py", "status": "modified", "Loc": {"('Cli', 'run_commands', 139)": {"add": [171]}, "(None, None, None)": {"mod": [37]}}}, {"path": "lib/ansible/modules/network/nxos/nxos_interface.py", "status": "modified", "Loc": {"(None, 'get_interface', 238)": {"mod": [278, 280, 281, 282, 283, 284, 285, 286, 288, 289, 290, 291, 292, 293, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 315, 316, 317, 318, 319, 320, 321, 322, 324, 325, 326, 327, 329, 330, 331, 332, 333, 334, 335, 336, 338, 339, 340, 341, 342, 343]}, "(None, 'get_interfaces_dict', 361)": {"mod": [372]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": "Code"}, "loctype": {"code": ["lib/ansible/modules/network/nxos/nxos_interface.py", "lib/ansible/module_utils/nxos.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ansible", "repo_name": "ansible", "base_commit": "cc7a5228b02344658dac69c38ccb7d6580d2b4c6", "iss_html_url": "https://github.com/ansible/ansible/issues/34012", "iss_label": "module\naffects_2.4\nnet_tools\nsupport:community\nbug", "title": "nmcli module fails with self.dns4=' '.join(module.params['dns4']) TypeError", "body": "\r\n##### ISSUE TYPE\r\n\r\n - Bug Report\r\n\r\n\r\n##### COMPONENT NAME\r\n\r\n`nmcli`\r\n\r\n##### ANSIBLE VERSION\r\n\r\n```\r\nansible 2.4.1.0\r\n config file = /Users/dlbewley/src/ansible/playbook-openshift/ansible.cfg\r\n configured module search path = [u'/Users/dlbewley/.ansible/plugins/modules', 
u'/usr/share/ansible/plugins/modules']\r\n ansible python module location = /usr/local/Cellar/ansible/2.4.1.0/libexec/lib/python2.7/site-packages/ansible\r\n executable location = /usr/local/bin/ansible\r\n python version = 2.7.14 (default, Sep 25 2017, 09:53:22) [GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.37)]\r\n```\r\n\r\n##### CONFIGURATION\r\n\r\n##### OS / ENVIRONMENT\r\n\r\n- Manager: OS X\r\n- Managed: Red Hat Enterprise Linux Server release 7.4 (Maipo)\r\n\r\n##### SUMMARY\r\n\r\nPlaybook fails when trying to join `None` value for `dns4` param [here](https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/net_tools/nmcli.py#L559)\r\n\r\nI do not see a requirement to include dns servers, and expect to use DHCP.\r\n\r\n##### STEPS TO REPRODUCE\r\n\r\nHost with links on eno1, eno2. Int eno1 is def gw\r\n\r\n\r\n```yaml\r\n---\r\n- hosts: bonded\r\n\r\n# Dec 18 18:08:43 ose-prod-node-07 ansible-nmcli[31031]: Invoked with conn_name=cluster ingress=None slavepriority=32 vlandev=None forwarddelay=15 egress=None ageingtime=300 mtu=None hellotime=2 maxage=20 vlanid=None priority=128 gw4=None state=present gw6=None master=None stp=True ifname=None type=bond miimon=None arp_ip_target=None downdelay=None mac=None ip6=None ip4=None autoconnect=None dns6=None dns4=None arp_interval=None flags=None mode=802.3ad updelay=None\r\n\r\n vars:\r\n nmcli_bond:\r\n - conn_name: cluster\r\n mode: 802.3ad\r\n mtu: 9000\r\n\r\n nmcli_bond_slave:\r\n - conn_name: eno1\r\n master: cluster\r\n - conn_name: eno2\r\n master: cluster\r\n\r\n tasks:\r\n - name: create bond\r\n nmcli:\r\n type: bond\r\n conn_name: '{{ item.conn_name }}'\r\n mode: '{{ item.mode }}'\r\n state: present\r\n with_items:\r\n - '{{ nmcli_bond }}'\r\n\r\n - name: add interfaces to bond\r\n nmcli:\r\n type: bond-slave\r\n conn_name: '{{ item.conn_name }}'\r\n ifname: '{{ item.ifname }}'\r\n master: '{{ item.master }}'\r\n state: present\r\n with_items:\r\n - '{{ nmcli_bond_slave }}'\r\n```\r\n\r\n\r\n\r\n##### EXPECTED RESULTS\r\n\r\n\r\nFirst test, but expect playbook to run without error.\r\n\r\n##### ACTUAL RESULTS\r\n\r\n\r\n\r\n```\r\nfailed: [ose-prod-node-07.example.com] (item={u'conn_name': u'cluster', u'mode': u'802.3ad', u'mtu': 9000}) => {\r\n \"changed\": false,\r\n \"failed\": true,\r\n \"item\": {\r\n \"conn_name\": \"cluster\",\r\n \"mode\": \"802.3ad\",\r\n \"mtu\": 9000\r\n },\r\n \"module_stderr\": \"OpenSSH_7.4p1, LibreSSL 2.5.0\\r\\ndebug1: Reading configuration data /Users/dlbewley/.ssh/config\\r\\ndebug1: /Users/dlbewley/.ssh/config line 3: Applying options for *\\r\\ndebug1: Reading configuration data /etc/ssh/ssh_config\\r\\ndebug1: /etc/ssh/ssh_config line 51: Applying options for *\\r\\ndebug1: /etc/ssh/ssh_config line 56: Applying options for *\\r\\ndebug1: auto-mux: Trying existing master\\r\\ndebug2: fd 3 setting O_NONBLOCK\\r\\ndebug2: mux_client_hello_exchange: master version 4\\r\\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\\r\\ndebug3: mux_client_request_session: entering\\r\\ndebug3: mux_client_request_alive: entering\\r\\ndebug3: mux_client_request_alive: done pid = 10219\\r\\ndebug3: mux_client_request_session: session request sent\\r\\ndebug1: mux_client_request_session: master session id: 2\\r\\ndebug3: mux_client_read_packet: read header failed: Broken pipe\\r\\ndebug2: Received exit status from master 1\\r\\nShared connection to ose-prod-node-07.example.com closed.\\r\\n\",\r\n \"module_stdout\": \"/tmp/ansible_gKn2an/ansible_module_nmcli.py:493: 
PyGIWarning: NetworkManager was imported without specifying a version first. Use gi.require_version('NetworkManager', '1.0') before import to ensure that the right version gets loaded.\\r\\n from gi.repository import NetworkManager, NMClient\\r\\n/tmp/ansible_gKn2an/ansible_module_nmcli.py:493: PyGIWarning: NMClient was imported without specifying a version first. Use gi.require_version('NMClient', '1.0') before import to ensure that the right version gets loaded.\\r\\n from gi.repository import NetworkManager, NMClient\\r\\nTraceback (most recent call last):\\r\\n File \\\"/tmp/ansible_gKn2an/ansible_module_nmcli.py\\\", line 1190, in \\r\\n main()\\r\\n File \\\"/tmp/ansible_gKn2an/ansible_module_nmcli.py\\\", line 1134, in main\\r\\n nmcli=Nmcli(module)\\r\\n File \\\"/tmp/ansible_gKn2an/ansible_module_nmcli.py\\\", line 559, in __init__\\r\\n self.dns4=' '.join(module.params['dns4'])\\r\\nTypeError\\r\\n\",\r\n \"msg\": \"MODULE FAILURE\",\r\n \"rc\": 1\r\n}\r\n```\r\n", "code": null, "pr_html_url": "https://github.com/ansible/ansible/pull/30757", "commit_html_url": null, "file_loc": {"base_commit": "cc7a5228b02344658dac69c38ccb7d6580d2b4c6", "files": [{"path": "lib/ansible/modules/net_tools/nmcli.py", "status": "modified", "Loc": {"('Nmcli', '__init__', 549)": {"mod": [559]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["lib/ansible/modules/net_tools/nmcli.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "5f7d39fede4de8af98472bd009c63c3a86568e2d", "iss_html_url": "https://github.com/ultralytics/yolov5/issues/2840", "iss_label": "bug", "title": "wandb: Network error (ReadTimeout), entering retry loop. See wandb\\debug-internal.log for full traceback.", "body": "\r\n - **Current repo**: yolov5-5.0 release version\r\n - **Common dataset**: VisDrone.yaml\r\n - **Common environment**: Colab, Google Cloud, or Docker image. See https://github.com/ultralytics/yolov5#environments\r\n\r\n\r\n## \ud83d\udc1b Bug\r\nI am trying to use your repo to train YOLOv4's network, because the code of yolov4 (https://github.com/WongKinYiu/PyTorch_YOLOv4) is outdated, unmaintained, and has many bugs.\r\nWhen I train my own yolov4-tiny.yaml, this bug appears; I think it is because my network cannot connect to wandb's server? 
Until today I could train normally, but a few minutes ago I tried `python train.py` many times and still could not start training.\r\n\r\n## To Reproduce (REQUIRED)\r\n \r\n`python train.py `\r\n\r\nOutput:\r\n```\r\nYOLOv5 2021-4-15 torch 1.7.1 CUDA:0 (GRID V100D-32Q, 32638.0MB)\r\n\r\nNamespace(adam=False, artifact_alias='latest', batch_size=64, bbox_interval=-1, bucket='', cache_images=False, cfg='models/yolov4-tiny.yaml', data='datai/Visdrone.yaml', device='', entity=None, epochs=300, evolve=False, exist_ok=False, global_rank=-1, hyp='data/hyp.scratch.yaml', image_weights=False, img_size=[640, 640], label_smoothing=0.0, linear_lr=False, local_rank=-1, multi_scale=False, name='exp', noautoanchor=False, nosave=False, notest=False, project='runs/train', quad=False, rect=False, resume=False, save_dir='runs\\\\train\\\\exp8', save_period=-1, single_cls=False, sync_bn=False, total_batch_size=64, upload_dataset=False, weights='', workers=8, world_size=1)\r\ntensorboard: Start with 'tensorboard --logdir runs/train', view at http://localhost:6006/\r\nhyperparameters: lr0=0.01, lrf=0.2, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=0.05, cls=0.5, cls_pw=1.0, obj=1.0, obj_pw=1.0, iou_t=0.2, anchor_t=4.0, fl_gamma=0.0, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0\r\nwandb: Currently logged in as: zigar (use `wandb login --relogin` to force relogin)\r\nwandb: Network error (ReadTimeout), entering retry loop. See wandb\\debug-internal.log for full traceback.\r\n```\r\n\r\n\r\n## Expected behavior\r\nA clear and concise description of what you expected to happen.\r\n\r\n\r\n## Environment\r\nIf applicable, add screenshots to help explain your problem.\r\n\r\n - OS: [e.g. Windows 10]\r\n - GPU [e.g. 
GRID V100D-32Q, 32638.0MB]\r\n\r\n\r\n## Additional context\r\nAdd any other context about the problem here.\r\n", "code": null, "pr_html_url": "https://github.com/ultralytics/yolov5/pull/2882", "commit_html_url": null, "file_loc": {"base_commit": "5f7d39fede4de8af98472bd009c63c3a86568e2d", "files": [{"path": "data/argoverse_hd.yaml", "status": "modified", "Loc": {"(None, None, 3)": {"mod": [3]}}}, {"path": "data/coco.yaml", "status": "modified", "Loc": {"(None, None, 3)": {"mod": [3]}}}, {"path": "data/coco128.yaml", "status": "modified", "Loc": {"(None, None, 3)": {"mod": [3]}}}, {"path": "data/scripts/get_argoverse_hd.sh", "status": "modified", "Loc": {"(None, None, 5)": {"mod": [5]}}}, {"path": "data/scripts/get_coco.sh", "status": "modified", "Loc": {"(None, None, 5)": {"mod": [5]}}}, {"path": "data/scripts/get_voc.sh", "status": "modified", "Loc": {"(None, None, 41)": {"add": [41]}, "(None, None, 77)": {"add": [77]}, "(None, None, 120)": {"add": [120]}, "(None, None, 5)": {"mod": [5]}, "(None, None, 32)": {"mod": [32, 33]}, "(None, None, 35)": {"mod": [35, 36]}, "(None, None, 38)": {"mod": [38]}, "(None, None, 40)": {"mod": [40]}, "(None, None, 43)": {"mod": [43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54]}, "(None, None, 57)": {"mod": [57, 58, 59]}, "(None, None, 68)": {"mod": [68]}, "(None, None, 72)": {"mod": [72, 73]}, "(None, None, 76)": {"mod": [76]}, "(None, None, 79)": {"mod": [79, 80, 81, 82]}, "(None, None, 84)": {"mod": [84]}, "(None, None, 93)": {"mod": [93]}, "(None, None, 95)": {"mod": [95]}, "(None, None, 97)": {"mod": [97, 98, 99, 100, 102, 103, 104]}, "(None, None, 106)": {"mod": [106]}, "(None, None, 108)": {"mod": [108, 109, 111, 112, 113, 114, 116, 117, 118, 119]}, "(None, None, 123)": {"mod": [123, 124, 126, 127, 128, 129, 131, 132, 133, 134]}}}, {"path": "data/voc.yaml", "status": "modified", "Loc": {"(None, None, 3)": {"mod": [3]}}}, {"path": "utils/general.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [11, 175]}, "(None, 'check_dataset', 156)": {"add": [166], "mod": [164, 168, 169, 171]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["utils/general.py"], "doc": [], "test": [], "config": ["data/argoverse_hd.yaml", "data/voc.yaml", "data/coco.yaml", "data/coco128.yaml"], "asset": ["data/scripts/get_argoverse_hd.sh", "data/scripts/get_voc.sh", "data/scripts/get_coco.sh"]}}, {"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "cbd55da5d24becbe3b94afaaa4cdd1187a512c3f", "iss_html_url": "https://github.com/ultralytics/yolov5/issues/2824", "iss_label": "bug", "title": " Sizes of tensors must match ", "body": "Multi Threaded Inference is not working with Yolo5. It throws the following error,\r\n\r\n```\r\n File \"/home/zumbala/anaconda3/envs/environment/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 889, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/zumbala/yolov5/models/yolo.py\", line 113, in forward\r\n yi = self.forward_once(xi)[0] # forward\r\n File \"/home/zumbala/yolov5/models/yolo.py\", line 139, in forward_once\r\n x = m(x) # run\r\n File \"/home/zumbala/anaconda3/envs/environment/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 889, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/zumbala/yolov5/models/yolo.py\", line 54, in forward\r\n y[..., 0:2] = (y[..., 0:2] * 2. 
- 0.5 + self.grid[i]) * self.stride[i] # xy\r\nRuntimeError: The size of tensor a (68) must match the size of tensor b (56) at non-singleton dimension 3\r\nException in thread Thread-112:\r\nTraceback (most recent call last):\r\n File \"/home/zumbala/anaconda3/envs/environment/lib/python3.8/threading.py\", line 932, in _bootstrap_inner\r\n self.run()\r\n File \"/home/zumbala/anaconda3/envs/environment/lib/python3.8/threading.py\", line 870, in run\r\n self._target(*self._args, **self._kwargs)\r\n```\r\n\r\nI saw a similar bug in another issue, and I am using the latest version of this repo. Still the problem persists. How can I fix it?\r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/ultralytics/yolov5/commit/cbd55da5d24becbe3b94afaaa4cdd1187a512c3f", "file_loc": {"base_commit": "cbd55da5d24becbe3b94afaaa4cdd1187a512c3f", "files": [{"path": "models/yolo.py", "status": "modified", "Loc": {"('Detect', 'forward', 38)": {"mod": [52]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["models/yolo.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "d9b64c27c24db2001535bb480959aca015159510", "iss_html_url": "https://github.com/ultralytics/yolov5/issues/119", "iss_label": "question\nStale", "title": "The yolov5m model grew from 42M to 84M; what modification was made?", "body": "When I trained yolov5m on 6.16, the size of the trained model was 42M\r\n\r\nBut today (6.18), when I trained with the latest code, the model size was 84M\r\n\r\nMay I ask what modification was made?", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/ultralytics/yolov5/commit/d9b64c27c24db2001535bb480959aca015159510", "file_loc": {"base_commit": "d9b64c27c24db2001535bb480959aca015159510", "files": [{"path": "train.py", "status": "modified", "Loc": {"(None, 'train', 60)": {"mod": [335]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["train.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "bfd51f62f8e0a114cb94c269e83ff135e31d8bdb", "iss_html_url": "https://github.com/ultralytics/yolov5/issues/187", "iss_label": "bug", "title": "can't test with my finetune weights", "body": "I trained a model on my custom data and got the weights (**last.pt** and **best.pt**).\r\nI run:\r\n`python test.py --img 640 --batch 16 --data ./data/patrol.yaml --weights weights/last.pt --device 4`\r\n`python test.py --img 640 --batch 16 --data ./data/patrol.yaml --weights weights/best.pt --device 4`\r\nboth raise the error:\r\n**Traceback (most recent call last):\r\n File \"test.py\", line 277, in \r\n opt.verbose)\r\n File \"test.py\", line 86, in test\r\n names = model.names if hasattr(model, 'names') else model.module.names\r\n File \"/home/anaconda3/envs/yolov5/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 594, in __getattr__\r\n type(self).__name__, name))\r\nAttributeError: 'Model' object has no attribute 'module'**\r\n\r\nHowever, 
I can run with the default weights **yolov5s.pt**\r\n`python test.py --img 640 --batch 16 --data ./data/patrol.yaml --device 4`\r\n\r\npytorch = 1.5", "code": null, "pr_html_url": "https://github.com/ultralytics/yolov5/pull/245", "commit_html_url": null, "file_loc": {"base_commit": "bfd51f62f8e0a114cb94c269e83ff135e31d8bdb", "files": [{"path": "train.py", "status": "modified", "Loc": {"(None, 'train', 62)": {"add": [135, 136, 174], "mod": [82, 291]}, "(None, None, None)": {"mod": [375]}}}, {"path": "utils/torch_utils.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [56]}, "(None, 'model_info', 101)": {"mod": [114, 115]}, "('ModelEMA', 'update', 184)": {"mod": [188]}, "('ModelEMA', 'update_attr', 198)": {"mod": [199, 200, 201, 202]}}}, {"path": "utils/utils.py", "status": "modified", "Loc": {"(None, 'check_img_size', 48)": {"mod": [50]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["train.py", "utils/utils.py", "utils/torch_utils.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "5425557efe30863267f805851f918124191e0be0", "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/227", "iss_label": "", "title": "Short text samples", "body": "It would be awesome to be able to use this to help train a hot word detector. In addition to recording myself saying the hotword, I could create an even larger dataset by adding outputs of this model that used my voice as the reference.\r\n\r\nThe problem with that, however, is that this model seems to only work well on sentences of medium length (+- 20 words according to demo_cli.py). Is there anything I can do to make short text samples (e.g. 
2 words) sound better?", "code": null, "pr_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/pull/472", "commit_html_url": null, "file_loc": {"base_commit": "5425557efe30863267f805851f918124191e0be0", "files": [{"path": "README.md", "status": "modified", "Loc": {"(None, None, 18)": {"mod": [18]}, "(None, None, 23)": {"mod": [23, 24]}, "(None, None, 65)": {"mod": [65, 66, 68, 70]}}}, {"path": "demo_cli.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [13, 43, 162], "mod": [24, 25, 26, 30, 31, 32, 70, 76]}}}, {"path": "demo_toolbox.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [5, 32], "mod": [23, 24, 25]}}}, {"path": "encoder/audio.py", "status": "modified", "Loc": {"(None, 'preprocess_wav', 19)": {"mod": [20, 43, 44]}}}, {"path": "requirements.txt", "status": "modified", "Loc": {"(None, None, 16)": {"add": [16]}, "(None, None, 1)": {"mod": [1]}}}, {"path": "requirements_gpu.txt", "status": "removed", "Loc": {}}, {"path": "synthesizer/LICENSE.txt", "status": "modified", "Loc": {"(None, None, 3)": {"add": [3]}, "(None, None, 4)": {"add": [4]}}}, {"path": "synthesizer/audio.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [4]}}}, {"path": "synthesizer/feeder.py", "status": "removed", "Loc": {}}, {"path": "synthesizer/hparams.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [348], "mod": [1, 3, 5, 6, 7, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 20, 21, 22, 23, 24, 25, 26, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 39, 40, 41, 42, 44, 45, 46, 47, 48, 49, 50, 51, 52, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 105, 106, 107, 108, 109, 110, 111, 113, 114, 115, 116, 117, 119, 121, 122, 123, 124, 125, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 143, 144, 145, 146, 147, 149, 150, 151, 152, 153, 154, 155, 157, 158, 159, 160, 161, 162, 164, 165, 166, 167, 168, 169, 170, 172, 174, 175, 176, 177, 178, 180, 181, 182, 183, 184, 185, 186, 187, 189, 190, 191, 192, 193, 194, 196, 197, 198, 199, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 231, 232, 233, 234, 235, 237, 238, 239, 240, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 255, 256, 257, 258, 259, 260, 261, 262, 264, 265, 266, 267, 269, 270, 271, 272, 273, 274, 275, 276, 278, 279, 280, 281, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 308, 309, 310, 311, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 338, 339, 342, 343, 344, 345, 347]}, "(None, 'hparams_debug_string', 350)": {"mod": [351, 352, 353]}}}, {"path": "synthesizer/inference.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [6], "mod": [1, 2, 3, 4, 5, 9, 11]}, "('Synthesizer', '__init__', 19)": {"add": [33], "mod": [21, 22, 24, 25, 26, 27, 28, 29, 30, 31, 32, 35, 36, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 49, 50, 51, 52, 53, 54, 55, 56, 57, 59]}, "('Synthesizer', 'griffin_lim', 149)": {"add": [154]}, "('Synthesizer', None, 15)": {"mod": [19, 106, 107, 108, 109, 110, 111, 113, 114, 116, 117, 118, 119, 121]}, "('Synthesizer', 'is_loaded', 61)": {"mod": [63]}, "('Synthesizer', 'load', 67)": {"mod": [69, 70, 71, 72, 73, 74, 75]}, "('Synthesizer', 
'synthesize_spectrograms', 77)": {"mod": [91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 104]}}}, {"path": "synthesizer/infolog.py", "status": "removed", "Loc": {}}, {"path": "synthesizer/models/__init__.py", "status": "removed", "Loc": {}}, {"path": "synthesizer/models/architecture_wrappers.py", "status": "removed", "Loc": {}}, {"path": "synthesizer/models/attention.py", "status": "removed", "Loc": {}}, {"path": "synthesizer/models/custom_decoder.py", "status": "removed", "Loc": {}}, {"path": "synthesizer/models/helpers.py", "status": "removed", "Loc": {}}, {"path": "synthesizer/models/modules.py", "status": "removed", "Loc": {}}, {"path": "synthesizer/models/tacotron.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [11], "mod": [1, 2, 3, 4, 5, 6, 7, 8, 9]}, "(None, 'split_func', 14)": {"mod": [14, 15, 16, 17, 18, 19, 20, 21, 24, 25, 26, 28, 29, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 65, 66, 67, 68, 69, 70, 71, 73, 74, 75, 76, 77, 79, 81, 82, 84, 86, 87, 88, 89, 90, 91, 93, 94, 95, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 108, 109, 110, 111, 113, 114, 115, 116, 117, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 130, 131, 132, 134, 135, 136, 137, 139, 140, 141, 142, 143, 145, 147, 148, 151, 153, 154, 155, 156, 157, 158, 160, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 190, 191, 192, 193, 194, 195, 196, 198, 199, 200, 201, 202, 203, 205, 206, 207, 209, 210, 212, 213, 214, 215, 216, 217, 218, 220, 221, 222, 223, 225, 226, 228, 229, 230, 232, 233, 234, 235, 237, 238, 240, 241, 242, 243, 244, 245, 246, 247, 249, 250, 252, 253, 254, 256, 257, 259, 260, 261, 263, 264, 265, 266, 267, 268, 269, 270, 271, 273, 274, 275, 277, 278, 279, 280, 281, 282, 283, 284, 286, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 307, 308, 309, 312, 313, 314, 316, 317, 318, 319, 320, 321, 323, 324, 325, 326, 327, 328, 330, 331, 333, 334, 335, 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 369, 370, 371, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 385, 386, 387, 388, 389, 390, 391, 392, 394, 395, 396, 397, 398, 399, 400, 402, 403, 404, 405, 406, 407, 409, 410, 412, 413, 414, 415, 416, 417, 418, 420, 421, 422, 423, 424, 425, 427, 428, 429, 430, 431, 432, 433, 435, 436, 437, 439, 441, 442, 443, 444, 445, 446, 447, 448, 449, 451, 452, 454, 455, 456, 457, 458, 459, 460, 461, 462, 464, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 479, 480, 481, 483, 484, 485, 486, 487, 488, 489, 491, 492, 493, 494, 495, 497, 498, 499, 501, 502, 504, 505, 507, 508, 509, 510, 512, 513, 514, 515, 516, 517, 518, 520, 521]}}}, {"path": "synthesizer/preprocess.py", "status": "modified", "Loc": {"(None, 'process_utterance', 185)": {"add": [204]}}}, {"path": "synthesizer/synthesize.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [82], "mod": [1, 3, 4, 6, 7]}, "(None, 'run_eval', 10)": {"mod": [10, 11, 12, 14, 15, 16, 17, 18, 20, 21, 23, 24, 25, 27, 28, 29, 30, 31, 32, 34, 35, 36, 37]}, "(None, 'run_synthesis', 39)": {"mod": [40, 41, 42, 43, 45, 46, 47, 48, 50, 51, 52, 53, 54, 55, 57, 58, 59, 60, 61, 62, 64, 65, 66, 67, 69, 70, 71, 72, 73, 74, 75, 77, 78, 80, 81]}}}, {"path": "synthesizer/tacotron2.py", "status": "removed", "Loc": {}}, {"path": 
"synthesizer/train.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0, 79, 83], "mod": [3, 4, 5, 6, 7, 9, 10, 12, 14, 16, 19, 20, 21, 22, 24, 25, 26, 27, 28, 29, 31, 32, 35, 36, 37, 38, 39, 40, 41, 43, 44, 45, 46, 47, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78]}, "(None, 'model_train_mode', 85)": {"mod": [85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 130, 131, 133, 134, 135, 136, 138, 139, 141, 142, 143, 144, 146, 147, 148, 149]}, "(None, 'train', 110)": {"mod": [151, 152, 153, 154, 155, 156, 157, 159, 161, 167, 169, 171, 172, 173, 174, 176, 177, 178, 179, 181, 183, 184, 185, 186, 187, 189, 190, 191, 192, 194, 195, 196, 198, 199, 201, 202, 204, 205, 207, 208, 210, 212, 213, 214, 215, 216, 218, 219, 220, 222, 223, 224, 226, 227, 228, 230, 231, 232, 233, 234, 235, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 260, 261, 262, 263, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 283, 284, 285, 286, 288, 289, 290, 291, 292, 293, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 313, 314, 315, 316, 317, 318, 319, 320, 322, 323, 324, 325, 327, 328, 329, 330, 332, 333, 334, 335, 336, 337, 338, 339, 341, 342, 343, 344, 346, 347, 348, 349, 350, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 370, 371, 372, 374, 375, 376, 377, 378, 379, 381, 382, 383, 385, 386, 387, 388, 391, 392]}}}, {"path": "synthesizer/utils/__init__.py", "status": "modified", "Loc": {"('ValueWindow', None, 1)": {"add": [0]}}}, {"path": "synthesizer_train.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [2, 4, 6, 9, 10, 11, 12, 13, 14, 15, 16, 21, 22, 23, 24, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 53, 55]}}}, {"path": "toolbox/__init__.py", "status": "modified", "Loc": {"('Toolbox', 'init_encoder', 325)": {"add": [333]}, "('Toolbox', None, 42)": {"mod": [43]}, "('Toolbox', '__init__', 43)": {"mod": [54]}, "('Toolbox', 'synthesize', 207)": {"mod": [211, 212, 213, 214, 215, 216, 217, 221, 224, 228]}, "('Toolbox', 'vocode', 237)": {"mod": [243]}}}, {"path": "toolbox/ui.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [41]}, "('UI', None, 53)": {"mod": [331]}, "('UI', 'populate_models', 338)": {"mod": [347, 348, 349, 350, 351, 352, 353]}}}, {"path": "vocoder_preprocess.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [32, 40], "mod": [20]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["synthesizer/models/modules.py", "synthesizer/models/tacotron.py", "synthesizer/train.py", "synthesizer/models/attention.py", "synthesizer_train.py", "demo_cli.py", "toolbox/__init__.py", "demo_toolbox.py", "synthesizer/models/architecture_wrappers.py", "synthesizer/audio.py", "synthesizer/preprocess.py", "synthesizer/tacotron2.py", "synthesizer/hparams.py", "synthesizer/utils/__init__.py", "synthesizer/synthesize.py", "toolbox/ui.py", "encoder/audio.py", "synthesizer/feeder.py", "synthesizer/models/helpers.py", "synthesizer/models/__init__.py", "synthesizer/inference.py", "vocoder_preprocess.py", 
"synthesizer/models/custom_decoder.py", "synthesizer/infolog.py"], "doc": ["synthesizer/LICENSE.txt", "README.md"], "test": [], "config": ["requirements_gpu.txt", "requirements.txt"], "asset": []}}, {"organization": "AUTOMATIC1111", "repo_name": "stable-diffusion-webui", "base_commit": "f108782e30369dedfc66f22d21c2b72c77941de7", "iss_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/5050", "iss_label": "bug", "title": "[Bug]: img2img sampler is not changing", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues and checked the recent builds/commits\n\n### What happened?\n\nI'm trying to choose another sampler, but it is not working.\r\n\r\nI tried checking the p value, and found sampler_name = None\r\nThere seems to be a code missing to assign the variable sampler_name in the img2img\r\n\r\ntxt2img seems working fine, though.\n\n### Steps to reproduce the problem\n\nChange the sampler and see the results. They are all the same.\n\n### What should have happened?\n\nDifferent samplers should produce different results.\n\n### Commit where the problem happens\n\n828438b\n\n### What platforms do you use to access UI ?\n\nWindows\n\n### What browsers do you use to access the UI ?\n\nMozilla Firefox\n\n### Command Line Arguments\n\n_No response_\n\n### Additional information, context and logs\n\n_No response_", "code": null, "pr_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/4910", "commit_html_url": null, "file_loc": {"base_commit": "f108782e30369dedfc66f22d21c2b72c77941de7", "files": [{"path": "scripts/xy_grid.py", "status": "modified", "Loc": {"(None, 'confirm_samplers', 71)": {"add": [74]}, "('Script', 'process_axis', 276)": {"add": [279]}}}, {"path": "img2img.py", "Loc": {}}, {"path": "Line 102: sampler_index=sd_samplers.samplers_for_img2img[sampler_index].name", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["img2img.py", "scripts/xy_grid.py"], "doc": [], "test": [], "config": [], "asset": ["Line 102: sampler_index=sd_samplers.samplers_for_img2img[sampler_index].name"]}}, {"organization": "AUTOMATIC1111", "repo_name": "stable-diffusion-webui", "base_commit": "d9499f4301018ebd2977685d098381aa4111d2ae", "iss_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/13724", "iss_label": "enhancement", "title": "[Feature Request]: Sort items by date by default", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues and checked the recent builds/commits\n\n### What would your feature do ?\n\nI hope to use time sorting by default when opening additional interfaces, so that I can immediately try the new model I just downloaded.\n\n### Proposed workflow\n\n1. Go to .... \r\n2. Press ....\r\n3. 
...\r\n\n\n### Additional information\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/d9499f4301018ebd2977685d098381aa4111d2ae", "file_loc": {"base_commit": "d9499f4301018ebd2977685d098381aa4111d2ae", "files": [{"path": "javascript/extraNetworks.js", "status": "modified", "Loc": {"(None, 'setupExtraNetworksForTab', 18)": {"add": [51, 54, 98, 99], "mod": [30, 56, 57, 58, 59, 65, 91, 92, 93, 94]}, "(None, None, None)": {"add": [115]}, "(None, 'applyExtraNetworkSort', 116)": {"add": [116]}}}, {"path": "modules/shared_options.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [236]}}}, {"path": "modules/ui_extra_networks.py", "status": "modified", "Loc": {"(None, 'create_ui', 357)": {"add": [397], "mod": [384, 385]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["modules/shared_options.py", "modules/ui_extra_networks.py", "javascript/extraNetworks.js"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "AUTOMATIC1111", "repo_name": "stable-diffusion-webui", "base_commit": "22bcc7be428c94e9408f589966c2040187245d81", "iss_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/9102", "iss_label": "bug-report", "title": "[Bug]: Model Dropdown Select on Firefox is obscured by svelte pre-loader", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues and checked the recent builds/commits\n\n### What happened?\n\nSeems the latest pull request has added pre-loaders and I noticed that the model dropdown is constantly loading and therefore obscuring the dropdown from the user. 
This is only happening in Firefox, Chrome for example is fine.\r\n\r\n```\r\n.wrap.default.svelte-gjihhp {\r\ninset: 0;\r\n}\r\n```\r\n\r\nI just set it to `display: none` to access it\n\n### Steps to reproduce the problem\n\nLoad up in Firefox and try to change the model\r\n\r\nSee attached screenshot\r\n![Screenshot_1](https://user-images.githubusercontent.com/3169931/228311409-22be3832-0348-424c-9298-08e76cb166a7.jpg)\r\n\r\n\n\n### What should have happened?\n\nHave access to the model dropdown select\n\n### Commit where the problem happens\n\nf1db987\n\n### What platforms do you use to access the UI ?\n\nWindows\n\n### What browsers do you use to access the UI ?\n\nMozilla Firefox\n\n### Command Line Arguments\n\n```Shell\nNo\n```\n\n\n### List of extensions\n\nNo\n\n### Console logs\n\n```Shell\nUncaught (in promise) TypeError: q[R[H]] is undefined\r\n ct http://127.0.0.1:7860/assets/Blocks.5efe22d4.js:76\r\n ct http://127.0.0.1:7860/assets/Blocks.5efe22d4.js:76\r\n ze http://127.0.0.1:7860/assets/Blocks.5efe22d4.js:76\n```\n\n\n### Additional information\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/joysun545/stable-diffusion-webui/commit/22bcc7be428c94e9408f589966c2040187245d81", "file_loc": {"base_commit": "22bcc7be428c94e9408f589966c2040187245d81", "files": [{"path": "modules/ui.py", "status": "modified", "Loc": {"(None, 'create_ui', 437)": {"add": [1630]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["modules/ui.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "huggingface", "repo_name": "transformers", "base_commit": "a8e3336a850e856188350a93e67d77c07c85b8af", "iss_html_url": "https://github.com/huggingface/transformers/issues/2008", "iss_label": "wontfix", "title": "Expand run_lm_finetuning.py to all models", "body": "## \ud83d\ude80 Feature\r\n\r\n[run_lm_finetuning.py](https://github.com/huggingface/transformers/blob/b0ee7c7df3d49a819c4d6cef977214bd91f5c075/examples/run_lm_finetuning.py) is a very useful tool for finetuning many models the library provided. But it doesn't cover all the models. Currently available models are:\r\n\r\n- gpt2\r\n- openai-gpt\r\n- bert\r\n- roberta\r\n- distilbert\r\n- camembert\r\n\r\nAnd not available ones:\r\n\r\n- ctrl\r\n- xlm\r\n- xlnet\r\n- transfo-xl\r\n- albert\r\n\r\n## Motivation\r\n\r\nMost important part of such a library is that it can be easily finetuned. 
`run_lm_finetuning.py` gives us that opportunity but why say no more :)\r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/huggingface/transformers/commit/3dcb748e31be8c7c9e4f62926c5c144c62d07218\n\nhttps://github.com/huggingface/transformers/commit/a8e3336a850e856188350a93e67d77c07c85b8af", "file_loc": {"base_commit": "a8e3336a850e856188350a93e67d77c07c85b8af", "files": [{"path": "examples/ner/run_ner.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [33], "mod": [41]}}}, {"path": "examples/ner/run_tf_ner.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [16, 17, 18, 19, 21, 22, 23, 24, 25, 37, 38, 39, 41, 42, 43, 44, 45, 52]}, "(None, 'main', 457)": {"mod": [512, 513, 523, 530, 565, 587, 614, 615]}}}, {"path": "examples/run_glue.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [32], "mod": [35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 676, 677, 683, 695]}, "(None, 'train', 69)": {"mod": [75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101]}, "(None, 'main', 386)": {"mod": [445, 625, 626, 632, 637]}}}, {"path": "examples/run_language_modeling.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [40], "mod": [43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 60, 61, 62, 789]}, "('TextDataset', '__init__', 68)": {"mod": [76, 77, 78, 79, 80, 81, 82, 83]}, "(None, 'main', 464)": {"mod": [696, 699, 701, 703, 706, 708, 712, 722, 730, 771, 772]}}}, {"path": "examples/run_squad.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [32], "mod": [35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 75, 76, 77, 78, 79, 80, 81, 845]}, "(None, 'train', 76)": {"mod": [83, 84, 85, 86, 87, 88, 89, 90, 91]}, "(None, 'main', 477)": {"mod": [516, 760, 761, 765, 770, 820, 821]}}}, {"path": "src/transformers/__init__.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [160, 319]}}}, {"path": "templates/adding_a_new_example_script/run_xxx.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [30], "mod": [33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 75, 76, 77, 78, 79, 80, 709]}, "(None, 'set_seed', 69)": {"mod": [71, 72, 73]}, "(None, 'main', 388)": {"mod": [421, 629, 630, 634, 639, 690, 691]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "", "info_type": "Code"}, "loctype": {"code": ["examples/run_squad.py", "templates/adding_a_new_example_script/run_xxx.py", "examples/run_glue.py", "src/transformers/__init__.py", "examples/ner/run_tf_ner.py", "examples/run_language_modeling.py", "examples/ner/run_ner.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "huggingface", "repo_name": "transformers", "base_commit": "88d7f96e33c3f3e541bcdd913f2ff1e50aa18c1b", "iss_html_url": "https://github.com/huggingface/transformers/issues/5212", "iss_label": "", "title": "BartConfig wrong decoder_start_token_id?", "body": "# \ud83d\udc1b Bug\r\n\r\n## Information\r\n\r\nModel I am using (Bert, XLNet ...): Bart\r\n\r\nLanguage I am using the model on (English, Chinese ...): English\r\n\r\n## To reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n```\r\nfrom transformers import BartConfig, BartTokenizer\r\nconfig = BartConfig.from_pretrained('facebook/bart-large')\r\ntokenizer = 
BartTokenizer.from_pretrained('facebook/bart-large')\r\nconfig.decoder_start_token_id\r\n>>> 2\r\ntokenizer.bos_token_id\r\n>>> 0 # != config.decoder_start_token_id\r\ntokenizer.eos_token_id\r\n>>> 2\r\n```\r\n\r\nThe documentation of the function `generate` is misleading:\r\n\r\n*decoder_start_token_id=None \u2013 (optional) int If an encoder-decoder model starts decoding with a different token than BOS. Defaults to None and is changed to BOS later.*\r\n\r\n\r\n## Expected behavior\r\n\r\nI expect that decoder_start_token_id = tokenizer.bos_token_id, but maybe the model is designed to start decoding with the EOS token.\r\n\r\n", "code": null, "pr_html_url": "https://github.com/huggingface/transformers/pull/5306", "commit_html_url": null, "file_loc": {"base_commit": "88d7f96e33c3f3e541bcdd913f2ff1e50aa18c1b", "files": [{"path": "src/transformers/modeling_tf_utils.py", "status": "modified", "Loc": {"('TFPreTrainedModel', 'generate', 551)": {"mod": [645, 646]}}}, {"path": "src/transformers/modeling_utils.py", "status": "modified", "Loc": {"('PreTrainedModel', 'generate', 871)": {"mod": [965, 966]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code\ndocstring inside the code"}, "loctype": {"code": ["src/transformers/modeling_utils.py", "src/transformers/modeling_tf_utils.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "ba9e3fe6267ea81a3d546b7aa5bcf0122f365e51", "iss_html_url": "https://github.com/geekan/MetaGPT/issues/1153", "iss_label": "bug", "title": "mermaid: Generating ..seq_flow/20240402032019.svg.. Error: Failed to launch the browser process!", "body": "**Bug description**\r\nGenerating /app/metagpt/workspace/jqdlhwap/resources/seq_flow/20240402032019.svg.. Error: Failed to launch the browser process!\r\n\r\n**Bug solved method**\r\n\r\n\r\n**Environment information**\r\n\"docker compose up -d\" after clone\r\nalready ran \"npm install -g @mermaid-js/mermaid-cli\":\r\n\r\nroot@84a2e77496b0:/app/metagpt# mmdc -h\r\nUsage: mmdc [options]\r\n\r\nOptions:\r\n -V, --version output the version number\r\n -t, --theme [theme] Theme of the chart (choices: \"default\", \"forest\", \"dark\", \"neutral\", default: \"default\")\r\n -w, --width [width] Width of the page (default: 800)\r\n -H, --height [height] Height of the page (default: 600)\r\n -i, --input Input mermaid file. Files ending in .md will be treated as Markdown and all charts (e.g. ```mermaid (...)```) will be extracted and generated.\r\n Use `-` to read from stdin.\r\n -o, --output [output] Output file. It should be either md, svg, png or pdf. Optional. Default: input + \".svg\"\r\n -e, --outputFormat [format] Output format for the generated image. (choices: \"svg\", \"png\", \"pdf\", default: Loaded from the output file extension)\r\n -b, --backgroundColor [backgroundColor] Background color for pngs/svgs (not pdfs). Example: transparent, red, '#F0F0F0'. 
(default: \"white\")\r\n -c, --configFile [configFile] JSON configuration file for mermaid.\r\n -C, --cssFile [cssFile] CSS file for the page.\r\n -s, --scale [scale] Puppeteer scale factor (default: 1)\r\n -f, --pdfFit Scale PDF to fit chart\r\n -q, --quiet Suppress log output\r\n -p --puppeteerConfigFile [puppeteerConfigFile] JSON configuration file for puppeteer.\r\n -h, --help display help for command\r\n\r\n- LLM type and model name: zhipu-api / GLM-4\r\n- System version:\r\n- Python version:\r\n- MetaGPT version or branch: main\r\n\r\n`run in docker`\r\n\r\n- packages version:\r\n- installation method: \r\n\r\n**Screenshots or logs**\r\n2024-04-02 03:20:46.126 | INFO | metagpt.utils.mermaid:mermaid_to_file:48 - Generating /app/metagpt/workspace/jqdlhwap/resources/seq_flow/20240402032019.svg..\r\n2024-04-02 03:20:46.460 | WARNING | metagpt.utils.mermaid:mermaid_to_file:74 - \r\nError: Failed to launch the browser process!\r\n[0402/032046.449080:ERROR:zygote_host_impl_linux.cc(100)] Running as root without --no-sandbox is not supported. See https://crbug.com/638180.\r\n\r\n\r\nTROUBLESHOOTING: https://pptr.dev/troubleshooting\r\n\r\n at Interface.onClose (file:///usr/lib/node_modules/@mermaid-js/mermaid-cli/node_modules/@puppeteer/browsers/lib/esm/launch.js:253:24)\r\n at Interface.emit (node:events:524:35)\r\n at Interface.close (node:internal/readline/interface:526:10)\r\n at Socket.onend (node:internal/readline/interface:252:10)\r\n at Socket.emit (node:events:524:35)\r\n at endReadableNT (node:internal/streams/readable:1378:12)\r\n at process.processTicksAndRejections (node:internal/process/task_queues:82:21)\r\n", "code": null, "pr_html_url": "https://github.com/FoundationAgents/MetaGPT/pull/1155", "commit_html_url": null, "file_loc": {"base_commit": "ba9e3fe6267ea81a3d546b7aa5bcf0122f365e51", "files": [{"path": "metagpt/configs/mermaid_config.py", "status": "modified", "Loc": {"('MermaidConfig', None, 13)": {"mod": [16]}}}, {"path": "metagpt/utils/mermaid.py", "status": "modified", "Loc": {"(None, 'mermaid_to_file', 17)": {"add": [83]}}}, {"path": "config/config2.yaml", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["metagpt/configs/mermaid_config.py", "metagpt/utils/mermaid.py"], "doc": [], "test": [], "config": ["config/config2.yaml"], "asset": []}}, {"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "c779f6977ecbdba075d7c81519edd5eaf6de2d0e", "iss_html_url": "https://github.com/geekan/MetaGPT/issues/1197", "iss_label": "", "title": "Support for Cohere API ", "body": "Please add support for Cohere API with all the built in RAG and tool use functionalities. Essentially, RAG and tool use in Cohere are just chat parameters definable by users. 
More information can be found at https://docs.cohere.com/reference/chat .", "code": null, "pr_html_url": "https://github.com/FoundationAgents/MetaGPT/pull/1193", "commit_html_url": null, "file_loc": {"base_commit": "c779f6977ecbdba075d7c81519edd5eaf6de2d0e", "files": [{"path": "metagpt/const.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [53]}}}, {"path": "metagpt/rag/factories/ranker.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [13]}, "('RankerFactory', '__init__', 20)": {"add": [24]}, "('RankerFactory', None, 17)": {"add": [47]}}}, {"path": "metagpt/rag/schema.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [121]}}}, {"path": "setup.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [42]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["metagpt/rag/schema.py", "metagpt/rag/factories/ranker.py", "setup.py", "metagpt/const.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "b5bb4d7e63e72c3d118e449a3763c1ff4411f159", "iss_html_url": "https://github.com/geekan/MetaGPT/issues/1547", "iss_label": "", "title": "ERROR | metagpt.utils.mmdc_playwright:mermaid_to_file:118 - Page.goto: net::ERR_FILE_NOT_FOUND at file:///root/anaconda3/envs/metagpt/lib/python3.11/site-packages/metagpt/utils/index.html", "body": "**Bug description**\r\nI installed playwright and its chromium following the guide and configured mermaid. But it seems that mermaid didn't work properly. \r\n```\r\nERROR | metagpt.utils.mmdc_playwright:mermaid_to_file:118 - Page.goto: net::ERR_FILE_NOT_FOUND at file:///root/anaconda3/envs/metagpt/lib/python3.11/site-packages/metagpt/utils/index.html\r\nCall log:\r\nnavigating to \"file:///root/anaconda3/envs/metagpt/lib/python3.11/site-packages/metagpt/utils/index.html\", waiting until \"load\"\r\n```\r\n\r\n**Bug solved method**\r\nI think it may have to do with the configuration, but the documentation for this part is not very clear. I don't know how to fill in the \"path\" if I'm using playwright, or whether it is right to keep those settings at their defaults.\r\n\r\nThis is my configuration:\r\n```yaml\r\nllm:\r\n api_type: 'openai' # or azure / ollama / groq etc. Check LLMType for more options\r\n api_key: '[MY_API_KEY]' # MY_API_KEY\r\n model: 'yi-lightning' # or gpt-3.5-turbo\r\n base_url: 'https://api.lingyiwanwu.com/v1' # or any forward url.\r\n # proxy: 'YOUR_LLM_PROXY_IF_NEEDED' # Optional. If you want to use a proxy, set it here.\r\n # pricing_plan: 'YOUR_PRICING_PLAN' # Optional. 
If your pricing plan uses a different name than the `model`.\r\n\r\nmermaid:\r\n engine: 'playwright' # nodejs/ink/playwright/pyppeteer\r\n # path: 'mmdc' # such as './node_modules/.bin/mmdc'\r\n # puppeteer_config: './config/puppeteer-config' # only for nodejs\r\n # pyppeteer_path: '/usr/bin/google-chrome-stable' # only for pyppeteer\r\n```\r\n\r\nThis is your example:\r\n```yaml\r\nmermaid:\r\n engine: 'nodejs' # nodejs/ink/playwright/pyppeteer\r\n path: 'mmdc' # such as './node_modules/.bin/mmdc'\r\n puppeteer_config: './config/puppeteer-config' # only for nodejs\r\n pyppeteer_path: '/usr/bin/google-chrome-stable' # only for pyppeteer\r\n```\r\n\r\n**Environment information**\r\n- LLM type and model name:\r\n- System version: ubuntu 22.04\r\n- Python version: 3.11\r\n- MetaGPT version or branch: 0.8\r\n\r\n\r\n\r\n- packages version:\r\n- installation method: pip\r\n\r\n**Screenshots or logs**\r\n2024-10-29 16:22:16.900 | WARNING | metagpt.utils.cost_manager:update_cost:49 - Model yi-lightning not found in TOKEN_COSTS.\r\n2024-10-29 16:22:16.903 | INFO | metagpt.utils.git_repository:rename_root:219 - Rename directory /root/workspace/cited_papaer_eval/workspace/20241029162137 to /root/workspace/cited_papaer_eval/workspace/scholar_cited_evaluation\r\n2024-10-29 16:22:16.904 | INFO | metagpt.utils.file_repository:save:57 - save to: /root/workspace/cited_papaer_eval/workspace/scholar_cited_evaluation/docs/prd/20241029162216.json\r\n2024-10-29 16:22:17.435 | ERROR | metagpt.utils.mmdc_playwright:mermaid_to_file:118 - Page.goto: net::ERR_FILE_NOT_FOUND at file:///root/anaconda3/envs/metagpt/lib/python3.11/site-packages/metagpt/utils/index.html\r\nCall log:\r\nnavigating to \"file:///root/anaconda3/envs/metagpt/lib/python3.11/site-packages/metagpt/utils/index.html\", waiting until \"load\"\r\n", "code": null, "pr_html_url": "https://github.com/FoundationAgents/MetaGPT/pull/1564", "commit_html_url": null, "file_loc": {"base_commit": "b5bb4d7e63e72c3d118e449a3763c1ff4411f159", "files": [{"path": "metagpt/utils/mmdc_playwright.py", "status": "modified", "Loc": {"(None, 'mermaid_to_file', 17)": {"mod": [84, 85, 86, 87]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["metagpt/utils/mmdc_playwright.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "langflow-ai", "repo_name": "langflow", "base_commit": "7276f699fc85c611f1c3f83a19a368da9841e3a4", "iss_html_url": "https://github.com/langflow-ai/langflow/issues/2892", "iss_label": "question", "title": "Flow as tool Usage", "body": "### Discussed in https://github.com/langflow-ai/langflow/discussions/2891\r\n\r\n
Originally posted by **pavansandeep2910** July 23, 2024\r\nI cannot understand how to load files so that they show up in the flow-as-tool component. Can anyone point me to how flow as tool is used?
      ", "code": null, "pr_html_url": "https://github.com/langflow-ai/langflow/pull/3093", "commit_html_url": null, "file_loc": {"base_commit": "7276f699fc85c611f1c3f83a19a368da9841e3a4", "files": [{"path": "src/backend/base/langflow/components/prototypes/SubFlow.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [2], "mod": [4, 6, 9, 10, 11, 12]}, "('SubFlowComponent', 'build', 98)": {"add": [103], "mod": [102, 105, 108, 112, 114, 115]}, "('SubFlowComponent', None, 15)": {"mod": [15, 17, 18, 19, 22, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 98, 99, 100]}, "('SubFlowComponent', 'get_flow_names', 24)": {"mod": [25, 26]}, "('SubFlowComponent', 'update_build_config', 35)": {"mod": [36, 39, 41]}, "('SubFlowComponent', 'add_inputs_to_build_config', 58)": {"mod": [71]}}}, {"path": "src/backend/base/langflow/initial_setup/setup.py", "status": "modified", "Loc": {"(None, 'load_starter_projects', 342)": {"mod": [346]}}}, {"path": "src/backend/base/langflow/inputs/inputs.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [231]}, "('SecretStrInput', None, 215)": {"mod": [229]}}}, {"path": "src/frontend/src/CustomNodes/GenericNode/components/handleRenderComponent/index.tsx", "status": "modified", "Loc": {"(None, None, 1)": {"mod": [1]}, "(None, None, 4)": {"mod": [4]}, "(None, None, 40)": {"mod": [40, 41, 42, 43]}, "(None, None, 103)": {"mod": [103]}}}, {"path": "src/frontend/src/CustomNodes/GenericNode/components/parameterComponent/index.tsx", "status": "modified", "Loc": {"(None, None, 352)": {"mod": [352, 353]}, "(None, None, 365)": {"mod": [365, 366]}}}, {"path": "src/frontend/src/CustomNodes/hooks/use-handle-new-value.tsx", "status": "modified", "Loc": {"(None, None, 66)": {"mod": [66, 67]}}}, {"path": "src/frontend/src/components/parameterRenderComponent/component/refreshParameterComponent/index.tsx", "status": "modified", "Loc": {"(None, None, 32)": {"mod": [32]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/backend/base/langflow/components/prototypes/SubFlow.py", "src/backend/base/langflow/initial_setup/setup.py", "src/backend/base/langflow/inputs/inputs.py", "src/frontend/src/components/parameterRenderComponent/component/refreshParameterComponent/index.tsx", "src/frontend/src/CustomNodes/GenericNode/components/parameterComponent/index.tsx", "src/frontend/src/CustomNodes/GenericNode/components/handleRenderComponent/index.tsx", "src/frontend/src/CustomNodes/hooks/use-handle-new-value.tsx"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "langflow-ai", "repo_name": "langflow", "base_commit": "b3b5290598f5970fd6a1a092fe4d11211008a04d", "iss_html_url": "https://github.com/langflow-ai/langflow/issues/5378", "iss_label": "bug", "title": "URL component lost imported urls in tool mode when refresh UI", "body": "### Bug Description\n\nI have this issue when i build multi agent flow like import json below\r\n- URL Component Before refresh UI:\r\n![image](https://github.com/user-attachments/assets/f361501d-3d11-4e4a-b560-203faf8a4935)\r\n![image](https://github.com/user-attachments/assets/4ee6c93f-914b-4efa-b0eb-5a13f5c404fa)\r\n\r\nAfter refresh 
UI:\r\n![image](https://github.com/user-attachments/assets/2224a76e-1dd7-42d4-8531-a5e86719653c)\r\n![image](https://github.com/user-attachments/assets/3753754f-3a16-4bfc-a82e-1286c7504fe3)\r\n\r\n\r\nI have checked the URL Component in normal mode, and this bug does not appear.\n\n### Reproduction\n\n1. Create flow\r\n2. Add URL component, change to Tool mode\r\n3. Input Urls\r\n4. Save flow\r\n5. Reload UI (Press F5)\n\n### Expected behavior\n\nThe URL component should keep the URLs that were entered, while in Tool mode\n\n### Who can help?\n\n_No response_\n\n### Operating System\n\nWindows 11/Docker\n\n### Langflow Version\n\nv1.1.1\n\n### Python Version\n\nNone\n\n### Screenshot\n\n_No response_\n\n### Flow File\n\n[Simple Agent (bug).json](https://github.com/user-attachments/files/18206463/Simple.Agent.bug.json)\r\n", "code": null, "pr_html_url": "https://github.com/langflow-ai/langflow/pull/5316", "commit_html_url": null, "file_loc": {"base_commit": "b3b5290598f5970fd6a1a092fe4d11211008a04d", "files": [{"path": "src/frontend/src/components/core/parameterRenderComponent/components/toggleShadComponent/index.tsx", "status": "modified", "Loc": {"(None, None, 39)": {"mod": [39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54]}}}, {"path": "src/frontend/src/pages/FlowPage/components/nodeToolbarComponent/index.tsx", "status": "modified", "Loc": {"(None, None, 5)": {"add": [5]}, "(None, None, 44)": {"add": [44]}, "(None, None, 79)": {"add": [79]}, "(None, None, 98)": {"add": [98]}, "(None, None, 149)": {"add": [149]}, "(None, None, 11)": {"mod": [11]}, "(None, None, 23)": {"mod": [23, 24, 25]}, "(None, None, 145)": {"mod": [145, 146]}, "(None, None, 158)": {"mod": [158, 159]}, "(None, None, 174)": {"mod": [174]}, "(None, None, 291)": {"mod": [291]}, "(None, None, 403)": {"mod": [403, 404, 405, 407, 408, 409]}, "(None, None, 421)": {"mod": [421, 422, 423, 424, 425, 426]}, "(None, None, 471)": {"mod": [471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/frontend/src/pages/FlowPage/components/nodeToolbarComponent/index.tsx", "src/frontend/src/components/core/parameterRenderComponent/components/toggleShadComponent/index.tsx"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "langflow-ai", "repo_name": "langflow", "base_commit": "a4dc5381b2cf31c507cc32f9027f76bf00d61ccc", "iss_html_url": "https://github.com/langflow-ai/langflow/issues/3536", "iss_label": "bug", "title": "Prompt component does not pass variables correctly", "body": "### Bug Description\n\nI have a prompt with two variables. 
\r\n{image_url} Value of image is https://oaidalleapiprodscus.blob.core.windows.net/private/org-N0bzhj17kGCdvkPCuGgpUuhO/user-RwGFQKsTTGw8hOO1ResFbpwQ/img-GH9Zkmf4RabhW48gVeZslvyb.png?st=2024-08-23T19%3A00%3A31Z&se=2024-08-23T21%3A00%3A31Z&sp=r&sv=2024-08-04&sr=b&rscd=inline&rsct=image/png&skoid=d505667d-d6c1-4a0a-bac7-5c84a87759f8&sktid=a48cca56-e6da-484e-a814-9c849652bcb3&skt=2024-08-23T18%3A20%3A28Z&ske=2024-08-24T18%3A20%3A28Z&sks=b&skv=2024-08-04&sig=HXgLmH4hDFvbuV/rPEX//ifP0PLXUUlCYP9ZDpFEAHM%3D\r\n{post_id} Value of Post ID is 11620\r\n\r\nPrompt is \r\nTask: Upload the image from the provided URL to WordPress and set it as the featured image for the specified post.\r\nImage URL: {image_url}\r\nPost ID: {post_id}\r\n\r\nThis worked on 1.0.15, but after I upgraded to 1.0.16 the second variable is not passed on, it repeats the image one.\r\n\r\nComponent output\r\nTask: Upload the image from the provided URL to WordPress and set it as the featured image for the specified post.\r\nImage URL: https://oaidalleapiprodscus.blob.core.windows.net/private/org-N0bzhj17kGCdvkPCuGgpUuhO/user-RwGFQKsTTGw8hOO1ResFbpwQ/img-GH9Zkmf4RabhW48gVeZslvyb.png?st=2024-08-23T19%3A00%3A31Z&se=2024-08-23T21%3A00%3A31Z&sp=r&sv=2024-08-04&sr=b&rscd=inline&rsct=image/png&skoid=d505667d-d6c1-4a0a-bac7-5c84a87759f8&sktid=a48cca56-e6da-484e-a814-9c849652bcb3&skt=2024-08-23T18%3A20%3A28Z&ske=2024-08-24T18%3A20%3A28Z&sks=b&skv=2024-08-04&sig=HXgLmH4hDFvbuV/rPEX//ifP0PLXUUlCYP9ZDpFEAHM%3D\r\nPost ID: https://oaidalleapiprodscus.blob.core.windows.net/private/org-N0bzhj17kGCdvkPCuGgpUuhO/user-RwGFQKsTTGw8hOO1ResFbpwQ/img-GH9Zkmf4RabhW48gVeZslvyb.png?st=2024-08-23T19%3A00%3A31Z&se=2024-08-23T21%3A00%3A31Z&sp=r&sv=2024-08-04&sr=b&rscd=inline&rsct=image/png&skoid=d505667d-d6c1-4a0a-bac7-5c84a87759f8&sktid=a48cca56-e6da-484e-a814-9c849652bcb3&skt=2024-08-23T18%3A20%3A28Z&ske=2024-08-24T18%3A20%3A28Z&sks=b&skv=2024-08-04&sig=HXgLmH4hDFvbuV/rPEX//ifP0PLXUUlCYP9ZDpFEAHM%3D\r\n\n\n### Reproduction\n\nCreate prompt with multiple variables.\r\nAdd value in each.\r\nTry to build prompt and you will find that only first variable is being pulled.\r\n![image](https://github.com/user-attachments/assets/cc478377-6c3d-4a71-ba13-ab6e1773413e)\r\n\r\n\r\n![image](https://github.com/user-attachments/assets/fad07891-e9d1-474f-b7a9-efb3084c4caf)\r\n\n\n### Expected behavior\n\nmultiple variables should work\n\n### Who can help?\n\n_No response_\n\n### Operating System\n\nRender\n\n### Langflow Version\n\n1.0.16\n\n### Python Version\n\n3.12\n\n### Screenshot\n\n_No response_\n\n### Flow File\n\n_No response_", "code": null, "pr_html_url": "https://github.com/langflow-ai/langflow/pull/3698", "commit_html_url": null, "file_loc": {"base_commit": "a4dc5381b2cf31c507cc32f9027f76bf00d61ccc", "files": [{"path": "src/backend/base/langflow/custom/custom_component/component.py", "status": "modified", "Loc": {"('Component', '__init__', 42)": {"add": [71], "mod": [45, 56]}, "('Component', '_reset_all_output_values', 88)": {"mod": [89, 90]}, "('Component', '_build_state_model', 92)": {"mod": [98]}, "('Component', '__deepcopy__', 112)": {"mod": [119]}, "('Component', 'list_outputs', 166)": {"mod": [170]}, "('Component', 'get_output', 210)": {"mod": [223, 224]}, "('Component', 'set_output_value', 234)": {"mod": [235, 236]}, "('Component', 'map_outputs', 240)": {"mod": [253, 257]}, "('Component', 'map_inputs', 259)": {"mod": [270]}, "('Component', '_set_output_types', 290)": {"mod": [291]}, "('Component', 'get_output_by_method', 296)": {"mod": [299]}, 
"('Component', '_find_matching_output_method', 327)": {"mod": [329]}, "('Component', '__getattr__', 440)": {"mod": [445, 446]}, "('Component', '_set_outputs', 577)": {"mod": [581]}, "('Component', '_build_results', 619)": {"mod": [623]}}}, {"path": "src/backend/base/langflow/graph/graph/base.py", "status": "modified", "Loc": {"('Graph', '__apply_config', 318)": {"mod": [322]}}}, {"path": "src/backend/base/langflow/template/field/base.py", "status": "modified", "Loc": {"('Output', None, 161)": {"add": [180]}}}, {"path": "src/backend/tests/unit/test_custom_component.py", "status": "modified", "Loc": {"(None, 'test_custom_component_get_function_entrypoint_args_no_args', 397)": {"add": [402]}}}, {"path": "src/backend/tests/unit/test_database.py", "status": "modified", "Loc": {"(None, 'test_read_flow', 76)": {"mod": [76, 79]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": "Code"}, "loctype": {"code": ["src/backend/base/langflow/template/field/base.py", "src/backend/base/langflow/graph/graph/base.py", "src/backend/base/langflow/custom/custom_component/component.py"], "doc": [], "test": ["src/backend/tests/unit/test_custom_component.py", "src/backend/tests/unit/test_database.py"], "config": [], "asset": []}}, {"organization": "langflow-ai", "repo_name": "langflow", "base_commit": "20ceb42504087c712aaee41bfc17a870ae0109d4", "iss_html_url": "https://github.com/langflow-ai/langflow/issues/2039", "iss_label": "enhancement", "title": "[Feat] Firecrawl\ud83d\udd25 Integration ", "body": "Hi all,\r\n\r\nOpening this issue after chatting with Rodrigo. It would be awesome to add a [Firecrawl](https://firecrawl.dev) web loader / tool for people to use it to scrape, crawl and extract LLM ready data from the web.\r\n\r\nWould love to hear your thoughts on how we can best integrate it.\r\n\r\n\r\n\r\n", "code": null, "pr_html_url": "https://github.com/langflow-ai/langflow/pull/2359", "commit_html_url": null, "file_loc": {"base_commit": "20ceb42504087c712aaee41bfc17a870ae0109d4", "files": [{"path": "poetry.lock", "status": "modified", "Loc": {"(None, None, 2114)": {"add": [2114]}, "(None, None, 2436)": {"add": [2436]}, "(None, None, 2596)": {"add": [2596]}, "(None, None, 2600)": {"add": [2600]}, "(None, None, 4618)": {"add": [4618]}, "(None, None, 6082)": {"add": [6082]}, "(None, None, 2435)": {"mod": [2435]}, "(None, None, 2595)": {"mod": [2595]}, "(None, None, 2599)": {"mod": [2599]}, "(None, None, 4617)": {"mod": [4617]}, "(None, None, 6085)": {"mod": [6085]}, "(None, None, 10555)": {"mod": [10555]}}}, {"path": "pyproject.toml", "status": "modified", "Loc": {"(None, None, 94)": {"add": [94]}}}, {"path": "src/backend/base/poetry.lock", "status": "modified", "Loc": {"(None, None, 741)": {"add": [741]}, "(None, None, 3238)": {"mod": [3238]}}}, {"path": "src/backend/base/pyproject.toml", "status": "modified", "Loc": {"(None, None, 66)": {"add": [66]}}}, {"path": "src/frontend/src/utils/styleUtils.ts", "status": "modified", "Loc": {"(None, None, None)": {"add": [173, 365]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/frontend/src/utils/styleUtils.ts"], "doc": [], "test": [], "config": ["pyproject.toml", "poetry.lock", "src/backend/base/poetry.lock", "src/backend/base/pyproject.toml"], "asset": []}}, {"organization": "langflow-ai", "repo_name": 
"langflow", "base_commit": "9d8009f2f5c5e3fd3bf47760debc787deb454b1a", "iss_html_url": "https://github.com/langflow-ai/langflow/issues/3004", "iss_label": "bug", "title": "Problem with Global Variables Setting Page", "body": "### Bug Description\r\n\r\nWhen entering http://127.0.0.1:7860/settings/global-variables\r\n\r\nI am getting error in JS console.\r\n```\r\n`DialogContent` requires a `DialogTitle` for the component to be accessible for screen reader users.\r\n\r\nIf you want to hide the `DialogTitle`, you can wrap it with our VisuallyHidden component.\r\n\r\nFor more information, see https://radix-ui.com/primitives/docs/components/dialog [index-BMduUo-e.js:3231:165](http://127.0.0.1:7860/assets/index-BMduUo-e.js)\r\n TitleWarning http://127.0.0.1:7860/assets/index-BMduUo-e.js:3231\r\n Qj http://127.0.0.1:7860/assets/index-BMduUo-e.js:984\r\n Hk http://127.0.0.1:7860/assets/index-BMduUo-e.js:984\r\n Wk http://127.0.0.1:7860/assets/index-BMduUo-e.js:984\r\n Pk http://127.0.0.1:7860/assets/index-BMduUo-e.js:984\r\n Ek http://127.0.0.1:7860/assets/index-BMduUo-e.js:984\r\n jg http://127.0.0.1:7860/assets/index-BMduUo-e.js:982\r\n Wk http://127.0.0.1:7860/assets/index-BMduUo-e.js:984\r\n Pk http://127.0.0.1:7860/assets/index-BMduUo-e.js:984\r\n Gk http://127.0.0.1:7860/assets/index-BMduUo-e.js:984\r\n Ct http://127.0.0.1:7860/assets/index-BMduUo-e.js:969\r\n xt http://127.0.0.1:7860/assets/index-BMduUo-e.js:969\r\n```\r\n\r\nI am also getting error when adding again earlier deleted variable:\r\n\"Sorry, we found an unexpected error!\r\nPlease report errors with detailed tracebacks on the [GitHub Issues](https://github.com/langflow-ai/langflow/issues) page.\r\nThank you!\"\r\n\r\nSo as asked, I am kindly reporting it.\r\n\r\nAlso, There is no feature to edit fields.\r\n\r\n\r\n### Reproduction\r\n\r\nJS error:\r\n1. Just enter the page\r\n\r\nSaving Error:\r\n1. saving new variable\r\n2. deleting this new variable\r\n3. 
Adding it again with the same name\r\n\r\n\r\n### Expected behavior\r\n\r\nIt should work without errors\r\n\r\n### Who can help?\r\n\r\n_No response_\r\n\r\n### Operating System\r\n\r\nWindows 11 pro\r\n\r\n### Langflow Version\r\n\r\n1.13\r\n\r\n### Python Version\r\n\r\nNone\r\n\r\n### Screenshot\r\n\r\n_No response_\r\n\r\n### Flow File\r\n\r\n_No response_", "code": null, "pr_html_url": "https://github.com/langflow-ai/langflow/pull/3284", "commit_html_url": null, "file_loc": {"base_commit": "9d8009f2f5c5e3fd3bf47760debc787deb454b1a", "files": [{"path": "src/backend/base/langflow/api/v1/variable.py", "status": "modified", "Loc": {"(None, 'create_variable', 17)": {"add": [22], "mod": [26, 27, 28, 29, 30, 31, 32, 33, 34, 36, 37, 39, 40, 42, 44, 46, 47, 48, 49, 50, 51, 52]}, "(None, 'read_variables', 60)": {"add": [63], "mod": [67, 68]}, "(None, 'update_variable', 74)": {"add": [79], "mod": [83, 84, 85, 86, 87, 89, 90, 91, 92, 93, 94, 95]}, "(None, 'delete_variable', 101)": {"add": [105], "mod": [109, 110, 111, 112, 113, 114, 115]}, "(None, None, None)": {"mod": [1, 5, 7, 10, 11]}}}, {"path": "src/backend/base/langflow/services/variable/base.py", "status": "modified", "Loc": {"('VariableService', None, 11)": {"add": [84], "mod": [72]}}}, {"path": "src/backend/base/langflow/services/variable/kubernetes.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [4], "mod": [12, 13]}, "('KubernetesSecretService', None, 16)": {"add": [123], "mod": [113, 114, 115, 116, 117, 118]}}}, {"path": "src/backend/base/langflow/services/variable/service.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [1], "mod": [11]}, "('DatabaseVariableService', 'get_variable', 68)": {"add": [78], "mod": [86, 87]}, "('DatabaseVariableService', None, 22)": {"add": [90, 111]}, "('DatabaseVariableService', 'list_variables', 91)": {"mod": [92, 93]}, "('DatabaseVariableService', 'delete_variable', 112)": {"mod": [118, 123]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": "Code"}, "loctype": {"code": ["src/backend/base/langflow/api/v1/variable.py", "src/backend/base/langflow/services/variable/base.py", "src/backend/base/langflow/services/variable/service.py", "src/backend/base/langflow/services/variable/kubernetes.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "Significant-Gravitas", "repo_name": "AutoGPT", "base_commit": "1e98d349877305a8ee9c84901282b5731675578f", "iss_html_url": "https://github.com/Significant-Gravitas/AutoGPT/issues/803", "iss_label": "", "title": "debug mode not debug", "body": "### Duplicates\n\n- [X] I have searched the existing issues\n\n### Steps to reproduce \ud83d\udd79\n\nThe attribute in chat.py is not named correctly\n\n### Current behavior \ud83d\ude2f\n\nBecause of the wrong name, the attribute can't be called on the object\n\n### Expected behavior \ud83e\udd14\n\nTo be able to use the attribute in a call without error\n\n### Your prompt \ud83d\udcdd\n\n```yaml\r\n# Paste your prompt here\r\n
'parse_arguments', 266)": {"add": [268]}, "(None, None, None)": {"add": [294]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["scripts/json_parser.py", "scripts/main.py", "scripts/chat.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "fastapi", "repo_name": "fastapi", "base_commit": "0f152b4e97a102c0105f26d76d6e1bba3b12fc2a", "iss_html_url": "https://github.com/fastapi/fastapi/issues/894", "iss_label": "bug\nanswered\nreviewed", "title": "RecursionError from response model in 0.47.1", "body": "### Describe the bug\r\n\r\nFastAPI 0.47.1 will not be able to start due to a `RecursionError` when there is a circular reference among models. The issue seems to originate from https://github.com/tiangolo/fastapi/pull/889. This works fine in 0.46.0.\r\n\r\n### Environment\r\n\r\n- OS: Windows\r\n- FastAPI Version: 0.47.1\r\n- Python version: 3.7.0\r\n\r\n### To Reproduce\r\n\r\n```Python\r\nfrom typing import Optional\r\n\r\nfrom fastapi import FastAPI\r\nfrom pydantic import BaseModel, Field\r\n\r\n\r\nclass Group(BaseModel):\r\n representative: Optional['Person'] = Field(None)\r\n\r\n\r\nclass Person(BaseModel):\r\n group: Optional[Group] = Field(None)\r\n\r\n\r\nGroup.update_forward_refs()\r\n\r\n\r\napp = FastAPI()\r\n\r\n\r\n@app.get('/group/{group_id}', response_model=Group)\r\ndef get_group(group_id):\r\n return []\r\n```\r\n\r\n### Expected behavior\r\n\r\nNo exception\r\n\r\n\r\n### Actual output\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"test.py\", line 21, in \r\n @app.get('/group/{group_id}', response_model=Group)\r\n File \"D:\\virtualenvs\\test\\lib\\site-packages\\fastapi\\routing.py\", line 494, in decorator\r\n callbacks=callbacks,\r\n File \"D:\\virtualenvs\\test\\lib\\site-packages\\fastapi\\routing.py\", line 438, in add_api_route\r\n callbacks=callbacks,\r\n File \"D:\\virtualenvs\\test\\lib\\site-packages\\fastapi\\routing.py\", line 275, in __init__\r\n ] = create_cloned_field(self.response_field)\r\n File \"D:\\virtualenvs\\test\\lib\\site-packages\\fastapi\\utils.py\", line 100, in create_cloned_field\r\n use_type.__fields__[f.name] = create_cloned_field(f)\r\n File \"D:\\virtualenvs\\test\\lib\\site-packages\\fastapi\\utils.py\", line 100, in create_cloned_field\r\n use_type.__fields__[f.name] = create_cloned_field(f)\r\n File \"D:\\virtualenvs\\test\\lib\\site-packages\\fastapi\\utils.py\", line 100, in create_cloned_field\r\n use_type.__fields__[f.name] = create_cloned_field(f)\r\n [Previous line repeated 981 more times]\r\n File \"D:\\virtualenvs\\test\\lib\\site-packages\\fastapi\\utils.py\", line 97, in create_cloned_field\r\n original_type.__name__, __config__=original_type.__config__\r\n File \"D:\\virtualenvs\\test\\lib\\site-packages\\pydantic\\main.py\", line 773, in create_model\r\n return type(model_name, (__base__,), namespace)\r\n File \"D:\\virtualenvs\\test\\lib\\site-packages\\pydantic\\main.py\", line 152, in __new__\r\n if issubclass(base, BaseModel) and base != BaseModel:\r\n File \"D:\\virtualenvs\\test\\lib\\abc.py\", line 143, in __subclasscheck__\r\n return _abc_subclasscheck(cls, subclass)\r\nRecursionError: maximum recursion depth exceeded in comparison\r\n```", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/fastapi/fastapi/commit/0f152b4e97a102c0105f26d76d6e1bba3b12fc2a", "file_loc": {"base_commit": 
"0f152b4e97a102c0105f26d76d6e1bba3b12fc2a", "files": [{"path": "fastapi/utils.py", "status": "modified", "Loc": {"(None, 'create_cloned_field', 134)": {"mod": [134, 141, 142, 143, 160, 163]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["fastapi/utils.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "05676caf70db7f3715cf6a3b4680f15efd45977a", "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/6202", "iss_label": "bug\nstale", "title": "Llama-cpp-python 0.2.81 'already loaded' fails to load models", "body": "### Describe the bug\r\n\r\nAttempting to load a model after running the update-wizard-macos today (the version from a day or two ago worked fine) fails with the stack trace log included below. \r\n\r\nNotably, the error message references [this new issue in llama-cpp-python](https://github.com/abetlen/llama-cpp-python/issues/1575).\r\n\r\n### Is there an existing issue for this?\r\n\r\n- [X] I have searched the existing issues\r\n\r\n### Reproduction\r\n\r\n- Run the update wizard to update software.\r\n- Attempt to load a gguf model using the GPU and llama.cpp\r\n- Observe that loading fails.\r\n\r\n### Screenshot\r\n\r\n![Screenshot 2024-07-04 at 11 10 47\u202fPM](https://github.com/oobabooga/text-generation-webui/assets/9359101/72148b05-8a43-4d2e-9fd5-7ba6fa57b317)\r\n\r\n\r\n### Logs\r\n\r\n```shell\r\nTraceback (most recent call last):\r\n File \"/Users/patrickleiser/Documents/Programming/AI/text-generation-webui/modules/ui_model_menu.py\", line 246, in load_model_wrapper\r\n shared.model, shared.tokenizer = load_model(selected_model, loader)\r\n File \"/Users/patrickleiser/Documents/Programming/AI/text-generation-webui/modules/models.py\", line 94, in load_model\r\n output = load_func_map[loader](model_name)\r\n File \"/Users/patrickleiser/Documents/Programming/AI/text-generation-webui/modules/models.py\", line 275, in llamacpp_loader\r\n model, tokenizer = LlamaCppModel.from_pretrained(model_file)\r\n File \"/Users/patrickleiser/Documents/Programming/AI/text-generation-webui/modules/llamacpp_model.py\", line 39, in from_pretrained\r\n LlamaCache = llama_cpp_lib().LlamaCache\r\n File \"/Users/patrickleiser/Documents/Programming/AI/text-generation-webui/modules/llama_cpp_python_hijack.py\", line 38, in llama_cpp_lib\r\n raise Exception(f\"Cannot import 'llama_cpp_cuda' because '{imported_module}' is already imported. See issue #1575 in llama-cpp-python. Please restart the server before attempting to use a different version of llama-cpp-python.\")\r\nException: Cannot import 'llama_cpp_cuda' because 'llama_cpp' is already imported. See issue #1575 in llama-cpp-python. Please restart the server before attempting to use a different version of llama-cpp-python.\r\n```\r\n\r\n\r\n### System Info\r\n\r\n```shell\r\nM1 Max Macbook Pro, MacOS 14.5\r\n```\r\n\r\nEdit: Just realized that Ooobabooga was the one that created that issue on the llama-cpp-python project, so I guess this error was already known. 
Sorry if this issue is therefore somewhat redundant\r\n\r\n", "code": null, "pr_html_url": "https://github.com/oobabooga/text-generation-webui/pull/6227", "commit_html_url": null, "file_loc": {"base_commit": "05676caf70db7f3715cf6a3b4680f15efd45977a", "files": [{"path": "modules/llama_cpp_python_hijack.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [1]}, "(None, 'llama_cpp_lib', 13)": {"mod": [16, 17, 18, 19, 20, 21, 22, 24, 26, 28, 29, 30, 31, 32, 33, 34, 35, 37, 38, 39, 40, 41, 42, 43, 44, 46, 47, 48, 49, 50, 51, 52, 53, 55, 56, 57, 58, 59, 60, 61, 62, 64, 65, 67]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["modules/llama_cpp_python_hijack.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "3c076c3c8096fa83440d701ba4d7d49606aaf61f", "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/2958", "iss_label": "bug", "title": "Latest version of Pillow breaks current implementation in html_generator.py.", "body": "### Describe the bug\n\nPillow 10.0.0 removed `ANTIALIAS` from `PIL.Image`. Current implementation requires 9.5.0, however the requirements.txt currently allows for 10.0.0 to be installed.\n\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Reproduction\n\nAdd new characters with png images and load the webui in chat mode.\n\n### Screenshot\n\n_No response_\n\n### Logs\n\n```shell\nTraceback (most recent call last):\r\n File \"G:\\F\\Projects\\AI\\text-generation-webui\\one-click-installers-test\\installer_files\\env\\lib\\site-packages\\gradio\\routes.py\", line 427, in run_predict\r\n output = await app.get_blocks().process_api(\r\n File \"G:\\F\\Projects\\AI\\text-generation-webui\\one-click-installers-test\\installer_files\\env\\lib\\site-packages\\gradio\\blocks.py\", line 1323, in process_api\r\n result = await self.call_function(\r\n File \"G:\\F\\Projects\\AI\\text-generation-webui\\one-click-installers-test\\installer_files\\env\\lib\\site-packages\\gradio\\blocks.py\", line 1051, in call_function\r\n prediction = await anyio.to_thread.run_sync(\r\n File \"G:\\F\\Projects\\AI\\text-generation-webui\\one-click-installers-test\\installer_files\\env\\lib\\site-packages\\anyio\\to_thread.py\", line 33, in run_sync\r\n return await get_asynclib().run_sync_in_worker_thread(\r\n File \"G:\\F\\Projects\\AI\\text-generation-webui\\one-click-installers-test\\installer_files\\env\\lib\\site-packages\\anyio\\_backends\\_asyncio.py\", line 877, in run_sync_in_worker_thread\r\n return await future\r\n File \"G:\\F\\Projects\\AI\\text-generation-webui\\one-click-installers-test\\installer_files\\env\\lib\\site-packages\\anyio\\_backends\\_asyncio.py\", line 807, in run\r\n result = context.run(func, *args)\r\n File \"G:\\F\\Projects\\AI\\text-generation-webui\\one-click-installers-test\\text-generation-webui\\extensions\\gallery\\script.py\", line 71, in generate_html\r\n image_html = f''\r\n File \"G:\\F\\Projects\\AI\\text-generation-webui\\one-click-installers-test\\text-generation-webui\\modules\\html_generator.py\", line 144, in get_image_cache\r\n img = make_thumbnail(Image.open(path))\r\n File \"G:\\F\\Projects\\AI\\text-generation-webui\\one-click-installers-test\\text-generation-webui\\modules\\html_generator.py\", line 132, in make_thumbnail\r\n image = 
ImageOps.fit(image, (350, 470), Image.ANTIALIAS)\r\nAttributeError: module 'PIL.Image' has no attribute 'ANTIALIAS'\n```\n\n\n### System Info\n\n```shell\nWindows 10\r\nGPU: GTX 1080ti\n```\n", "code": null, "pr_html_url": "https://github.com/oobabooga/text-generation-webui/pull/2954", "commit_html_url": null, "file_loc": {"base_commit": "3c076c3c8096fa83440d701ba4d7d49606aaf61f", "files": [{"path": "modules/html_generator.py", "status": "modified", "Loc": {"(None, 'make_thumbnail', 129)": {"mod": [132]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Other\n\u4f9d\u8d56\u58f0\u660e "}, "loctype": {"code": ["modules/html_generator.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "cf2c4e740b1d06e145c1992515d9b34e18affc95", "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/801", "iss_label": "enhancement", "title": "How can we disable Gradio analytics?", "body": "**Description**\r\n\r\nHow where / can this be implemented?\r\n\r\nhttps://github.com/brkirch/stable-diffusion-webui/commit/a534959cbcabc95af50fbbe4654f8c0ee1cdd41c\r\n\r\n`os.environ['GRADIO_ANALYTICS_ENABLED'] = 'False'`\r\n\r\n**Additional Context**\r\n\r\nFor [stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui)\r\n\r\n[preserve privacy by disabling gradio analytics globally\r\n#8658 ](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/8658)\r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/oobabooga/text-generation-webui/commit/cf2c4e740b1d06e145c1992515d9b34e18affc95", "file_loc": {"base_commit": "cf2c4e740b1d06e145c1992515d9b34e18affc95", "files": [{"path": "server.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["server.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "a3085dba073fe8bdcfb5120729a84560f5d024c3", "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/1000", "iss_label": "bug", "title": "Bot spams random numbers or does not load", "body": "### Describe the bug\n\nHello,\r\nI installed oobabooga with the one click installer and I can not load the facebook_opt-2.7b (I copied the console into the log).\r\nI also installed the gpt4x alpaca model with the automatic installer(download-model.bat). If I chat with it, it just spams random 2 and 4 (I took a screenshot and pasted it down below). If I manually install the gpt4x model (with the help of this tutorial: https://youtu.be/nVC9D9fRyNU?t=162 ), I get the same output as the Facebook model in the log. \n\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Reproduction\n\n1. Automatic installer\r\n2. let download-model.bat download anon8231489123/gpt4-x-alpaca-13b-native-4bit-128g or follow this tutorial:\r\n3. let download-model.bat download one model from the list\r\n4. 
start-webui.bat has the following arguments: python server.py --auto-devices --chat --wbits 4 --groupsize 128\n\n### Screenshot\n\n![image](https://user-images.githubusercontent.com/125409728/230895729-d2f12173-81a5-4de6-9296-71845906ab01.png)\r\n\n\n### Logs\n\n```shell\nStarting the web UI...\r\n\r\n===================================BUG REPORT===================================\r\nWelcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues\r\n================================================================================\r\nCUDA SETUP: CUDA runtime path found: C:\\Users\\Caner\\Documents\\oobabooga-windows\\installer_files\\env\\bin\\cudart64_110.dll\r\nCUDA SETUP: Highest compute capability among GPUs detected: 8.6\r\nCUDA SETUP: Detected CUDA version 117\r\nCUDA SETUP: Loading binary C:\\Users\\Caner\\Documents\\oobabooga-windows\\installer_files\\env\\lib\\site-packages\\bitsandbytes\\libbitsandbytes_cuda117.dll...\r\nThe following models are available:\r\n\r\n1. anon8231489123_gpt4-x-alpaca-13b-native-4bit-128g\r\n2. facebook_opt-2.7b\r\n\r\nWhich one do you want to load? 1-2\r\n\r\n2\r\n\r\nLoading facebook_opt-2.7b...\r\nCould not find the quantized model in .pt or .safetensors format, exiting...\r\nDr\u00fccken Sie eine beliebige Taste . . .\n```\n\n\n### System Info\n\n```shell\nWindows 10 Version 22H2, Amd Ryzen 5800x, Palit Gamingpro Rtx 3080.\n```\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/oobabooga/text-generation-webui/commit/a3085dba073fe8bdcfb5120729a84560f5d024c3", "file_loc": {"base_commit": "a3085dba073fe8bdcfb5120729a84560f5d024c3", "files": [{"path": "modules/models.py", "status": "modified", "Loc": {"(None, 'load_model', 40)": {"add": [176]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["modules/models.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "c4aa1a42b156b9c5ddcfb060cc497b2fba55430f", "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/1203", "iss_label": "bug", "title": "When I click \"Click Me\" on (Character screen) it no longer generates a log of the current chat/instruction. ", "body": "### Describe the bug\n\nWhen I click \"Click Me\" on (Character screen) it no longer generates a log of the current chat/instruction. \r\n\r\nThis was probably caused when I did an update of the whole program. Reinstalling does not fix it. I also ran some antivirus scans and made registry changes. I can generate new notepads and I reinstalled Notepad on my PC. \r\n\r\nI've of course restarted my PC, and I've tried Firefox and Opera as the host browser. This is a new problem just from today. But I did a few things on my PC. \r\n\r\nNotepad is also missing from my right-click \"Create new\" list. However, it is in my folder's \"Create new\" list. The one you can click on in the top menu settings inside Explorer. The first thing, I guess, is to rule out whether others are having the issue or not. Next, it would be nice to get some utility to auto-save all logs or something as an option. \r\n\r\nThanks for any advice anyone. If it's a bug I will simply wait for a fix. It's possible I may have messed something up, but it may also be a design flaw of the program if it's dependent on just one specific thing to save the chat log. 
\r\n\r\nI have no logs or other info to share at this time. \n\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Reproduction\n\nFor me it's easy. Issue persists even if I restart PC, update or reinstall oobabooga. It worked last night. Today it does not work. \n\n### Screenshot\n\n_No response_\n\n### Logs\n\n```shell\nNot applicable.\n```\n\n\n### System Info\n\n```shell\n12700K, Nvidia 4080, Windows 10. Running locally on my PC not a colab etc. Like I said I tried firefox and opera. Issue seems persistent.\n```\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/oobabooga/text-generation-webui/commit/c4aa1a42b156b9c5ddcfb060cc497b2fba55430f", "file_loc": {"base_commit": "c4aa1a42b156b9c5ddcfb060cc497b2fba55430f", "files": [{"path": "server.py", "status": "modified", "Loc": {"(None, 'create_model_menus', 251)": {"mod": [324, 349]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["server.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "2b7ba9586fb80cfbc47c77ad7bbbb03f7d6bc0df", "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/2326", "iss_label": "bug", "title": "extensions/openai KeyError: 'assistant'", "body": "### Describe the bug\r\n\r\nStarting after [https://github.com/oobabooga/text-generation-webui/pull/2291]\r\n\r\nWhich I think it's a great improvement.\r\n\r\n### Is there an existing issue for this?\r\n\r\n- [X] I have searched the existing issues\r\n\r\n### Reproduction\r\n\r\nStart server with extension --openai --model openaccess-ai-collective_manticore-13b.\r\nStarting [DGdev91 Auto-GPT](https://github.com/DGdev91/Auto-GPT), runs 1 cycle, give 'y' for the second, the error appears.\r\n\r\n### Screenshot\r\n\r\n_No response_\r\n\r\n### Logs\r\n\r\n```shell\r\nopenaccess-ai-collectiveException occurred during processing of request from ('127.0.0.1', 42032)\r\nTraceback (most recent call last):\r\n File \"/home/mihai/miniconda3/envs/textgen/lib/python3.10/socketserver.py\", line 683, in process_request_thread\r\n self.finish_request(request, client_address)\r\n File \"/home/mihai/miniconda3/envs/textgen/lib/python3.10/socketserver.py\", line 360, in finish_request\r\n self.RequestHandlerClass(request, client_address, self)\r\n File \"/home/mihai/miniconda3/envs/textgen/lib/python3.10/socketserver.py\", line 747, in __init__\r\n self.handle()\r\n File \"/home/mihai/miniconda3/envs/textgen/lib/python3.10/http/server.py\", line 433, in handle\r\n self.handle_one_request()\r\n File \"/home/mihai/miniconda3/envs/textgen/lib/python3.10/http/server.py\", line 421, in handle_one_request\r\n method()\r\n File \"/home/mihai/text-generation-webui/extensions/openai/script.py\", line 310, in do_POST\r\n msg = role_formats[role].format(message=content)\r\nKeyError: 'assistant'\r\n----------------------------------------\r\n```\r\n\r\n\r\n### System Info\r\n\r\n```shell\r\nWin11 WSL2 Ubuntu 20.04\r\nPython 3.10\r\n```\r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/oobabooga/text-generation-webui/commit/2b7ba9586fb80cfbc47c77ad7bbbb03f7d6bc0df", "file_loc": {"base_commit": "2b7ba9586fb80cfbc47c77ad7bbbb03f7d6bc0df", "files": [{"path": "extensions/openai/script.py", "status": "modified", "Loc": {"('Handler', 'do_POST', 159)": 
{"mod": [262]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["extensions/openai/script.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "hacksider", "repo_name": "Deep-Live-Cam", "base_commit": "92a0994f01ec6ae7756951312a70e101fb33c7e5", "iss_html_url": "https://github.com/hacksider/Deep-Live-Cam/issues/597", "iss_label": "", "title": "The app starts transparent!", "body": "Seems like everything was ok with the install.\r\n\r\nWhen I run I get the error: / warning:\r\n\r\n```\r\n> python run.py --execution-provider cuda\r\nException in Tkinter callback\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\dulci\\.pyenv\\pyenv-win\\versions\\3.10.6\\lib\\tkinter\\__init__.py\", line 1921, in __call__\r\n return self.func(*args)\r\n File \"C:\\Users\\dulci\\.pyenv\\pyenv-win\\versions\\3.10.6\\lib\\tkinter\\__init__.py\", line 839, in callit\r\n func(*args)\r\n File \"C:\\Users\\dulci\\Documents\\Development\\Deep-Live-Cam\\.venv\\lib\\site-packages\\customtkinter\\windows\\widgets\\scaling\\scaling_tracker.py\", line \r\n186, in check_dpi_scaling\r\n window.block_update_dimensions_event()\r\n File \"C:\\Users\\dulci\\.pyenv\\pyenv-win\\versions\\3.10.6\\lib\\tkinter\\__init__.py\", line 2383, in __getattr__\r\n return getattr(self.tk, attr)\r\nAttributeError: '_tkinter.tkapp' object has no attribute 'block_update_dimensions_event'\r\n(.venv) PS C:\\Users\\dulci\\Documents\\Development\\Deep-Live-Cam>\r\n``` \r\n\r\nAnd the application starts so transparent (opacity super low) that I can barely see it and I can't use it, because it is almost invisible against the desktop background.\r\n\r\nCan anybody suggest how to fix it?\r\n\r\nI am working with venv python 3.10.6.\r\n", "code": null, "pr_html_url": "https://github.com/hacksider/Deep-Live-Cam/pull/632", "commit_html_url": null, "file_loc": {"base_commit": "92a0994f01ec6ae7756951312a70e101fb33c7e5", "files": [{"path": "README.md", "status": "modified", "Loc": {"(None, None, 2)": {"add": [2]}, "(None, None, 3)": {"add": [3]}, "(None, None, 37)": {"add": [37]}, "(None, None, 47)": {"add": [47]}, "(None, None, 68)": {"add": [68]}, "(None, None, 168)": {"add": [168]}, "(None, None, 365)": {"add": [365]}, "(None, None, 6)": {"mod": [6]}, "(None, None, 8)": {"mod": [8]}, "(None, None, 10)": {"mod": [10]}, "(None, None, 12)": {"mod": [12]}, "(None, None, 15)": {"mod": [15]}, "(None, None, 20)": {"mod": [20]}, "(None, None, 24)": {"mod": [24]}, "(None, None, 28)": {"mod": [28]}, "(None, None, 32)": {"mod": [32]}, "(None, None, 36)": {"mod": [36]}, "(None, None, 39)": {"mod": [39, 40, 41]}, "(None, None, 43)": {"mod": [43, 44, 46]}, "(None, None, 49)": {"mod": [49, 50, 51, 52, 53, 54, 55, 56, 57]}, "(None, None, 59)": {"mod": [59]}, "(None, None, 61)": {"mod": [61, 62]}, "(None, None, 64)": {"mod": [64]}, "(None, None, 66)": {"mod": [66, 67]}, "(None, None, 71)": {"mod": [71, 72]}, "(None, None, 75)": {"mod": [75]}, "(None, None, 77)": {"mod": [77]}, "(None, None, 82)": {"mod": [82]}, "(None, None, 84)": {"mod": [84, 85, 86]}, "(None, None, 91)": {"mod": [91, 92]}, "(None, None, 96)": {"mod": [96]}, "(None, None, 98)": {"mod": [98, 100]}, "(None, None, 105)": {"mod": [105, 106]}, "(None, None, 110)": {"mod": [110]}, "(None, None, 112)": {"mod": [112, 113]}, "(None, None, 118)": {"mod": [118, 119]}, "(None, None, 123)": {"mod": [123]}, "(None, None, 125)": {"mod": [125, 
126]}, "(None, None, 131)": {"mod": [131, 132]}, "(None, None, 136)": {"mod": [136]}, "(None, None, 138)": {"mod": [138, 139]}, "(None, None, 144)": {"mod": [144, 145]}, "(None, None, 150)": {"mod": [150, 151]}, "(None, None, 153)": {"mod": [153, 154]}, "(None, None, 156)": {"mod": [156]}, "(None, None, 158)": {"mod": [158, 159, 160, 161, 162]}, "(None, None, 164)": {"mod": [164]}, "(None, None, 166)": {"mod": [166, 167]}, "(None, None, 170)": {"mod": [170]}, "(None, None, 197)": {"mod": [197]}, "(None, None, 206)": {"mod": [206]}, "(None, None, 210)": {"mod": [210]}, "(None, None, 224)": {"mod": [224]}, "(None, None, 237)": {"mod": [237]}, "(None, None, 247)": {"mod": [247]}, "(None, None, 274)": {"mod": [274]}, "(None, None, 306)": {"mod": [306]}, "(None, None, 314)": {"mod": [314]}, "(None, None, 343)": {"mod": [343, 344]}, "(None, None, 346)": {"mod": [346, 347]}, "(None, None, 353)": {"mod": [353]}, "(None, None, 360)": {"mod": [360]}}}, {"path": "modules/ui.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [6, 548], "mod": [10, 13, 18, 19, 22, 23, 24, 25, 27, 28, 29, 30, 32, 33, 34, 35, 37, 38]}, "(None, 'analyze_target', 154)": {"add": [167], "mod": [155, 163, 166]}, "(None, 'create_source_target_popup', 176)": {"add": [182], "mod": [191, 192, 200, 201, 203, 204, 206, 207, 210, 211, 214, 215, 217, 218, 221, 222, 224]}, "(None, 'update_popup_source', 221)": {"add": [233], "mod": [228, 229, 238, 240, 241, 242, 243, 245, 246, 249, 250]}, "(None, 'select_source_path', 290)": {"add": [299, 302], "mod": [294]}, "(None, 'swap_faces_paths', 305)": {"add": [323, 326]}, "(None, 'select_target_path', 329)": {"add": [338, 343, 346], "mod": [333]}, "(None, 'update_preview', 435)": {"add": [445], "mod": [437, 438, 441, 443, 444, 450]}, "(None, 'init', 61)": {"mod": [61]}, "(None, 'create_root', 70)": {"mod": [70, 73, 74, 75, 77, 78, 79, 80, 81, 83, 84, 86, 87, 89, 90, 92, 93, 95, 96, 99, 100, 103, 104, 106, 107, 108, 109, 112, 113, 116, 117, 119, 121, 122, 124, 125, 126, 129, 130, 132, 133, 135, 136, 138, 139, 141, 142, 144, 145, 147, 148, 149, 150]}, "(None, 'on_submit_click', 184)": {"mod": [189]}, "(None, 'on_button_click', 194)": {"mod": [195, 197, 198]}, "(None, 'create_preview', 258)": {"mod": [263, 265, 268, 269, 271]}, "(None, 'select_output_path', 349)": {"mod": [353, 355]}, "(None, 'check_and_ignore_nsfw', 364)": {"mod": [365, 367, 370, 372, 375, 376, 378]}, "(None, 'fit_image_to_size', 381)": {"mod": [390]}, "(None, 'toggle_preview', 417)": {"mod": [418]}, "(None, 'init_preview', 425)": {"mod": [431]}, "(None, 'create_webcam_preview', 463)": {"mod": [466, 467, 468, 469, 471, 484, 487, 512]}, "(None, 'on_submit_click', 527)": {"mod": [533]}, "(None, 'create_source_target_popup_for_webcam', 519)": {"mod": [540, 543, 546]}, "(None, 'refresh_data', 550)": {"mod": [553, 554, 563, 565, 566, 568, 569, 571, 572, 575, 576, 579, 580, 581, 584, 585, 588, 589, 590]}, "(None, 'update_webcam_source', 593)": {"mod": [596, 600, 601, 610, 612, 613, 614, 615, 617, 618, 621, 622, 623, 624]}, "(None, 'update_webcam_target', 629)": {"mod": [629, 632, 636, 637, 646, 648, 649, 650, 651, 653, 654, 657, 658, 659, 660]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["modules/ui.py"], "doc": ["README.md"], "test": [], "config": [], "asset": []}}, {"organization": "Textualize", "repo_name": "rich", "base_commit": 
"4be4f6bd24d2a35da0e50df943209ad24c068159", "iss_html_url": "https://github.com/Textualize/rich/issues/388", "iss_label": "enhancement\ndone", "title": "[REQUEST] minimal table width", "body": "hey @willmcgugan, great package!\r\n\r\nin `Table.add_column` method it would be nice to have an _actual_ minimal with option. [The documentation](https://github.com/willmcgugan/rich/blob/c98bf070e4f3785dbb050b72c09663021c5b1d73/rich/table.py#L303) says that `width` argument sets the minimal with for a column, but in fact in my tests it sets a constant width for one. I'd like my column to be at least `width` wide but expand if there is a longer string to display.", "code": null, "pr_html_url": "https://github.com/Textualize/rich/pull/391", "commit_html_url": null, "file_loc": {"base_commit": "4be4f6bd24d2a35da0e50df943209ad24c068159", "files": [{"path": "CHANGELOG.md", "status": "modified", "Loc": {"(None, None, 26)": {"add": [26]}}}, {"path": "rich/console.py", "status": "modified", "Loc": {"('Console', 'rule', 991)": {"mod": [993]}}}, {"path": "rich/measure.py", "status": "modified", "Loc": {"('Measurement', None, 11)": {"add": [45]}, "('Measurement', 'with_maximum', 34)": {"mod": [41]}}}, {"path": "rich/table.py", "status": "modified", "Loc": {"('Column', None, 29)": {"add": [50, 54]}, "('Table', '__init__', 118)": {"add": [123, 157]}, "('Table', 'add_column', 278)": {"add": [288, 303, 321]}, "('Table', '_calculate_column_widths', 410)": {"add": [417], "mod": [458, 459]}, "('Table', '_measure_column', 558)": {"add": [587]}, "('Table', '__rich_measure__', 241)": {"mod": [249, 254, 255, 265]}, "(None, None, None)": {"mod": [740, 741, 743, 745, 746, 747, 748, 749, 750, 751, 753, 754, 755, 756, 757, 758, 759, 761]}}}, {"path": "tests/test_measure.py", "status": "modified", "Loc": {"(None, 'test_measure_renderables', 29)": {"add": [33]}}}, {"path": "tests/test_table.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [3, 125]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["rich/measure.py", "rich/console.py", "rich/table.py"], "doc": ["CHANGELOG.md"], "test": ["tests/test_table.py", "tests/test_measure.py"], "config": [], "asset": []}}, {"organization": "ytdl-org", "repo_name": "youtube-dl", "base_commit": "9fa57892790ce205634f6a7c83de2b9e52ab5284", "iss_html_url": "https://github.com/ytdl-org/youtube-dl/issues/8799", "iss_label": "site-support-request\naccount-needed", "title": "Request support site: Viceland", "body": "Viceland is a new channel from Vice. Website at https://www.viceland.com. 
Appears to use Uplynk, and may be encrypted, so it may not be possible.\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/ytdl-org/youtube-dl/commit/9fa57892790ce205634f6a7c83de2b9e52ab5284", "file_loc": {"base_commit": "9fa57892790ce205634f6a7c83de2b9e52ab5284", "files": [{"path": "youtube_dl/extractor/uplynk.py", "status": "modified", "Loc": {"('UplynkIE', None, 13)": {"add": [51], "mod": [29, 30]}, "('UplynkIE', '_real_extract', 52)": {"mod": [53]}, "('UplynkPreplayIE', '_real_extract', 59)": {"mod": [64]}}}, {"path": "youtube_dl/extractor/viceland.py", "status": "modified", "Loc": {"('VicelandIE', None, 20)": {"add": [27]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["youtube_dl/extractor/uplynk.py", "youtube_dl/extractor/viceland.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ytdl-org", "repo_name": "youtube-dl", "base_commit": "5dbe81a1d35ae704b5ea208698a6bb785923d71a", "iss_html_url": "https://github.com/ytdl-org/youtube-dl/issues/8171", "iss_label": "", "title": "Vimeo ondemand download preview only ", "body": "I am trying to download a video from Vimeo On Demand but I only get the preview of it. \nCould someone help me please? \n\n youtube-dl -u .com -p https://vimeo.com/ondemand/thelastcolony/150274832 --verbo\n[debug] System config: []\n[debug] User config: []\n[debug] Command-line args: [u'-u', u'PRIVATE', u'-p', u'PRIVATE', u'https://vimeo.com/ondemand/thelastcolony/150274832', u'--verbo']\n[debug] Encodings: locale cp1252, fs mbcs, out cp437, pref cp1252\n[debug] youtube-dl version 2016.01.01\n[debug] Python version 2.7.10 - Windows-8-6.2.9200\n[debug] exe versions: none\n[debug] Proxy map: {}\n[vimeo] Logging in\n[vimeo] 150274832: Downloading webpage\n[vimeo] 150274832: Extracting information\n[vimeo] 150274832: Downloading webpage\n[vimeo] 150274832: Downloading JSON metadata\n[vimeo] 150274832: Downloading m3u8 information\n[debug] Invoking downloader on u'https://01-lvl3-gcs-pdl.vimeocdn.com/vimeo-prod-skyfire-std-us/01/54/6/150274832/459356950.mp4?expires=1452223995&token=00c3c5830ebe84f9310d4'\n[download] The Last Colony-150274832.mp4 has already been downloaded\n[download] 100% of 119.97MiB\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/ytdl-org/youtube-dl/commit/5dbe81a1d35ae704b5ea208698a6bb785923d71a", "file_loc": {"base_commit": "5dbe81a1d35ae704b5ea208698a6bb785923d71a", "files": [{"path": "youtube_dl/extractor/vimeo.py", "status": "modified", "Loc": {"('VimeoIE', '_real_extract', 264)": {"add": [356], "mod": [265, 267, 269, 345]}, "('VimeoIE', '_extract_vimeo_url', 214)": {"mod": [220]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["youtube_dl/extractor/vimeo.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ytdl-org", "repo_name": "youtube-dl", "base_commit": "365343131d752bece96d2279a3e0bcd7e9f0000f", "iss_html_url": "https://github.com/ytdl-org/youtube-dl/issues/17728", "iss_label": "", "title": "[PluralSight] Unable to download captions JSON: HTTP Error 404: Not Found", "body": "Last version I tested:\r\nyoutube-dl is up-to-date (2018.09.26)\r\n\r\nI'm trying to download a video from PluralSight. The video is OK, but the subtitles cannot be downloaded. 
The error is \r\nWARNING: Unable to download captions JSON: HTTP Error 404: Not Found\r\nMy command: \r\n\r\nyoutube-dl --username xxx --password xxxx --sleep-interval 35 --max-sleep-interval 120 --sub-lang en --sub-format srt --write-sub https://app.pluralsight.com/library/courses/design-database-structure-sql-server-2014-70-465/table-of-contents\r\n\r\nThanks.", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/ytdl-org/youtube-dl/commit/365343131d752bece96d2279a3e0bcd7e9f0000f", "file_loc": {"base_commit": "365343131d752bece96d2279a3e0bcd7e9f0000f", "files": [{"path": "youtube_dl/extractor/pluralsight.py", "status": "modified", "Loc": {"('PluralsightIE', None, 112)": {"mod": [213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224]}, "('PluralsightIE', '_real_extract', 271)": {"mod": [416]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "", "info_type": "Code"}, "loctype": {"code": ["youtube_dl/extractor/pluralsight.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ytdl-org", "repo_name": "youtube-dl", "base_commit": "50f84a9ae171233c08ada41e03f6555c5ed95236", "iss_html_url": "https://github.com/ytdl-org/youtube-dl/issues/7427", "iss_label": "", "title": "\"ERROR: Signature extraction failed\" for youtube video", "body": "Hi,\n\nI have encountered an error with a video:\nhttps://www.youtube.com/watch?v=LDvVYqUMuJ0\n\n```\n$ /tmp/ydl/youtube-dl/youtube-dl https://www.youtube.com/watch?v=LDvVYqUMuJ0 --verbose\n[debug] System config: []\n[debug] User config: []\n[debug] Command-line args: [u'https://www.youtube.com/watch?v=LDvVYqUMuJ0', u'--verbose']\n[debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8\n[debug] youtube-dl version 2015.11.02\n[debug] Python version 2.7.10+ - Linux-4.2.0-1-amd64-x86_64-with-debian-stretch-sid\n[debug] exe versions: ffmpeg 1.0.6, ffprobe 1.0.6, rtmpdump 2.4\n[debug] Proxy map: {}\n[youtube] LDvVYqUMuJ0: Downloading webpage\n[youtube] LDvVYqUMuJ0: Downloading video info webpage\n[youtube] LDvVYqUMuJ0: Extracting video information\nWARNING: unable to extract html5 player; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.\n[youtube] {22} signature length 40.44, html5 player None\nERROR: Signature extraction failed: Traceback (most recent call last):\n File \"/tmp/ydl/youtube-dl/youtube-dl/youtube_dl/extractor/youtube.py\", line 817, in _decrypt_signature\n video_id, player_url, s\n File \"/tmp/ydl/youtube-dl/youtube-dl/youtube_dl/extractor/youtube.py\", line 709, in _extract_signature_function\n raise ExtractorError('Cannot identify player %r' % player_url)\nExtractorError: Cannot identify player u'https://s.ytimg.com/yts/jsbin/player-en_US-vfljDEtYP/base.js'; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.\n (caused by ExtractorError(u\"Cannot identify player u'https://s.ytimg.com/yts/jsbin/player-en_US-vfljDEtYP/base.js'; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. 
Be sure to call youtube-dl with the --verbose flag and include its complete output.\",)); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.\nTraceback (most recent call last):\n File \"/tmp/ydl/youtube-dl/youtube-dl/youtube_dl/extractor/youtube.py\", line 817, in _decrypt_signature\n video_id, player_url, s\n File \"/tmp/ydl/youtube-dl/youtube-dl/youtube_dl/extractor/youtube.py\", line 709, in _extract_signature_function\n raise ExtractorError('Cannot identify player %r' % player_url)\nExtractorError: Cannot identify player u'https://s.ytimg.com/yts/jsbin/player-en_US-vfljDEtYP/base.js'; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.\nTraceback (most recent call last):\n File \"/tmp/ydl/youtube-dl/youtube-dl/youtube_dl/YoutubeDL.py\", line 661, in extract_info\n ie_result = ie.extract(url)\n File \"/tmp/ydl/youtube-dl/youtube-dl/youtube_dl/extractor/common.py\", line 290, in extract\n return self._real_extract(url)\n File \"/tmp/ydl/youtube-dl/youtube-dl/youtube_dl/extractor/youtube.py\", line 1345, in _real_extract\n encrypted_sig, video_id, player_url, age_gate)\n File \"/tmp/ydl/youtube-dl/youtube-dl/youtube_dl/extractor/youtube.py\", line 827, in _decrypt_signature\n 'Signature extraction failed: ' + tb, cause=e)\nExtractorError: Signature extraction failed: Traceback (most recent call last):\n File \"/tmp/ydl/youtube-dl/youtube-dl/youtube_dl/extractor/youtube.py\", line 817, in _decrypt_signature\n video_id, player_url, s\n File \"/tmp/ydl/youtube-dl/youtube-dl/youtube_dl/extractor/youtube.py\", line 709, in _extract_signature_function\n raise ExtractorError('Cannot identify player %r' % player_url)\nExtractorError: Cannot identify player u'https://s.ytimg.com/yts/jsbin/player-en_US-vfljDEtYP/base.js'; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.\n (caused by ExtractorError(u\"Cannot identify player u'https://s.ytimg.com/yts/jsbin/player-en_US-vfljDEtYP/base.js'; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.\",)); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. 
Be sure to call youtube-dl with the --verbose flag and include its complete output.\n```\n\nThanks,\nCorey\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/ytdl-org/youtube-dl/commit/50f84a9ae171233c08ada41e03f6555c5ed95236", "file_loc": {"base_commit": "50f84a9ae171233c08ada41e03f6555c5ed95236", "files": [{"path": "youtube_dl/extractor/youtube.py", "status": "modified", "Loc": {"('YoutubeIE', '_extract_signature_function', 704)": {"mod": [706]}, "('YoutubeIE', '_real_extract', 1008)": {"mod": [1346]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["youtube_dl/extractor/youtube.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ytdl-org", "repo_name": "youtube-dl", "base_commit": "ea1f5e5dbd6c58d4f0872a65b97611732f4b29bd", "iss_html_url": "https://github.com/ytdl-org/youtube-dl/issues/16139", "iss_label": "fixed", "title": "ITV BTCC videos support?", "body": "## Please follow the guide below\r\n\r\n- You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly\r\n- Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`)\r\n- Use the *Preview* tab to see what your issue will actually look like\r\n\r\n---\r\n\r\n### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.04.09*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.\r\n- [*] I've **verified** and **I assure** that I'm running youtube-dl **2018.04.09**\r\n\r\n### Before submitting an *issue* make sure you have:\r\n- [*] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections\r\n- [*] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones\r\n- [*] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser\r\n\r\n### What is the purpose of your *issue*?\r\n- [ ] Bug report (encountered problems with youtube-dl)\r\n- [*] Site support request (request for adding support for a new site)\r\n- [ ] Feature request (request for a new functionality)\r\n- [ ] Question\r\n- [ ] Other\r\n\r\n---\r\n\r\n### The following sections concretize particular purposed issues, you can erase any section (the contents between triple ---) not applicable to your *issue*\r\n\r\n---\r\n\r\n### If the purpose of this *issue* is a *bug report*, *site support request* or you are not completely sure provide the full verbose output as follows:\r\n\r\nAdd the `-v` flag to **your command line** you run youtube-dl with (`youtube-dl -v `), copy the **whole** output and insert it here. 
It should look similar to one below (replace it with **your** log inserted between triple ```):\r\n\r\n```\r\n$ youtube-dl -v https://pastebin.com/raw/KxD6rhpF --geo-bypass-country UK\r\n[debug] System config: []\r\n[debug] User config: []\r\n[debug] Custom config: []\r\n[debug] Command-line args: [u'-v', u'https://pastebin.com/raw/KxD6rhpF', u'--geo-bypass-country', u'UK']\r\n[debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8\r\n[debug] youtube-dl version 2018.04.09\r\n[debug] Python version 2.7.13 (CPython) - Linux-4.9.62-v7+-armv7l-with-debian-9.4\r\n[debug] exe versions: ffmpeg 3.2.10-1, ffprobe 3.2.10-1\r\n[debug] Proxy map: {}\r\n[debug] Using fake IP None (UK) as X-Forwarded-For.\r\n[generic] KxD6rhpF: Requesting header\r\nWARNING: Falling back on generic information extractor.\r\n[generic] KxD6rhpF: Downloading webpage\r\n[generic] KxD6rhpF: Extracting information\r\n[download] Downloading playlist: Brightcove video tester\r\n[generic] playlist Brightcove video tester: Collected 1 video ids (downloading 1 of them)\r\n[download] Downloading video 1 of 1\r\n[debug] Using fake IP None (UK) as X-Forwarded-For.\r\n[debug] Using fake IP None (UK) as X-Forwarded-For.\r\n[brightcove:new] 5766870719001: Downloading webpage\r\n[brightcove:new] 5766870719001: Downloading JSON metadata\r\nERROR: Access to this resource is forbidden by access policy.\r\nYou might want to use a VPN or a proxy server (with --proxy) to workaround.\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python2.7/dist-packages/youtube_dl/extractor/brightcove.py\", line 706, in _real_extract\r\n json_data = self._download_json(api_url, video_id, headers=headers)\r\n File \"/usr/local/lib/python2.7/dist-packages/youtube_dl/extractor/common.py\", line 692, in _download_json\r\n encoding=encoding, data=data, headers=headers, query=query)\r\n File \"/usr/local/lib/python2.7/dist-packages/youtube_dl/extractor/common.py\", line 634, in _download_webpage\r\n res = self._download_webpage_handle(url_or_request, video_id, note, errnote, fatal, encoding=encoding, data=data, headers=headers, query=query)\r\n File \"/usr/local/lib/python2.7/dist-packages/youtube_dl/extractor/adobepass.py\", line 1332, in _download_webpage_handle\r\n *args, **compat_kwargs(kwargs))\r\n File \"/usr/local/lib/python2.7/dist-packages/youtube_dl/extractor/common.py\", line 539, in _download_webpage_handle\r\n urlh = self._request_webpage(url_or_request, video_id, note, errnote, fatal, data=data, headers=headers, query=query)\r\n File \"/usr/local/lib/python2.7/dist-packages/youtube_dl/extractor/common.py\", line 528, in _request_webpage\r\n raise ExtractorError(errmsg, sys.exc_info()[2], cause=err)\r\nExtractorError: Unable to download JSON metadata: HTTP Error 403: Forbidden (caused by HTTPError()); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; see https://yt-dl.org/update on how to update. 
Be sure to call youtube-dl with the --verbose flag and include its complete output.\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python2.7/dist-packages/youtube_dl/YoutubeDL.py\", line 789, in extract_info\r\n ie_result = ie.extract(url)\r\n File \"/usr/local/lib/python2.7/dist-packages/youtube_dl/extractor/common.py\", line 440, in extract\r\n ie_result = self._real_extract(url)\r\n File \"/usr/local/lib/python2.7/dist-packages/youtube_dl/extractor/brightcove.py\", line 712, in _real_extract\r\n self.raise_geo_restricted(msg=message)\r\n File \"/usr/local/lib/python2.7/dist-packages/youtube_dl/extractor/common.py\", line 743, in raise_geo_restricted\r\n raise GeoRestrictedError(msg, countries=countries)\r\nGeoRestrictedError: Access to this resource is forbidden by access policy.\r\n```\r\n\r\n---\r\n\r\n### If the purpose of this *issue* is a *site support request* please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**):\r\n\r\nITV separated the BTCC race videos from the hub (which also seems to be having issues as per https://github.com/rg3/youtube-dl/issues/15925)\r\nLately the video are hosted at http://www.itv.com/btcc/races (ie for a particular weekend all videos are posted at individual pages like: http://www.itv.com/btcc/races/btcc-2018-all-the-action-from-brands-hatch)\r\n\r\nSkimming the source code of this sample weekend page, I extracted the vid params and built a test page:\r\n- Single video: https://pastebin.com/raw/KxD6rhpF\r\n\r\n**Question 1:** is the log error above pointing just to a geo restriction error or is there anything else involved that I missed? (ie: like writing some header to force the ITV scrapper to act instead of a generic one)\r\n\r\n```\r\n\r\n \r\n\r\n \r\n Brightcove video tester\r\n \r\n \r\n\r\n\r\n\r\n\t\r\n\r\n\t\r\n\r\n\r\n```\r\n\r\n**Question 2:** is there any way to generate a playlist of downloadable items based on pages like http://www.itv.com/btcc/races/btcc-2018-all-the-action-from-brands-hatch with youtube-dl?", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/ytdl-org/youtube-dl/commit/ea1f5e5dbd6c58d4f0872a65b97611732f4b29bd", "file_loc": {"base_commit": "ea1f5e5dbd6c58d4f0872a65b97611732f4b29bd", "files": [{"path": "youtube_dl/extractor/extractors.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [480]}}}, {"path": "youtube_dl/extractor/itv.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [9, 20]}, "('ITVIE', '_real_extract', 56)": {"add": [262]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["youtube_dl/extractor/itv.py", "youtube_dl/extractor/extractors.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "localstack", "repo_name": "localstack", "base_commit": "78f635ad3a8f819645f3991dfd244ff09f06a7f0", "iss_html_url": "https://github.com/localstack/localstack/issues/8833", "iss_label": "type: bug\naws:cloudformation\nstatus: backlog", "title": "bug: CDK Table build with replicationRegions failing on latest", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Current Behavior\n\nTrying to deploy a [CDK Table](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_dynamodb.Table.html) to a localstack environment results in the following:\r\n\r\n```\r\nstack | 0/3 | 
8:56:58 PM | CREATE_FAILED | AWS::CloudFormation::Stack | stack Waiter StackCreateComplete failed: Waiter encountered a terminal failure state: For expression \"Stacks[].StackStatus\" we matched expected path: \"CREATE_FAILED\" at least once\r\n```\r\n\r\n```\r\nlocalstack | 2023-08-06T03:56:28.709 DEBUG --- [ asgi_gw_4] localstack.packages.api : Installation of dynamodb-local skipped (already installed).\r\nlocalstack | 2023-08-06T03:56:28.709 DEBUG --- [ asgi_gw_4] l.services.dynamodb.server : Starting DynamoDB Local: ['java', '-Xmx256m', '-javaagent:/usr/lib/localstack/dynamodb-local/latest/ddb-local-loader-0.1.jar', '-Djava.library.path=/usr/lib/localstack/dynamodb-local/latest/DynamoDBLocal_lib', '-jar', '/usr/lib/localstack/dynamodb-local/latest/DynamoDBLocal.jar', '-port', '35799', '-dbPath', '/var/lib/localstack/tmp/state/dynamodb']\r\nlocalstack | 2023-08-06T03:56:28.710 DEBUG --- [uncthread160] localstack.utils.run : Executing command: ['java', '-Xmx256m', '-javaagent:/usr/lib/localstack/dynamodb-local/latest/ddb-local-loader-0.1.jar', '-Djava.library.path=/usr/lib/localstack/dynamodb-local/latest/DynamoDBLocal_lib', '-jar', '/usr/lib/localstack/dynamodb-local/latest/DynamoDBLocal.jar', '-port', '35799', '-dbPath', '/var/lib/localstack/tmp/state/dynamodb']\r\nlocalstack | 2023-08-06T03:56:28.939 DEBUG --- [uncthread160] l.services.dynamodb.server : Initializing DynamoDB Local with the following configuration:\r\nlocalstack | 2023-08-06T03:56:28.939 DEBUG --- [uncthread160] l.services.dynamodb.server : Port:\t35799\r\nlocalstack | 2023-08-06T03:56:28.939 DEBUG --- [uncthread160] l.services.dynamodb.server : InMemory:\tfalse\r\nlocalstack | 2023-08-06T03:56:28.939 DEBUG --- [uncthread160] l.services.dynamodb.server : DbPath:\t/var/lib/localstack/tmp/state/dynamodb\r\nlocalstack | 2023-08-06T03:56:28.940 DEBUG --- [uncthread160] l.services.dynamodb.server : SharedDb:\tfalse\r\nlocalstack | 2023-08-06T03:56:28.940 DEBUG --- [uncthread160] l.services.dynamodb.server : shouldDelayTransientStatuses:\tfalse\r\nlocalstack | 2023-08-06T03:56:28.940 DEBUG --- [uncthread160] l.services.dynamodb.server : CorsParams:\tnull\r\nlocalstack | 2023-08-06T03:56:28.950 DEBUG --- [uncthread160] l.services.dynamodb.server :\r\nlocalstack | 2023-08-06T03:56:29.715 INFO --- [ asgi_gw_4] botocore.credentials : Found credentials in environment variables.\r\nlocalstack | 2023-08-06T03:56:30.532 DEBUG --- [ asgi_gw_4] l.services.plugins : checking service health dynamodb:4566\r\nlocalstack | 2023-08-06T03:56:30.534 INFO --- [ asgi_gw_4] localstack.utils.bootstrap : Execution of \"require\" took 1887.18ms\r\nlocalstack | 2023-08-06T03:56:30.879 DEBUG --- [ asgi_gw_1] l.services.plugins : checking service health kinesis:4566\r\nlocalstack | 2023-08-06T03:56:30.886 INFO --- [ asgi_gw_1] l.s.k.kinesis_mock_server : Creating kinesis backend for account 000000000000\r\nlocalstack | 2023-08-06T03:56:30.887 DEBUG --- [ asgi_gw_1] localstack.packages.api : Starting installation of kinesis-local...\r\nlocalstack | 2023-08-06T03:56:30.887 DEBUG --- [ asgi_gw_1] localstack.utils.run : Executing command: ['npm', 'install', '--prefix', '/var/lib/localstack/lib/kinesis-local/0.4.2', 'kinesis-local@0.4.2']\r\nlocalstack | 2023-08-06T03:56:32.575 DEBUG --- [ asgi_gw_1] localstack.packages.core : Setting ownership root:root on /var/lib/localstack/lib/kinesis-local/0.4.2\r\nlocalstack | 2023-08-06T03:56:32.575 DEBUG --- [ asgi_gw_1] localstack.packages.api : Installation of kinesis-local finished.\r\nlocalstack | 
2023-08-06T03:56:32.576 DEBUG --- [ asgi_gw_1] l.s.k.kinesis_mock_server : starting kinesis process ['node', PosixPath('/var/lib/localstack/lib/kinesis-local/0.4.2/node_modules/kinesis-local/main.js')] with env vars {'KINESIS_MOCK_CERT_PATH': '/var/lib/localstack/lib/kinesis-local/0.4.2/node_modules/kinesis-local/server.json', 'KINESIS_MOCK_PLAIN_PORT': 42209, 'KINESIS_MOCK_TLS_PORT': 34279, 'SHARD_LIMIT': '100', 'ON_DEMAND_STREAM_COUNT_LIMIT': '10', 'AWS_ACCOUNT_ID': '000000000000', 'CREATE_STREAM_DURATION': '500ms', 'DELETE_STREAM_DURATION': '500ms', 'REGISTER_STREAM_CONSUMER_DURATION': '500ms', 'START_STREAM_ENCRYPTION_DURATION': '500ms', 'STOP_STREAM_ENCRYPTION_DURATION': '500ms', 'DEREGISTER_STREAM_CONSUMER_DURATION': '500ms', 'MERGE_SHARDS_DURATION': '500ms', 'SPLIT_SHARD_DURATION': '500ms', 'UPDATE_SHARD_COUNT_DURATION': '500ms', 'UPDATE_STREAM_MODE_DURATION': '500ms', 'SHOULD_PERSIST_DATA': 'true', 'PERSIST_PATH': '../../../var/lib/localstack/tmp/state/kinesis', 'PERSIST_FILE_NAME': '000000000000.json', 'PERSIST_INTERVAL': '5s', 'LOG_LEVEL': 'INFO'}\r\nlocalstack | 2023-08-06T03:56:32.576 DEBUG --- [uncthread166] localstack.utils.run : Executing command: ['node', PosixPath('/var/lib/localstack/lib/kinesis-local/0.4.2/node_modules/kinesis-local/main.js')]\r\nlocalstack | 2023-08-06T03:56:32.834 INFO --- [uncthread166] l.s.k.kinesis_mock_server : [info] kinesis.mock.KinesisMockService$ 2023-08-06T03:56:32.823005Z contextId=6956dd23-c61e-4aa1-80ce-b8bfc8d0894b, cacheConfig={\"awsAccountId\":\"000000000000\",\"awsRegion\":\"us-east-1\",\"createStreamDuration\":{\"length\":500,\"unit\":\"MILLISECONDS\"},\"deleteStreamDuration\":{\"length\":500,\"unit\":\"MILLISECONDS\"},\"deregisterStreamConsumerDuration\":{\"length\":500,\"unit\":\"MILLISECONDS\"},\"initializeStreams\":null,\"logLevel\":\"INFO\",\"mergeShardsDuration\":{\"length\":500,\"unit\":\"MILLISECONDS\"},\"onDemandStreamCountLimit\":10,\"persistConfig\":{\"fileName\":\"000000000000.json\",\"interval\":{\"length\":5,\"unit\":\"SECONDS\"},\"loadIfExists\":true,\"path\":\"../../../var/lib/localstack/tmp/state/kinesis\",\"shouldPersist\":true},\"registerStreamConsumerDuration\":{\"length\":500,\"unit\":\"MILLISECONDS\"},\"shardLimit\":100,\"splitShardDuration\":{\"length\":500,\"unit\":\"MILLISECONDS\"},\"startStreamEncryptionDuration\":{\"length\":500,\"unit\":\"MILLISECONDS\"},\"stopStreamEncryptionDuration\":{\"length\":500,\"unit\":\"MILLISECONDS\"},\"updateShardCountDuration\":{\"length\":500,\"unit\":\"MILLISECONDS\"}} Logging Cache Config\r\nlocalstack | 2023-08-06T03:56:32.986 INFO --- [uncthread166] l.s.k.kinesis_mock_server : [info] kinesis.mock.KinesisMockService$ 2023-08-06T03:56:32.986197Z Starting Kinesis TLS Mock Service on port 34279\r\nlocalstack | 2023-08-06T03:56:32.987 INFO --- [uncthread166] l.s.k.kinesis_mock_server : [info] kinesis.mock.KinesisMockService$ 2023-08-06T03:56:32.986862Z Starting Kinesis Plain Mock Service on port 42209\r\nlocalstack | 2023-08-06T03:56:32.994 INFO --- [uncthread166] l.s.k.kinesis_mock_server : [info] kinesis.mock.KinesisMockService$ 2023-08-06T03:56:32.994001Z contextId=1d81ef53-1648-4fbc-8b16-f09375d77ece Starting persist data loop\r\nlocalstack | 2023-08-06T03:56:33.215 DEBUG --- [uncthread158] l.s.c.resource_provider : Executing callback method for AWS::DynamoDB::Table:ddbFooTabletable735E488F\r\nlocalstack | 2023-08-06T03:56:33.330 DEBUG --- [uncthread158] l.s.c.e.template_deployer : Error applying changes for CloudFormation stack \"stack-ddbFooTableNestedStackdd-f1d922ad\": 
'NoneType' object has no attribute 'get' Traceback (most recent call last):\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/engine/template_deployer.py\", line 965, in _run\r\nlocalstack | self.do_apply_changes_in_loop(changes, stack)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/engine/template_deployer.py\", line 1011, in do_apply_changes_in_loop\r\nlocalstack | should_deploy = self.prepare_should_deploy_change(\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/engine/template_deployer.py\", line 1105, in prepare_should_deploy_change\r\nlocalstack | resolve_refs_recursively(\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack_ext/services/cloudformation/cloudformation_extended.py.enc\", line 35, in resolve_refs_recursively\r\nlocalstack | A=resolve_refs_recursively_orig(*E,**F)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/utils/functions.py\", line 80, in func\r\nlocalstack | return wrapped(*args, **kwargs)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/engine/template_deployer.py\", line 178, in resolve_refs_recursively\r\nlocalstack | result = _resolve_refs_recursively(\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/utils/functions.py\", line 80, in func\r\nlocalstack | return wrapped(*args, **kwargs)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/engine/template_deployer.py\", line 478, in _resolve_refs_recursively\r\nlocalstack | value[key] = resolve_refs_recursively(\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack_ext/services/cloudformation/cloudformation_extended.py.enc\", line 35, in resolve_refs_recursively\r\nlocalstack | A=resolve_refs_recursively_orig(*E,**F)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/utils/functions.py\", line 80, in func\r\nlocalstack | return wrapped(*args, **kwargs)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/engine/template_deployer.py\", line 178, in resolve_refs_recursively\r\nlocalstack | result = _resolve_refs_recursively(\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/utils/functions.py\", line 80, in func\r\nlocalstack | return wrapped(*args, **kwargs)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/engine/template_deployer.py\", line 478, in _resolve_refs_recursively\r\nlocalstack | value[key] = resolve_refs_recursively(\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack_ext/services/cloudformation/cloudformation_extended.py.enc\", line 35, in resolve_refs_recursively\r\nlocalstack | A=resolve_refs_recursively_orig(*E,**F)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/utils/functions.py\", line 80, in func\r\nlocalstack | return wrapped(*args, **kwargs)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/engine/template_deployer.py\", line 178, in resolve_refs_recursively\r\nlocalstack | 
result = _resolve_refs_recursively(\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/utils/functions.py\", line 80, in func\r\nlocalstack | return wrapped(*args, **kwargs)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/engine/template_deployer.py\", line 497, in _resolve_refs_recursively\r\nlocalstack | value[i] = resolve_refs_recursively(\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack_ext/services/cloudformation/cloudformation_extended.py.enc\", line 35, in resolve_refs_recursively\r\nlocalstack | A=resolve_refs_recursively_orig(*E,**F)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/utils/functions.py\", line 80, in func\r\nlocalstack | return wrapped(*args, **kwargs)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/engine/template_deployer.py\", line 178, in resolve_refs_recursively\r\nlocalstack | result = _resolve_refs_recursively(\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/utils/functions.py\", line 80, in func\r\nlocalstack | return wrapped(*args, **kwargs)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/engine/template_deployer.py\", line 290, in _resolve_refs_recursively\r\nlocalstack | resolved_getatt = get_attr_from_model_instance(\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/engine/template_deployer.py\", line 102, in get_attr_from_model_instance\r\nlocalstack | attribute = attribute.get(part)\r\nlocalstack | AttributeError: 'NoneType' object has no attribute 'get'\r\nlocalstack |\r\nlocalstack |\r\nlocalstack | 2023-08-06T03:56:33.426 INFO --- [ asgi_gw_3] localstack.request.aws : AWS cloudformation.DescribeStackEvents => 200\r\nlocalstack | 2023-08-06T03:56:33.445 INFO --- [ asgi_gw_2] localstack.request.aws : AWS cloudformation.DescribeStacks => 200\r\nlocalstack | 2023-08-06T03:56:38.447 INFO --- [ asgi_gw_4] localstack.request.aws : AWS cloudformation.DescribeStackEvents => 200\r\nlocalstack | 2023-08-06T03:56:38.458 INFO --- [ asgi_gw_3] localstack.request.aws : AWS cloudformation.DescribeStacks => 200\r\nlocalstack | 2023-08-06T03:56:43.464 INFO --- [ asgi_gw_2] localstack.request.aws : AWS cloudformation.DescribeStackEvents => 200\r\nlocalstack | 2023-08-06T03:56:43.473 INFO --- [ asgi_gw_4] localstack.request.aws : AWS cloudformation.DescribeStacks => 200\r\nlocalstack | 2023-08-06T03:56:48.479 INFO --- [ asgi_gw_2] localstack.request.aws : AWS cloudformation.DescribeStackEvents => 200\r\nlocalstack | 2023-08-06T03:56:48.489 INFO --- [ asgi_gw_4] localstack.request.aws : AWS cloudformation.DescribeStacks => 200\r\nlocalstack | 2023-08-06T03:56:53.495 INFO --- [ asgi_gw_3] localstack.request.aws : AWS cloudformation.DescribeStackEvents => 200\r\nlocalstack | 2023-08-06T03:56:53.502 INFO --- [ asgi_gw_2] localstack.request.aws : AWS cloudformation.DescribeStacks => 200\r\nlocalstack | 2023-08-06T03:56:58.487 INFO --- [ asgi_gw_3] localstack.request.aws : AWS cloudformation.DescribeStackEvents => 200\r\nlocalstack | 2023-08-06T03:56:58.495 INFO --- [ asgi_gw_2] localstack.request.aws : AWS cloudformation.DescribeStacks => 200\r\nlocalstack | 2023-08-06T03:56:58.743 DEBUG --- [uncthread154] l.s.c.e.template_deployer : Error applying changes 
for CloudFormation stack \"stack\": Waiter StackCreateComplete failed: Waiter encountered a terminal failure state: For expression \"Stacks[].StackStatus\" we matched expected path: \"CREATE_FAILED\" at least once Traceback (most recent call last):\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/engine/template_deployer.py\", line 965, in _run\r\nlocalstack | self.do_apply_changes_in_loop(changes, stack)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/engine/template_deployer.py\", line 1039, in do_apply_changes_in_loop\r\nlocalstack | self.apply_change(change, stack=stack)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/engine/template_deployer.py\", line 1152, in apply_change\r\nlocalstack | progress_event = executor.deploy_loop(resource_provider_payload) # noqa\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/resource_provider.py\", line 572, in deploy_loop\r\nlocalstack | event = self.execute_action(resource_provider, payload)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/resource_provider.py\", line 638, in execute_action\r\nlocalstack | return resource_provider.create(request)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/resource_provider.py\", line 350, in create\r\nlocalstack | return self.create_or_delete(request)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/resource_provider.py\", line 499, in create_or_delete\r\nlocalstack | result_handler(\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/models/cloudformation.py\", line 55, in _handle_result\r\nlocalstack | connect_to().cloudformation.get_waiter(\"stack_create_complete\").wait(\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/botocore/waiter.py\", line 55, in wait\r\nlocalstack | Waiter.wait(self, **kwargs)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/botocore/waiter.py\", line 375, in wait\r\nlocalstack | raise WaiterError(\r\nlocalstack | botocore.exceptions.WaiterError: Waiter StackCreateComplete failed: Waiter encountered a terminal failure state: For expression \"Stacks[].StackStatus\" we matched expected path: \"CREATE_FAILED\" at least once\r\nlocalstack |\r\nlocalstack |\r\nlocalstack | 2023-08-06T03:57:03.504 INFO --- [ asgi_gw_3] localstack.request.aws : AWS cloudformation.DescribeStackEvents => 200\r\nlocalstack | 2023-08-06T03:57:03.511 INFO --- [ asgi_gw_2] localstack.request.aws : AWS cloudformation.DescribeStacks => 200\r\nlocalstack | 2023-08-06T03:57:03.520 INFO --- [ asgi_gw_4] localstack.request.aws : AWS cloudformation.DescribeStackEvents => 200\r\n```\r\n\r\nI am using the CDK Table class with a stream enabled, which requires replicas to enable the stream. 
There's no creative way around this issue that I'm aware of, since the stack fails before the Table class is fully deployed.\n\n### Expected Behavior\n\nI'm expecting the build to succeed locally, so I can continue with our stack deployment chain for our local environments.\r\n\r\nThe behavior is not present on `localstack/localstack-pro:2.2.0` or any release prior to the update this week. I am successfully able to deploy to live environments with the same CDK stack without error.\n\n### How are you starting LocalStack?\n\nWith a docker-compose file\n\n### Steps To Reproduce\n\n#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)\r\n\r\ndocker-compose up\r\n \r\n```\r\nversion: \"3.8\"\r\n\r\nnetworks:\r\n ls:\r\n name: ls\r\n\r\nservices:\r\n localstack:\r\n container_name: \"${LOCALSTACK_DOCKER_NAME-localstack}\"\r\n environment:\r\n - DEBUG=${DEBUG-1}\r\n - DISABLE_CORS_CHECKS=1\r\n - DISABLE_CUSTOM_CORS_APIGATEWAY=1\r\n - DOCKER_HOST=unix:///var/run/docker.sock\r\n - EXTRA_CORS_ALLOWED_ORIGINS=*\r\n - MAIN_DOCKER_NETWORK=ls\r\n - PERSISTENCE=${PERSISTENCE-}\r\n env_file:\r\n - ./localstack.local.env\r\n image: \"localstack/localstack-pro:${LOCALSTACK_VERSION-latest}\"\r\n networks:\r\n - ls\r\n ports:\r\n - \"127.0.0.1:4566:4566\" # LocalStack Gateway\r\n - \"127.0.0.1:4510-4559:4510-4559\" # external services port range\r\n - \"127.0.0.1:53:53\" # DNS config (required for Pro)\r\n - \"127.0.0.1:53:53/udp\" # DNS config (required for Pro)\r\n - \"127.0.0.1:443:443\" # LocalStack HTTPS Gateway (required for Pro)\r\n volumes:\r\n - \"/var/run/docker.sock:/var/run/docker.sock\"\r\n\r\n```\r\n\r\n.env file contains:\r\n\r\n```\r\nLOCALSTACK_API_KEY=xxxxxxxxxx\r\n```\r\n\r\n#### Client commands (e.g., AWS SDK code snippet, or sequence of \"awslocal\" commands)\r\n\r\nnpx cdklocal deploy api\r\n\r\n```\r\nconst account = process.env.CDK_ACCOUNT || \"000000000000\";\r\nconst region = \"us-west-2\";\r\n\r\nnew APIStack(app, \"api\", {\r\n crossRegionReferences: true,\r\n env: { account, region },\r\n stackName: \"api\",\r\n});\r\n```\r\n\r\n```\r\nnew DynamoStack(this, \"ddbFooTable\", {\r\n billingMode: BillingMode.PROVISIONED,\r\n deletionProtection: false,\r\n encryption: TableEncryption.AWS_MANAGED,\r\n partitionKey: { name: \"id\", type: AttributeType.STRING },\r\n pointInTimeRecovery: false,\r\n removalPolicy: false\r\n ? RemovalPolicy.RETAIN\r\n : RemovalPolicy.DESTROY,\r\n replicationRegions: [\"us-west-1\"],\r\n stream: StreamViewType.NEW_AND_OLD_IMAGES,\r\n tableName: \"foo\",\r\n});\r\n```\r\n\n\n### Environment\n\n```markdown\n- OS: OSX 13.4\r\n- LocalStack: latest (70e077bf43491cc0954698c1240159caa9cecc0ac6652b890b52aaf0801d5fcb)\r\n- aws-cdk-lib: 2.90.0\r\n- aws-cdk-local: 2.18.0\n```\n\n\n### Anything else?\n\nUnfortunately, I'm in a position where I need the latest update due to Cognito User Pool domains not being functional in prior releases. I'm trying to get our devs to authenticate locally with an OAuth2 IDP flow instead of forcing them to authenticate with a live-stable environment.
More information [here](https://github.com/localstack/localstack/issues/8700).", "code": null, "pr_html_url": "https://github.com/localstack/localstack/pull/8882", "commit_html_url": null, "file_loc": {"base_commit": "78f635ad3a8f819645f3991dfd244ff09f06a7f0", "files": [{"path": "localstack/services/cloudformation/engine/template_deployer.py", "status": "modified", "Loc": {"(None, 'get_attr_from_model_instance', 75)": {"add": [100, 101]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["localstack/services/cloudformation/engine/template_deployer.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "localstack", "repo_name": "localstack", "base_commit": "f4a188b6d51155a0831a3246f1d8e4f4be835861", "iss_html_url": "https://github.com/localstack/localstack/issues/4652", "iss_label": "type: bug\nstatus: triage needed", "title": "bug: LAMBDA_DOCKER_FLAGS doesn't work with -e", "body": "### Is there an existing issue for this?\r\n\r\n- [X] I have searched the existing issues\r\n\r\n### Current Behavior\r\n\r\nAttempting to add environment variables to containers created to service lambda requests no longer works in the latest version of localstack. This works in localStack version 0.12.16\r\n\r\n### Expected Behavior\r\n\r\nSetting the environment variable `LAMBDA_DOCKER_FLAGS=-e TEST_VAL=True` on localstack's docker container will result in spawned containers created for serving lambda functions having the environment variable TEST_VAL set to True.\r\n\r\n### How are you starting LocalStack?\r\n\r\nWith a docker-compose file\r\n\r\n### Steps To Reproduce\r\n\r\n1. Run `docker-compose up -d` with the following docker-compose.yml\r\n```yaml\r\nversion: '2.1'\r\n\r\nservices:\r\n localstack_ltest:\r\n container_name: \"ltest\"\r\n image: localstack/localstack:0.12.18\r\n ports:\r\n - \"4566:4566\"\r\n environment:\r\n - DOCKER_HOST=unix:///var/run/docker.sock\r\n - LOCALSTACK_API_KEY=\r\n - LAMBDA_EXECUTOR=docker-reuse\r\n - LAMBDA_DOCKER_FLAGS=-e TEST_VAL=True\r\n volumes:\r\n - \"/var/run/docker.sock:/var/run/docker.sock\"\r\n restart: always\r\n```\r\n2. Create the file logs-template.yml\r\n```yaml\r\n---\r\nAWSTemplateFormatVersion: '2010-09-09'\r\nResources: \r\n LambdaFunctionLogGroup:\r\n Type: AWS::Logs::LogGroup\r\n Properties: \r\n RetentionInDays: 60\r\n LogGroupName: !Join [\"\", [\"/aws/lambda/\", !Ref LambdaFunction]]\r\n LambdaFunctionRole:\r\n Type: AWS::IAM::Role\r\n Properties:\r\n AssumeRolePolicyDocument:\r\n Version: '2012-10-17'\r\n Statement:\r\n - Effect: Allow\r\n Principal:\r\n Service:\r\n - lambda.amazonaws.com\r\n Action:\r\n - sts:AssumeRole\r\n Path: /\r\n Policies:\r\n - PolicyName: LambdaRolePolicy\r\n PolicyDocument:\r\n Statement:\r\n - Effect: Allow\r\n Action:\r\n - 'logs:*'\r\n Resource: 'arn:aws:logs:*:*:*'\r\n LambdaFunction:\r\n Type: AWS::Lambda::Function\r\n Properties:\r\n FunctionName: \"test-function\"\r\n Role: !GetAtt LambdaFunctionRole.Arn\r\n Handler: index.lambda_handler\r\n Runtime: python3.8\r\n Code:\r\n ZipFile: |\r\n import os\r\n def lambda_handler(event, context):\r\n print(\"environ: \" + str(os.environ))\r\n\r\n\r\n```\r\n3. Run ` aws cloudformation deploy --stack-name test --template-file .\\logs-template.yml --endpoint-url http://127.0.0.1:4566 --region us-east-1`\r\n4. 
Run `aws --endpoint-url http://127.0.0.1:4566 --region us-east-1 lambda invoke --function-name test-function out.txt`\r\n5. Run `aws --endpoint-url=http://localhost:4566 --region us-east-1 logs tail /aws/lambda/test-function`\r\n6. Check for `\"TEST_VAL\": \"True\",` being in the output of the above command.\r\n\r\n### Environment\r\n\r\n```markdown\r\nCurrent configuration (broken)\r\n- OS: Ubuntu 20.04\r\n- LocalStack version: 0.12.18\r\n- LocalStack build date: 2021-09-27\r\n- LocalStack build git hash: 00797f9e\r\n\r\nWorking configuration\r\n- OS: Ubuntu 20.04\r\n- LocalStack version: 0.12.16\r\n- LocalStack Docker container id: b0137bad2045\r\n- LocalStack build date: 2021-07-31\r\n- LocalStack build git hash: f1262f74\r\n```\r\n\r\n\r\n### Anything else?\r\n\r\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/localstack/localstack/commit/f4a188b6d51155a0831a3246f1d8e4f4be835861", "file_loc": {"base_commit": "f4a188b6d51155a0831a3246f1d8e4f4be835861", "files": [{"path": "localstack/services/awslambda/lambda_executors.py", "status": "modified", "Loc": {"('LambdaExecutorContainers', 'run_lambda_executor', 495)": {"mod": [543, 544]}}}, {"path": "tests/integration/test_lambda.py", "status": "modified", "Loc": {"('TestLambdaBaseFeatures', 'test_large_payloads', 477)": {"add": [492]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["localstack/services/awslambda/lambda_executors.py"], "doc": [], "test": ["tests/integration/test_lambda.py"], "config": [], "asset": []}}, {"organization": "localstack", "repo_name": "localstack", "base_commit": "dec1ba1b94153a4380cc94a0c8bd805f8922b6e3", "iss_html_url": "https://github.com/localstack/localstack/issues/3202", "iss_label": "status: triage needed\narea: configuration", "title": "Illegal path is passed into the HEAD request during the object download", "body": "# Type of request: This is a ...\n[ ] bug report\n[ ] feature request\n\n# Detailed description\nI have tried to upgrade localstack to 0.11.5 (and later) and to use the service port 4566.\nLocally, all the tests we have been running pass.\nBut on CircleCI, I got errors when downloading an object from s3.\n\n```\nAn error occurred (404) when calling the HeadObject operation: Not Found\n```\n\nIt works up through 0.11.4, so I'm sure it's a bug, but what do you think?\nWould you have any advice?\n\n## Expected behavior\nMy test does the following:\n\n1. upload an object to the s3 bucket named \"test-bucket\"\n1. list v2 for the bucket\n1. 
download it\n\nSo I expected this to succeed.\nOf course, this test passed until the localstack upgrade.\n\n## Actual behavior\nOn CircleCI, I got the output below.\nSomehow \"/test-bucket/test-bucket\" is being passed as a HEAD request parameter at download time.\nThis path duplicates the bucket name \"/test-bucket\".\n\n```\n:\n2020-10-30 05:47:54,523:API: 172.20.0.5 - - [30/Oct/2020 05:47:54] \"PUT /test-bucket HTTP/1.1\" 200 -\n2020-10-30 05:47:54,536:API: 172.20.0.5 - - [30/Oct/2020 05:47:54] \"PUT /test-bucket/loadable/2020/04/06/000000_2_e462a109-916a-4b8f-b393-f5b01a6a8c10 HTTP/1.1\" 200 -\n2020-10-30 05:47:54,554:API: 172.20.0.5 - - [30/Oct/2020 05:47:54] \"GET /test-bucket?list-type=2&max-keys=200&prefix=loadable%2F HTTP/1.1\" 200 -\n2020-10-30 05:47:54,566:API: 172.20.0.5 - - [30/Oct/2020 05:47:54] \"GET /test-bucket/loadable/2020/04/06/000000_2_e462a109-916a-4b8f-b393-f5b01a6a8c10 HTTP/1.1\" 206 -\n2020-10-30 05:47:54,578:API: 172.20.0.5 - - [30/Oct/2020 05:47:54] \"HEAD /test-bucket/test-bucket HTTP/1.1\" 404 -\n2020-10-30T05:47:54:WARNING:bootstrap.py: Thread run method ._run at 0x7f333a234f70>(None) failed: An error occurred (404) when calling the HeadObject operation: Not Found Traceback (most recent call last):\n File \"/opt/code/localstack/localstack/utils/bootstrap.py\", line 534, in run\n result = self.func(self.params)\n File \"/opt/code/localstack/localstack/utils/async_utils.py\", line 28, in _run\n return fn(*args, **kwargs)\n File \"/opt/code/localstack/localstack/services/generic_proxy.py\", line 560, in handler\n response = modify_and_forward(method=method, path=path_with_params, data_bytes=data, headers=headers,\n File \"/opt/code/localstack/localstack/services/generic_proxy.py\", line 333, in modify_and_forward\n listener_result = listener.forward_request(method=method,\n File \"/opt/code/localstack/localstack/services/edge.py\", line 81, in forward_request\n return do_forward_request(api, port, method, path, data, headers)\n File \"/opt/code/localstack/localstack/services/edge.py\", line 86, in do_forward_request\n result = do_forward_request_inmem(api, port, method, path, data, headers)\n File \"/opt/code/localstack/localstack/services/edge.py\", line 106, in do_forward_request_inmem\n response = modify_and_forward(method=method, path=path, data_bytes=data, headers=headers,\n File \"/opt/code/localstack/localstack/services/generic_proxy.py\", line 401, in modify_and_forward\n updated_response = update_listener.return_response(**kwargs)\n File \"/opt/code/localstack/localstack/services/s3/s3_listener.py\", line 1254, in return_response\n fix_range_content_type(bucket_name, path, headers, response)\n File \"/opt/code/localstack/localstack/services/s3/s3_listener.py\", line 465, in fix_range_content_type\n result = s3_client.head_object(Bucket=bucket_name, Key=key_name)\n File \"/opt/code/localstack/.venv/lib/python3.8/site-packages/botocore/client.py\", line 357, in _api_call\n return self._make_api_call(operation_name, kwargs)\n File \"/opt/code/localstack/.venv/lib/python3.8/site-packages/botocore/client.py\", line 676, in _make_api_call\n raise error_class(parsed_response, operation_name)\nbotocore.exceptions.ClientError: An error occurred (404) when calling the HeadObject operation: Not Found\n```\n\n# Steps to reproduce\n## Command used to start LocalStack\nsorry...\n\n## Client code (AWS SDK code snippet, or sequence of \"awslocal\" commands)\nsorry...\n\n\n\n\u2506Issue is synchronized with this [Jira Task](https://localstack.atlassian.net/browse/LOC-71) by 
[Unito](https://www.unito.io/learn-more)\n", "code": null, "pr_html_url": "https://github.com/localstack/localstack/pull/3370", "commit_html_url": null, "file_loc": {"base_commit": "dec1ba1b94153a4380cc94a0c8bd805f8922b6e3", "files": [{"path": "localstack/services/s3/s3_listener.py", "status": "modified", "Loc": {"(None, 'uses_path_addressing', 891)": {"mod": [892]}}}, {"path": "tests/integration/test_s3.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [36]}, "('S3ListenerTest', None, 59)": {"add": [1454]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["localstack/services/s3/s3_listener.py"], "doc": [], "test": ["tests/integration/test_s3.py"], "config": [], "asset": []}}, {"organization": "OpenInterpreter", "repo_name": "open-interpreter", "base_commit": "7a53ba3dad9f7b2e31dac3fbb3162838eb9441c6", "iss_html_url": "https://github.com/OpenInterpreter/open-interpreter/issues/58", "iss_label": "", "title": "Please support Azure OpenAI", "body": null, "code": null, "pr_html_url": "https://github.com/openinterpreter/open-interpreter/pull/62", "commit_html_url": null, "file_loc": {"base_commit": "7a53ba3dad9f7b2e31dac3fbb3162838eb9441c6", "files": [{"path": "interpreter/cli.py", "status": "modified", "Loc": {"(None, 'cli', 4)": {"add": [28, 39]}}}, {"path": "interpreter/interpreter.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [52]}, "('Interpreter', '__init__', 62)": {"add": [69]}, "('Interpreter', 'verify_api_key', 255)": {"add": [263], "mod": [260, 262, 271, 272, 275, 276, 278, 279, 280, 281, 284, 285, 286, 287, 289]}, "('Interpreter', 'respond', 296)": {"add": [340], "mod": [312, 313, 314, 315, 316, 317, 318]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["interpreter/cli.py", "interpreter/interpreter.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "abi", "repo_name": "screenshot-to-code", "base_commit": "e69269d844b7089dec636516d6edb4f70911ebf6", "iss_html_url": "https://github.com/abi/screenshot-to-code/issues/54", "iss_label": "", "title": "Support OPENAI_ API_ BASE for proxy URLs", "body": "How to add OPENAI_ API_ BASE code to use other proxy keys\uff1f", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/abi/screenshot-to-code/commit/e69269d844b7089dec636516d6edb4f70911ebf6", "file_loc": {"base_commit": "e69269d844b7089dec636516d6edb4f70911ebf6", "files": [{"path": "backend/image_generation.py", "status": "modified", "Loc": {"(None, 'process_tasks', 8)": {"mod": [8, 9]}, "(None, 'generate_image', 23)": {"mod": [23, 24]}, "(None, 'generate_images', 63)": {"mod": [63, 90]}}}, {"path": "backend/llm.py", "status": "modified", "Loc": {"(None, 'stream_openai_response', 8)": {"mod": [9, 11]}}}, {"path": "backend/main.py", "status": "modified", "Loc": {"(None, 'stream_code_test', 62)": {"add": [75, 85, 119], "mod": [132]}}}, {"path": "frontend/src/App.tsx", "status": "modified", "Loc": {"(None, None, 39)": {"add": [39]}}}, {"path": "frontend/src/components/SettingsDialog.tsx", "status": "modified", "Loc": {"(None, None, 78)": {"add": [78]}}}, {"path": "frontend/src/types.ts", "status": "modified", "Loc": {"(None, None, None)": {"add": [2]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": 
{"iss_type": "3", "iss_reason": "5", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["frontend/src/App.tsx", "frontend/src/types.ts", "backend/main.py", "frontend/src/components/SettingsDialog.tsx", "backend/image_generation.py", "backend/llm.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "pytorch", "repo_name": "pytorch", "base_commit": "d363cf4639aacdaefbb8f69919f3c787a4519b7b", "iss_html_url": "https://github.com/pytorch/pytorch/issues/38479", "iss_label": "triaged\nmodule: numpy", "title": "torch.einsum does not pass equation argument to __torch_function__ API", "body": "## \ud83d\udc1b Bug\r\n\r\nwhen delegating torch.einsum call to an object which implements\r\n`__torch_function__` API the equation argument is not passed resulting in the error.\r\n```TypeError: einsum(): argument 'equation' (position 1) must be str, not Tensor```\r\n\r\nthis was tested on pytorch 1.5.0\r\n\r\nI've actually found the cause of this bug and have written a fix.\r\n\r\nthe following script illustrates the problem and the proposed solution\r\n\r\n## To Reproduce\r\n\r\n```python \r\nimport torch\r\n\r\nclass Wrapper():\r\n def __init__(self,data):\r\n self.data = data\r\n \r\n def __torch_function__(self, func, types, args=(), kwargs=None):\r\n if kwargs is None:\r\n kwargs = {}\r\n\r\n #unwrap inputs if necessary\r\n def unwrap(v):\r\n return v.data if isinstance(v,Wrapper) else v\r\n args = map(unwrap,args)\r\n kwargs = {k:unwrap(v) for k,v in kwargs.items()}\r\n\r\n return func(*args, **kwargs)\r\n\r\n\r\n\r\n# fixed einsum implementation\r\nfrom torch import Tensor,_VF\r\nfrom torch._overrides import has_torch_function,handle_torch_function\r\ndef fixed_einsum(equation,*operands):\r\n if not torch.jit.is_scripting():\r\n if any(type(t) is not Tensor for t in operands) and has_torch_function(operands):\r\n # equation is not passed\r\n # return handle_torch_function(einsum, operands, *operands)\r\n return handle_torch_function(fixed_einsum, operands, equation,*operands)\r\n if len(operands) == 1 and isinstance(operands[0], (list, tuple)):\r\n # the old interface of passing the operands as one list argument\r\n operands = operands[0]\r\n\r\n # recurse incase operands contains value that has torch function\r\n #in the original implementation this line is omitted\r\n return fixed_einsum(equation,*operands)\r\n\r\n return _VF.einsum(equation, operands)\r\n\r\n\r\nif __name__ == \"__main__\":\r\n print(torch.__version__)\r\n # uncomment to use fixed einsum\r\n # torch.einsum = fixed_einsum\r\n\r\n #operands are wrapped\r\n x = Wrapper(torch.randn(5))\r\n y = Wrapper(torch.randn(4))\r\n assert torch.allclose(torch.einsum('i,j->ij',x, y),torch.ger(x,y)) # outer product\r\n print(\"works with wrapped inputs\") \r\n\r\n #old interface operands is a list\r\n a = Wrapper(torch.randn(2,3))\r\n b = Wrapper(torch.randn(5,3,7))\r\n c = Wrapper(torch.randn(2,7))\r\n assert torch.allclose(torch.einsum('ik,jkl,il->ij', [a, b, c]),torch.nn.functional.bilinear(a,c,b)) # bilinear interpolation\r\n print(\"works with old API operands is list\")\r\n \r\n #equation is wrapped\r\n As = Wrapper(torch.randn(3,2,5))\r\n Bs = Wrapper(torch.randn(3,5,4))\r\n equation = Wrapper('bij,bjk->bik')\r\n assert torch.allclose(torch.einsum(equation, As, Bs),torch.matmul(As,Bs)) # batch matrix multiplication\r\n print(\"works with equation wrapped\")\r\n\r\n #see that it also works with plain tensors\r\n x = torch.randn(5)\r\n y = torch.randn(4)\r\n assert 
torch.allclose(torch.einsum('i,j->ij',x, y),torch.ger(x,y)) \r\n print(\"works with no wrapped values\")\r\n\r\n\r\n\r\n```\r\n\r\n\r\ncc @albanD @mruberry", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/pytorch/pytorch/commit/d363cf4639aacdaefbb8f69919f3c787a4519b7b", "file_loc": {"base_commit": "d363cf4639aacdaefbb8f69919f3c787a4519b7b", "files": [{"path": "test/test_overrides.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [490]}}}, {"path": "torch/_overrides.py", "status": "modified", "Loc": {"(None, 'get_testing_overrides', 144)": {"mod": [264]}}}, {"path": "torch/functional.py", "status": "modified", "Loc": {"(None, 'einsum', 222)": {"add": [300], "mod": [297]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["torch/functional.py", "torch/_overrides.py"], "doc": [], "test": ["test/test_overrides.py"], "config": [], "asset": []}}, {"organization": "pytorch", "repo_name": "pytorch", "base_commit": "81a4aeabdf9d550ceda52a5060f19568de61b265", "iss_html_url": "https://github.com/pytorch/pytorch/issues/93667", "iss_label": "triaged\ntracker\noncall: pt2\nmodule: dynamo", "title": "14k github models on PyTorch 2.0 pass rates dashboard ", "body": "We are weekly running dynamo-eager, dynamo-eager-fullgraph, export and inductor on 14k ```nn.Modules``` crawled from 1.4k github projects to get coverage level, find and fix bugs. This dashboard page is used to track the pass rates of different backends. \r\n\r\nHow to repro:\r\n* Checkout https://github.com/jansel/pytorch-jit-paritybench\r\n* Batch evaluation with different backends:\r\n * dynamo-eager: ```python main.py --backend eager```\r\n * dynamo-eager-fullgraph: ```python main.py --backend eager --fullgraph```\r\n * export: ```python main.py --compile_mode export```\r\n * inductor: ```python main.py```\r\n* Adhoc evaluation:\r\n * ```pytest ./generated/{filename}.py -k test_{n}``` (e.g, ```pytest ./generated/test_KunpengLi1994_VSRN.py -k test_002```)\r\n * ```python -e ./generated/{filename}.py --backend eager```\r\n\r\nBugs umbrella task(#92670)\r\n\r\ncc @ezyang @msaroufim @bdhirsh @anijain2305 @zou3519 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @soumith @wconstab @ngimel @Xia-Weiwen @desertfire @davidberard98", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/pytorch/pytorch/commit/81a4aeabdf9d550ceda52a5060f19568de61b265", "file_loc": {"base_commit": "81a4aeabdf9d550ceda52a5060f19568de61b265", "files": [{"path": "test/dynamo/test_misc.py", "status": "modified", "Loc": {"('MiscTests', None, 40)": {"add": [2965]}, "('MiscTests', 'fn', 421)": {"mod": [422]}, "('MiscTests', 'test_numel', 420)": {"mod": [425]}}}, {"path": "torch/_dynamo/variables/tensor.py", "status": "modified", "Loc": {"('TensorVariable', 'call_method', 178)": {"mod": [206]}}}, {"path": "torch/_dynamo/variables/torch.py", "status": "modified", "Loc": {"('TorchVariable', 'can_constant_fold_through', 159)": {"add": [165]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["torch/_dynamo/variables/torch.py", "torch/_dynamo/variables/tensor.py"], "doc": [], "test": ["test/dynamo/test_misc.py"], "config": [], "asset": 
[]}}, {"organization": "pytorch", "repo_name": "pytorch", "base_commit": "aac9e5288f7a9666884705e2b716c260cb5f9afc", "iss_html_url": "https://github.com/pytorch/pytorch/issues/67002", "iss_label": "module: windows\nmodule: multiprocessing\ntriaged\nskipped", "title": "DISABLED test_fs_sharing (__main__.TestMultiprocessing)", "body": "Flaky failures in the last week: https://fburl.com/scuba/opensource_ci_jobs/inmj698k. They only appear to be on windows\r\n\r\nPlatforms: rocm\r\n\r\ncc @peterjc123 @mszhanyi @skyline75489 @nbcsm @VitalyFedyunin", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/pytorch/pytorch/commit/aac9e5288f7a9666884705e2b716c260cb5f9afc", "file_loc": {"base_commit": "aac9e5288f7a9666884705e2b716c260cb5f9afc", "files": [{"path": "test/test_multiprocessing.py", "status": "modified", "Loc": {"('TestMultiprocessing', 'test_receive', 289)": {"add": [293]}, "(None, None, None)": {"mod": [19, 27]}, "('TestMultiprocessing', 'test_fill', 259)": {"mod": [269, 270]}, "('TestMultiprocessing', None, 251)": {"mod": [361]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": ["test/test_multiprocessing.py"], "config": [], "asset": []}}, {"organization": "pytorch", "repo_name": "pytorch", "base_commit": "e37a22128eca7ccac6e289659587a9e1bfe6d242", "iss_html_url": "https://github.com/pytorch/pytorch/issues/15052", "iss_label": "oncall: jit", "title": "Tracer doesn't work with join/wait", "body": "Reported error: `RuntimeError: output 0 of traced region did not have observable data dependence with trace inputs; this probably indicates your program cannot be understood by the tracer.`\r\n\r\nTo reproduce:\r\n```python\r\ndef test_async_script_trace(self):\r\n class Module(torch.jit.ScriptModule):\r\n def __init__(self):\r\n super(Module, self).__init__(False)\r\n\r\n @torch.jit.script_method\r\n def forward(self, x):\r\n future = torch.jit._fork(torch.neg, x)\r\n outputs = []\r\n outputs.append(torch.jit._wait(future))\r\n\r\n return outputs\r\n\r\n class Tuple(nn.Module):\r\n def __init__(self):\r\n super(Tuple, self).__init__()\r\n self.module = Module()\r\n\r\n def forward(self, x):\r\n return tuple(self.module(x))\r\n\r\n x = torch.rand(3, 4)\r\n module = torch.jit.trace(Tuple(), (x), _force_outplace=True)\r\n self.assertEqual(module(x), torch.neg(x))", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/pytorch/pytorch/commit/e37a22128eca7ccac6e289659587a9e1bfe6d242", "file_loc": {"base_commit": "e37a22128eca7ccac6e289659587a9e1bfe6d242", "files": [{"path": "aten/src/ATen/core/jit_type.h", "status": "modified", "Loc": {"(None, 'FutureType', 516)": {"add": [534]}}}, {"path": "test/test_jit.py", "status": "modified", "Loc": {"('TestAsync', None, 11055)": {"add": [11224]}}}, {"path": "torch/csrc/jit/graph_executor.cpp", "status": "modified", "Loc": {"(None, None, 505)": {"mod": [516, 530, 531, 532, 533]}}}, {"path": "torch/csrc/jit/tracer.cpp", "status": "modified", "Loc": {"(None, None, None)": {"add": [39]}}}, {"path": "torch/csrc/jit/tracer.h", "status": "modified", "Loc": {"(None, 'function', 42)": {"add": [44]}, "(None, 'tracer', 24)": {"mod": [35, 36, 37, 38]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": 
["torch/csrc/jit/tracer.h", "torch/csrc/jit/tracer.cpp", "aten/src/ATen/core/jit_type.h", "torch/csrc/jit/graph_executor.cpp"], "doc": [], "test": ["test/test_jit.py"], "config": [], "asset": []}}, {"organization": "pytorch", "repo_name": "pytorch", "base_commit": "0c091380cc03b23e68dde7368f3b910c21deb010", "iss_html_url": "https://github.com/pytorch/pytorch/issues/21680", "iss_label": "high priority\nmodule: cudnn\nmodule: nn\ntriaged\nsmall", "title": "Disable nondeterministic CTCLoss from cuDNN", "body": "## \ud83d\udc1b Bug\r\n\r\n\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1.I i updated pytorch version and ctc\uff0cuse pytorch_nightly, but in my train ,nn.CTCloss() is still zero,so,i would like to ask if the version pytorch(nightly) has been solved this problem\r\n1.\r\n1.\r\n\r\n\r\n\r\n## Expected behavior\r\n\r\n\r\n\r\n## Environment\r\n\r\nPlease copy and paste the output from our\r\n[environment collection script](https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py)\r\n(or fill out the checklist below manually).\r\n\r\nYou can get the script and run it with:\r\n```\r\nwget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py\r\n# For security purposes, please check the contents of collect_env.py before running it.\r\npython collect_env.py\r\n```\r\n\r\n - PyTorch Version (e.g., 1.0):\r\n - OS (e.g., Linux):\r\n - How you installed PyTorch (`conda`, `pip`, source):\r\n - Build command you used (if compiling from source):\r\n - Python version:\r\n - CUDA/cuDNN version:\r\n - GPU models and configuration:\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/pytorch/pytorch/commit/0c091380cc03b23e68dde7368f3b910c21deb010", "file_loc": {"base_commit": "0c091380cc03b23e68dde7368f3b910c21deb010", "files": [{"path": "aten/src/ATen/native/LossCTC.cpp", "status": "modified", "Loc": {"(None, 'ctc_loss', 341)": {"mod": [367]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code\nC"}, "loctype": {"code": ["aten/src/ATen/native/LossCTC.cpp"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "xtekky", "repo_name": "gpt4free", "base_commit": "5d08c7201fa5b4641f4277bf248c944b2c297b94", "iss_html_url": "https://github.com/xtekky/gpt4free/issues/843", "iss_label": "bug", "title": "permission denied", "body": "**Bug description**\r\nI'm running basic G4F code from README:\r\n```\r\nimport g4f\r\n\r\n\r\nresponse = g4f.ChatCompletion.create(\r\n model=g4f.models.gpt_4,\r\n messages=[{\"role\": \"user\", \"content\": \"hi\"}],\r\n) # alterative model setting\r\n\r\nprint(response)\r\n```\r\n\r\nError:\r\n```\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\IonE\\Desktop\\main.py\", line 3, in \r\n import g4f\r\n File \"D:\\Program Files\\Python399\\lib\\site-packages\\g4f\\__init__.py\", line 1, in \r\n from . 
import models\r\n File \"D:\\Program Files\\Python399\\lib\\site-packages\\g4f\\models.py\", line 3, in \r\n from .Provider import Bard, BaseProvider, GetGpt, H2o, Liaobots, Vercel, Equing\r\n File \"D:\\Program Files\\Python399\\lib\\site-packages\\g4f\\Provider\\__init__.py\", line 6, in \r\n from .Bard import Bard\r\n File \"D:\\Program Files\\Python399\\lib\\site-packages\\g4f\\Provider\\Bard.py\", line 11, in \r\n class Bard(AsyncProvider):\r\n File \"D:\\Program Files\\Python399\\lib\\site-packages\\g4f\\Provider\\Bard.py\", line 22, in Bard\r\n cookies: dict = get_cookies(\".google.com\"),\r\n File \"D:\\Program Files\\Python399\\lib\\site-packages\\g4f\\Provider\\base_provider.py\", line 45, in get_cookies\r\n for cookie in browser_cookie3.load(cookie_domain):\r\n File \"D:\\Program Files\\Python399\\lib\\site-packages\\browser_cookie3\\__init__.py\", line 1233, in load\r\n for cookie in cookie_fn(domain_name=domain_name):\r\n File \"D:\\Program Files\\Python399\\lib\\site-packages\\browser_cookie3\\__init__.py\", line 1160, in chrome\r\n return Chrome(cookie_file, domain_name, key_file).load()\r\n File \"D:\\Program Files\\Python399\\lib\\site-packages\\browser_cookie3\\__init__.py\", line 489, in load\r\n with _DatabaseConnetion(self.cookie_file) as con:\r\n File \"D:\\Program Files\\Python399\\lib\\site-packages\\browser_cookie3\\__init__.py\", line 349, in __enter__\r\n return self.get_connection()\r\n File \"D:\\Program Files\\Python399\\lib\\site-packages\\browser_cookie3\\__init__.py\", line 383, in get_connection\r\n con = method()\r\n File \"D:\\Program Files\\Python399\\lib\\site-packages\\browser_cookie3\\__init__.py\", line 374, in __get_connection_legacy\r\n shutil.copyfile(self.__database_file, self.__temp_cookie_file)\r\n File \"D:\\Program Files\\Python399\\lib\\shutil.py\", line 264, in copyfile\r\n with open(src, 'rb') as fsrc:\r\nPermissionError: [Errno 13] Permission denied: 'C:\\\\Users\\\\IonE\\\\AppData\\\\Roaming\\\\..\\\\Local\\\\Google\\\\Chrome\\\\User Data\\\\Default\\\\Network\\\\Cookies'\r\n```\r\n\r\n**Environement**\r\n- python 3.9.9\r\n- ukraine", "code": null, "pr_html_url": "https://github.com/xtekky/gpt4free/pull/847", "commit_html_url": null, "file_loc": {"base_commit": "5d08c7201fa5b4641f4277bf248c944b2c297b94", "files": [{"path": "g4f/Provider/Bard.py", "status": "modified", "Loc": {"('Bard', 'create_async', 17)": {"add": [33], "mod": [22]}}}, {"path": "g4f/Provider/Bing.py", "status": "modified", "Loc": {"('Bing', 'create_async_generator', 21)": {"add": [34], "mod": [24]}}}, {"path": "g4f/Provider/Hugchat.py", "status": "modified", "Loc": {"('Hugchat', 'create_completion', 17)": {"add": [25], "mod": [23]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["g4f/Provider/Bing.py", "g4f/Provider/Hugchat.py", "g4f/Provider/Bard.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "xtekky", "repo_name": "gpt4free", "base_commit": "3430b04f870d982d7fba34e3b9d6e5cf3bd3b847", "iss_html_url": "https://github.com/xtekky/gpt4free/issues/1003", "iss_label": "bug", "title": "please delete site chat.aivvm.com", "body": "**Known Issues** // delete this\r\nplease delete site `chat.aivvm.com`\r\n\r\n**Delete site description**\r\nGpt4free maintainers, I am the administrator of chat.aivvm.com and request to delete this site. 
My website is already under high load\r\n\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/xtekky/gpt4free/commit/3430b04f870d982d7fba34e3b9d6e5cf3bd3b847", "file_loc": {"base_commit": "3430b04f870d982d7fba34e3b9d6e5cf3bd3b847", "files": [{"path": "README.md", "status": "modified", "Loc": {"(None, None, 218)": {"mod": [218]}, "(None, None, 281)": {"mod": [281]}, "(None, None, 374)": {"mod": [374]}}}, {"path": "etc/testing/test_chat_completion.py", "status": "modified", "Loc": {"(None, 'run_async', 19)": {"mod": [22]}}}, {"path": "g4f/Provider/__init__.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [9]}}}, {"path": "g4f/Provider/Aivvm.py", "status": "renamed", "Loc": {"(None, None, None)": {"mod": [3, 4, 5, 21, 22]}}}, {"path": "g4f/Provider/deprecated/__init__.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [14]}}}, {"path": "g4f/models.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [23, 47, 56, 66, 170, 171, 172, 173, 178, 179, 183, 184, 188, 189]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3\n\u7528\u6237\u8bf7\u6c42\u9879\u76ee\u5220\u9664\u81ea\u5df1\u7f51\u5740", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["g4f/Provider/Aivvm.py", "g4f/Provider/deprecated/__init__.py", "g4f/Provider/__init__.py", "g4f/models.py"], "doc": ["README.md"], "test": ["etc/testing/test_chat_completion.py"], "config": [], "asset": []}}, {"organization": "Z4nzu", "repo_name": "hackingtool", "base_commit": "0a4faeac9c4f93a61c937b0e57023b693beeca6f", "iss_html_url": "https://github.com/Z4nzu/hackingtool/issues/188", "iss_label": "fixed", "title": "can not run hackingtool !!!!! SyntaxError: invalid syntax ???", "body": "# python3 ./hackingtool.py\r\n\r\nTraceback (most recent call last):\r\n File \"/root/hackingtool/./hackingtool.py\", line 11, in \r\n from tools.ddos import DDOSTools\r\n File \"/root/hackingtool/tools/ddos.py\", line 29\r\n \"sudo\", \"python3 ddos\", method, url, socks_type5.4.1, threads, proxylist, multiple, timer])\r\n ^\r\nSyntaxError: invalid syntax\r\n\r\n__________________________________________________________________________________\r\n\r\n\r\n", "code": null, "pr_html_url": "https://github.com/Z4nzu/hackingtool/pull/176", "commit_html_url": null, "file_loc": {"base_commit": "0a4faeac9c4f93a61c937b0e57023b693beeca6f", "files": [{"path": "tools/ddos.py", "status": "modified", "Loc": {"('ddos', 'run', 20)": {"mod": [29]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["tools/ddos.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "Z4nzu", "repo_name": "hackingtool", "base_commit": "0a4faeac9c4f93a61c937b0e57023b693beeca6f", "iss_html_url": "https://github.com/Z4nzu/hackingtool/issues/177", "iss_label": "", "title": "Help me please", "body": "Traceback (most recent call last):\r\n File \"/usr/share/doc/hackingtool/hackingtool.py\", line 11, in \r\n from tools.ddos import DDOSTools\r\n File \"/usr/share/doc/hackingtool/tools/ddos.py\", line 29\r\n \"sudo\", \"python3 ddos\", method, url, socks_type5.4.1, threads, proxylist, multiple, timer])\r\n ^\r\nSyntaxError: invalid syntax\r\n", "code": null, "pr_html_url": "https://github.com/Z4nzu/hackingtool/pull/176", "commit_html_url": null, "file_loc": {"base_commit": 
"0a4faeac9c4f93a61c937b0e57023b693beeca6f", "files": [{"path": "tools/ddos.py", "status": "modified", "Loc": {"('ddos', 'run', 20)": {"mod": [29]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["tools/ddos.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "Z4nzu", "repo_name": "hackingtool", "base_commit": "0a4faeac9c4f93a61c937b0e57023b693beeca6f", "iss_html_url": "https://github.com/Z4nzu/hackingtool/issues/189", "iss_label": "", "title": "look", "body": "![image](https://user-images.githubusercontent.com/93758292/155468650-b9d57e21-6c82-4005-a3ee-1783699e7f11.png)\r\n", "code": null, "pr_html_url": "https://github.com/Z4nzu/hackingtool/pull/176", "commit_html_url": null, "file_loc": {"base_commit": "0a4faeac9c4f93a61c937b0e57023b693beeca6f", "files": [{"path": "tools/ddos.py", "status": "modified", "Loc": {"('ddos', 'run', 20)": {"mod": [29]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["tools/ddos.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "Z4nzu", "repo_name": "hackingtool", "base_commit": "0a4faeac9c4f93a61c937b0e57023b693beeca6f", "iss_html_url": "https://github.com/Z4nzu/hackingtool/issues/209", "iss_label": "", "title": "Running Issue", "body": "**Describe the bug**\nA clear and concise description of what the bug is.\n\nI have installed this tool successfully though when i go to run it with or it comes up with the error I have attached as a screenshot. Why would this be happening?\n\n**To Reproduce**\nSteps to reproduce the behavior:\n1. Go to '...'\n2. Click on '....'\n3. Scroll down to '....'\n4. See error\n\n**Expected behavior**\nA clear and concise description of what you expected to happen.\n\n**Screenshots**\nIf applicable, add screenshots to help explain your problem.\n![image](https://user-images.githubusercontent.com/13176339/161368818-26bf3219-9ba7-4bab-a451-65d379c6d405.jpeg)\n\n**Desktop (please complete the following information):**\n - OS: Kali\n - Browser [e.g. chrome, safari]\n - Version [e.g. 22]\n\n**Smartphone (please complete the following information):**\n - Device: rpi4\n - OS: [e.g. iOS8.1]\n - Browser [e.g. stock browser, safari]\n - Version [e.g. 
22]\n\n**Additional context**\nAdd any other context about the problem here.", "code": null, "pr_html_url": "https://github.com/Z4nzu/hackingtool/pull/176", "commit_html_url": null, "file_loc": {"base_commit": "0a4faeac9c4f93a61c937b0e57023b693beeca6f", "files": [{"path": "tools/ddos.py", "status": "modified", "Loc": {"('ddos', 'run', 20)": {"mod": [29]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["tools/ddos.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "Z4nzu", "repo_name": "hackingtool", "base_commit": "0a4faeac9c4f93a61c937b0e57023b693beeca6f", "iss_html_url": "https://github.com/Z4nzu/hackingtool/issues/185", "iss_label": "fixed", "title": "SyntaxError: invalid syntax traceback most recent call last", "body": "\u250c\u2500\u2500(root\ud83d\udc80localhost)-[~]\r\n\u2514\u2500# hackingtool\r\nTraceback (most recent call last):\r\n File \"/usr/share/doc/hackingtool/hackingtool.py\", line 11, in \r\n from tools.ddos import DDOSTools\r\n File \"/usr/share/doc/hackingtool/tools/ddos.py\", line 29\r\n \"sudo\", \"python3 ddos\", method, url, socks_type5.4.1, threads, proxylist, multiple, timer])\r\n ^\r\nSyntaxError: invalid syntax\r\n\r\n\r\n\r\n\r\nthis happens when i type in \"hackingtool\" in terminal. any fixes?", "code": null, "pr_html_url": "https://github.com/Z4nzu/hackingtool/pull/176", "commit_html_url": null, "file_loc": {"base_commit": "0a4faeac9c4f93a61c937b0e57023b693beeca6f", "files": [{"path": "tools/ddos.py", "status": "modified", "Loc": {"('ddos', 'run', 20)": {"mod": [29]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["tools/ddos.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "Z4nzu", "repo_name": "hackingtool", "base_commit": "0a4faeac9c4f93a61c937b0e57023b693beeca6f", "iss_html_url": "https://github.com/Z4nzu/hackingtool/issues/187", "iss_label": "fixed", "title": "from tools.ddos import DDOSTools", "body": "# sudo hackingtool\r\nTraceback (most recent call last):\r\n File \"/usr/share/doc/hackingtool/hackingtool.py\", line 11, in \r\n from tools.ddos import DDOSTools\r\n File \"/usr/share/doc/hackingtool/tools/ddos.py\", line 29\r\n \"sudo\", \"python3 ddos\", method, url, socks_type5.4.1, threads, proxylist, multiple, timer])\r\n ^\r\nSyntaxError: invalid syntax\r\n", "code": null, "pr_html_url": "https://github.com/Z4nzu/hackingtool/pull/176", "commit_html_url": null, "file_loc": {"base_commit": "0a4faeac9c4f93a61c937b0e57023b693beeca6f", "files": [{"path": "tools/ddos.py", "status": "modified", "Loc": {"('ddos', 'run', 20)": {"mod": [29]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["tools/ddos.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "bf0886bae0ccbc8c5d285b6e2affe7e40474f970", "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/16924", "iss_label": "Bug\nEasy\nmodule:metrics", "title": "Matthews correlation coefficient metric throws misleading division by zero RuntimeWarning", "body": "#### Description\r\nWith tested values all equal, 
`sklearn.metrics.matthews_corrcoef` throws a `RuntimeWarning` reporting a division by zero. This behavior was already reported in #1937 and reported fixed, but reappears in recent versions.\r\n\r\n#### Steps/Code to Reproduce\r\nThe snippet below reproduces the warning.\r\n```python\r\nimport sklearn.metrics \r\ntrues = [1,0,1,1,0] \r\npreds = [0,0,0,0,0] \r\nsklearn.metrics.matthews_corrcoef(trues, preds)\r\n```\r\n\r\n#### Expected Results\r\nNo warning is thrown.\r\n\r\n#### Actual Results\r\nThe following warning is thrown:\r\n```\r\nC:\\anaconda\\envs\\sklearn-test\\lib\\site-packages\\sklearn\\metrics\\_classification.py:900: RuntimeWarning: invalid value encountered in double_scalars\r\n mcc = cov_ytyp / np.sqrt(cov_ytyt * cov_ypyp)\r\n```\r\n\r\n#### Versions\r\n```\r\nSystem:\r\n python: 3.8.2 (default, Mar 25 2020, 08:56:29) [MSC v.1916 64 bit (AMD64)]\r\nexecutable: C:\\anaconda\\envs\\sklearn-test\\python.exe\r\n machine: Windows-10-10.0.18362-SP0\r\n\r\nPython dependencies:\r\n pip: 20.0.2\r\nsetuptools: 46.1.3.post20200330\r\n sklearn: 0.22.1\r\n numpy: 1.18.1\r\n scipy: 1.4.1\r\n Cython: None\r\n pandas: None\r\nmatplotlib: None\r\n joblib: 0.14.1\r\n```\r\n", "code": null, "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/19977", "commit_html_url": null, "file_loc": {"base_commit": "bf0886bae0ccbc8c5d285b6e2affe7e40474f970", "files": [{"path": "sklearn/metrics/_classification.py", "status": "modified", "Loc": {"(None, 'matthews_corrcoef', 800)": {"mod": [881, 883, 886]}}}, {"path": "sklearn/metrics/tests/test_classification.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [23, 625]}, "(None, 'test_matthews_corrcoef', 671)": {"mod": [687, 688, 690, 691, 694, 696, 697]}, "(None, 'test_matthews_corrcoef_multiclass', 713)": {"mod": [734, 737, 738, 739, 757, 761, 762, 763, 765, 766]}}}, {"path": "sklearn/utils/_testing.py", "status": "modified", "Loc": {"(None, 'assert_warns_div0', 190)": {"mod": [190, 191, 193, 194, 196, 197, 198, 199, 200, 201, 203, 204, 205, 206, 207, 208, 209, 210, 211]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["sklearn/metrics/_classification.py", "sklearn/utils/_testing.py"], "doc": [], "test": ["sklearn/metrics/tests/test_classification.py"], "config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "4ac6a90a82e4a8d7b5338c18ae8a16559c98ba10", "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/5101", "iss_label": "", "title": "LatentDirichletAllocation has superfluous attributes", "body": "It has `dirichlet_component_` (undocumented) and `exp_dirichlet_component_` (exponential of same). 
I propose to get rid of at least the latter.\n", "code": null, "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/5111", "commit_html_url": null, "file_loc": {"base_commit": "4ac6a90a82e4a8d7b5338c18ae8a16559c98ba10", "files": [{"path": "sklearn/decomposition/online_lda.py", "status": "modified", "Loc": {"('LatentDirichletAllocation', '_approx_bound', 542)": {"add": [579], "mod": [597, 612]}, "('LatentDirichletAllocation', '_init_latent_vars', 283)": {"mod": [305, 306, 308]}, "('LatentDirichletAllocation', '_em_step', 366)": {"mod": [407, 408]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1\nSuperfluous options", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["sklearn/decomposition/online_lda.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "05123af1b2f8db1bc4f05c22515ef378cbeefbd3", "iss_html_url": "https://github.com/pandas-dev/pandas/issues/76", "iss_label": "Bug", "title": "Sparse cumsum functions do not work", "body": "e.g. SparseSeries.cumsum\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/wesm/pandas/commit/05123af1b2f8db1bc4f05c22515ef378cbeefbd3", "file_loc": {"base_commit": "05123af1b2f8db1bc4f05c22515ef378cbeefbd3", "files": [{"path": "pandas/core/frame.py", "status": "modified", "Loc": {"('DataFrame', None, 97)": {"mod": [1962, 1963, 1964, 1966, 1967, 1968, 1969, 1971, 1972, 1973, 1974, 1975, 1976, 1977, 1978, 1979, 1980, 1981, 1982, 1983, 1984, 1985, 2021, 2022, 2023, 2025, 2026, 2027, 2028, 2030, 2031, 2032, 2033, 2034, 2035, 2036, 2037, 2038, 2039, 2041, 2043]}}}, {"path": "pandas/core/generic.py", "status": "modified", "Loc": {"('PandasGeneric', '_reindex_axis', 162)": {"add": [168]}}}, {"path": "pandas/core/series.py", "status": "modified", "Loc": {"('Series', 'cumsum', 570)": {"mod": [580, 581, 582, 583, 584, 585, 591, 592, 593, 594]}}}, {"path": "pandas/core/sparse.py", "status": "modified", "Loc": {"('SparseSeries', None, 152)": {"add": [512]}, "('SparseDataFrame', 'count', 1058)": {"add": [1059]}, "(None, None, None)": {"mod": [13]}}}, {"path": "pandas/tests/test_frame.py", "status": "modified", "Loc": {"('TestDataFrame', None, 539)": {"add": [2271]}, "('TestDataFrame', 'test_cumsum', 2271)": {"add": [2276, 2283, 2284], "mod": [2273, 2274, 2286, 2287]}}}, {"path": "pandas/tests/test_sparse.py", "status": "modified", "Loc": {"('TestSparseSeries', None, 111)": {"add": [577]}, "('TestSparseDataFrame', 'test_count', 1065)": {"add": [1068]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["pandas/core/frame.py", "pandas/core/generic.py", "pandas/core/series.py", "pandas/core/sparse.py"], "doc": [], "test": ["pandas/tests/test_frame.py", "pandas/tests/test_sparse.py"], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "65c0441a41b2dcaeebb648274d30978419a8661a", "iss_html_url": "https://github.com/pandas-dev/pandas/issues/16607", "iss_label": "Datetime\nCompat", "title": "to_datetime should support ISO week year", "body": "`to_datetime` does not currently seem to support `ISO week year` like `strptime` does:\r\n\r\n```\r\nIn [38]: datetime.date(2016, 1, 1).strftime('%G-%V')\r\nOut[38]: '2015-53'\r\n\r\nIn [39]: datetime.datetime.strptime(datetime.date(2016,
1, 1).strftime('%G-%V')+'-1', '%G-%V-%u')\r\nOut[39]: datetime.datetime(2015, 12, 28, 0, 0)\r\n\r\nIn [41]: pd.to_datetime(datetime.date(2016, 1, 1).strftime('%G-%V')+'-1', format='%G-%V-%u')\r\n ---------------------------------------------------------------------------\r\n TypeError Traceback (most recent call last)\r\n /Users/Robin/.pyenv/versions/3.6.1/lib/python3.6/site-packages/pandas/core/tools/datetimes.py in _convert_listlike(arg, box, format, name, tz)\r\n 443 try:\r\n --> 444 values, tz = tslib.datetime_to_datetime64(arg)\r\n 445 return DatetimeIndex._simple_new(values, name=name, tz=tz)\r\n\r\n pandas/_libs/tslib.pyx in pandas._libs.tslib.datetime_to_datetime64 (pandas/_libs/tslib.c:33275)()\r\n\r\n TypeError: Unrecognized value type: \r\n\r\n During handling of the above exception, another exception occurred:\r\n\r\n ValueError Traceback (most recent call last)\r\n in ()\r\n ----> 1 pd.to_datetime(datetime.date(2016, 1, 1).strftime('%G-%V')+'-1', format='%G-%V-%u')\r\n\r\n /Users/Robin/.pyenv/versions/3.6.1/lib/python3.6/site-packages/pandas/core/tools/datetimes.py in to_datetime(arg, errors, dayfirst, yearfirst, utc, box, format, exact, unit, infer_datetime_format, origin)\r\n 516 result = _convert_listlike(arg, box, format)\r\n 517 else:\r\n --> 518 result = _convert_listlike(np.array([arg]), box, format)[0]\r\n 519 \r\n 520 return result\r\n\r\n /Users/Robin/.pyenv/versions/3.6.1/lib/python3.6/site-packages/pandas/core/tools/datetimes.py in _convert_listlike(arg, box, format, name, tz)\r\n 445 return DatetimeIndex._simple_new(values, name=name, tz=tz)\r\n 446 except (ValueError, TypeError):\r\n --> 447 raise e\r\n 448 \r\n 449 if arg is None:\r\n\r\n /Users/Robin/.pyenv/versions/3.6.1/lib/python3.6/site-packages/pandas/core/tools/datetimes.py in _convert_listlike(arg, box, format, name, tz)\r\n 412 try:\r\n 413 result = tslib.array_strptime(arg, format, exact=exact,\r\n --> 414 errors=errors)\r\n 415 except tslib.OutOfBoundsDatetime:\r\n 416 if errors == 'raise':\r\n\r\n pandas/_libs/tslib.pyx in pandas._libs.tslib.array_strptime (pandas/_libs/tslib.c:63124)()\r\n\r\n pandas/_libs/tslib.pyx in pandas._libs.tslib.array_strptime (pandas/_libs/tslib.c:63003)()\r\n\r\n ValueError: 'G' is a bad directive in format '%G-%V-%u'\r\n\r\n```\r\n\r\n
      \r\nINSTALLED VERSIONS\r\n------------------\r\ncommit: None\r\n\r\npandas: 0.20.1\r\npytest: 3.1.0\r\npip: 9.0.1\r\nsetuptools: 28.8.0\r\nCython: 0.25.2\r\nnumpy: 1.12.1\r\nscipy: 0.19.0\r\nxarray: None\r\nIPython: 6.0.0\r\nsphinx: None\r\npatsy: 0.4.1\r\ndateutil: 2.6.0\r\npytz: 2017.2\r\nblosc: None\r\nbottleneck: None\r\ntables: 3.4.2\r\nnumexpr: 2.6.2\r\nfeather: None\r\nmatplotlib: 2.0.2\r\nopenpyxl: None\r\nxlrd: None\r\nxlwt: None\r\nxlsxwriter: None\r\nlxml: None\r\nbs4: None\r\nhtml5lib: 0.999999999\r\nsqlalchemy: 1.1.10\r\npymysql: None\r\npsycopg2: 2.7.1 (dt dec pq3 ext lo64)\r\njinja2: 2.9.6\r\ns3fs: None\r\npandas_gbq: None\r\npandas_datareader: None\r\n
      \r\n", "code": null, "pr_html_url": "https://github.com/pandas-dev/pandas/pull/25541", "commit_html_url": null, "file_loc": {"base_commit": "65c0441a41b2dcaeebb648274d30978419a8661a", "files": [{"path": "doc/source/whatsnew/v0.25.0.rst", "status": "modified", "Loc": {"(None, None, 21)": {"add": [21]}}}, {"path": "pandas/_libs/tslibs/strptime.pyx", "status": "modified", "Loc": {"(None, None, 79)": {"add": [79]}, "(None, None, 171)": {"add": [171]}, "(None, None, 267)": {"add": [267]}, "(None, None, 513)": {"add": [513]}, "(None, None, 520)": {"add": [520]}, "(None, None, 521)": {"add": [521]}, "(None, None, 622)": {"add": [622]}, "(None, None, 57)": {"mod": [57]}, "(None, None, 178)": {"mod": [178]}, "(None, None, 271)": {"mod": [271, 272, 273, 274]}, "(None, None, 596)": {"mod": [596, 597]}, "(None, None, 600)": {"mod": [600]}}}, {"path": "pandas/core/tools/datetimes.py", "status": "modified", "Loc": {"(None, 'to_datetime', 403)": {"add": [457]}}}, {"path": "pandas/tests/indexes/datetimes/test_tools.py", "status": "modified", "Loc": {"('TestToDatetime', None, 246)": {"add": [246]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code/Doc"}, "loctype": {"code": ["pandas/core/tools/datetimes.py", "pandas/_libs/tslibs/strptime.pyx"], "doc": ["doc/source/whatsnew/v0.25.0.rst"], "test": ["pandas/tests/indexes/datetimes/test_tools.py"], "config": [], "asset": []}}, {"organization": "pallets", "repo_name": "flask", "base_commit": "6ed68f015a50ab35b84a8ea71b0f846ca6a75281", "iss_has_pr": 1, "iss_html_url": "https://github.com/pallets/flask/issues/3074", "iss_label": "", "title": "send_file doesn't urlencode ':/' in unicode attachment_filename", "body": "### Expected Behavior\r\n\r\nWhen sending files with unicode filename (with `:` or `/`) they should be downloaded with name from `filename*` field.\r\n\r\n```python\r\n# -*- coding: utf-8 -*-\r\nimport os\r\nfrom flask import Flask, send_from_directory\r\napp = Flask(__name__)\r\n@app.route('/test/', methods=['GET'])\r\ndef test_route():\r\n tmp_dir = os.getcwd()\r\n tmp_filename = __file__\r\n attachment_filename = u'\u0442\u0435\u0441\u0442:\u0442\u0435\u0441\u0442_\u0442\u0435\u0441\u0442.py'\r\n return send_from_directory(\r\n tmp_dir,\r\n tmp_filename,\r\n as_attachment=True,\r\n attachment_filename=attachment_filename\r\n )\r\nif __name__ == '__main__':\r\n app.run(host='::', port=5000)\r\n```\r\n### Actual Behavior\r\n\r\nSome browsers (Chrome-based/Safari) ignore `filename*` field when it contains colon or slash. 
For example file `\u0442\u0435\u0441\u0442:\u0442\u0435\u0441\u0442_\u0442\u0435\u0441\u0442.py` gets downloaded in Chrome/Safari as `__.py` but in Firefox as `\u0442\u0435\u0441\u0442_\u0442\u0435\u0441\u0442_\u0442\u0435\u0441\u0442.py` which is acceptable in my opinion.\r\n\r\nFlask response:\r\n`Content-Disposition: attachment; filename*=\"UTF-8''%D1%82%D0%B5%D1%81%D1%82:%D1%82%D0%B5%D1%81%D1%82_%D1%82%D0%B5%D1%81%D1%82.py\"; filename=\":_.py\"`\r\n\r\n### Environment\r\n\r\n* Python version: 2.7.15\r\n* Flask version: 1.0.2\r\n* Werkzeug version: 0.14.1\r\n", "pr_html_url": "https://github.com/pallets/flask/pull/3273", "file_loc": {"base_commit": "6ed68f015a50ab35b84a8ea71b0f846ca6a75281", "files": [{"path": "CHANGES.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [10]}}}, {"path": "flask/helpers.py", "status": "modified", "Loc": {"(None, 'send_file', 454)": {"mod": [579]}}}, {"path": "tests/test_helpers.py", "status": "modified", "Loc": {"('TestSendfile', None, 436)": {"add": [648]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["flask/helpers.py"], "doc": ["CHANGES.rst"], "test": ["tests/test_helpers.py"], "config": [], "asset": []}}, {"organization": "pallets", "repo_name": "flask", "base_commit": "50b7dcbab343c93bb6738bbf116a177e72b1d9ec", "iss_has_pr": 1, "iss_html_url": "https://github.com/pallets/flask/issues/4099", "iss_label": "docs", "title": "Harmless race condition in tutorial", "body": "I was browsing the flaskr tutorial when I noticed an (admittedly quite unlikely) race condition in the `register` view, specifically:\r\n\r\n```py\r\nif not username:\r\n error = 'Username is required.'\r\nelif not password:\r\n error = 'Password is required.'\r\nelif db.execute(\r\n 'SELECT id FROM user WHERE username = ?', (username,)\r\n).fetchone() is not None:\r\n error = f\"User {username} is already registered.\"\r\n\r\nif error is None:\r\n db.execute(\r\n 'INSERT INTO user (username, password) VALUES (?, ?)',\r\n (username, generate_password_hash(password))\r\n )\r\n db.commit()\r\n return redirect(url_for('auth.login'))\r\n```\r\n\r\nIf two requests arrive with the right timing, the following can happen:\r\n\r\n```\r\n Request 1: Request 2:\r\nSELECT id\r\n FROM user\r\n WHERE username = abc\r\n |\r\n v\r\nempty, no such user\r\n\r\n SELECT id\r\n FROM user\r\n WHERE username = abc\r\n |\r\n v\r\n empty, no such user\r\n\r\nINSERT INTO user (username, password)\r\n VALUES (abc, 123)\r\n |\r\n v\r\n ok\r\n\r\n INSERT INTO user (username, password)\r\n VALUES (abc, 456)\r\n |\r\n v\r\n failed UNIQUE constraint -> \r\n -> sqlite3.IntegrityError ->\r\n -> user gets HTTP 500\r\n```\r\n\r\nWhile the likelihood of this happening is pretty small and the harm practically zero (user gets HTTP 500 and has to manually login/choose a different username), I feel like this is not really the sort of good practice the tutorial should teach. 
I also believe it's important the developer understands that it's the UNIQUE constraint that ensures their app works correctly and not the if condition in the application code (the tutorial mentions SQL injection attacks and explains what protects the developer against them, so I don't really feel this is out of scope).\r\n\r\nIn my own app I've modified the code to the following:\r\n```py\r\nif not username:\r\n error = 'Username is required.'\r\nelif not password:\r\n error = 'Password is required.'\r\nelse:\r\n try:\r\n db.execute(\r\n 'INSERT INTO users (username, password) VALUES (?, ?)',\r\n (username, generate_password_hash(password))\r\n )\r\n db.commit()\r\n except IntegrityError:\r\n error = f\"User {username} is already registered.\"\r\n else:\r\n return redirect(url_for('auth.login'))\r\n```\r\n\r\nI suggest something similar be incorporated into the tutorial, with a short explanation (maybe a comment) of how the UNIQUE constraint does the work for the developer and maybe a note about the principle that one should \"ask forgiveness, not permission.\" I'm not sure on how it's better worded, so I'm making this an issue instead of a pull request.\r\n\r\nCheers, and thank you for your great work!", "pr_html_url": "https://github.com/pallets/flask/pull/4139", "file_loc": {"base_commit": "50b7dcbab343c93bb6738bbf116a177e72b1d9ec", "files": [{"path": "docs/tutorial/views.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [202], "mod": [94, 95, 96, 97, 100, 101, 102, 103, 104, 105, 128, 129, 130, 131, 132, 133, 134, 136, 137, 138, 139, 141, 142, 143, 144, 145, 146, 147]}}}, {"path": "examples/tutorial/flaskr/auth.py", "status": "modified", "Loc": {"(None, 'register', 47)": {"mod": [63, 64, 65, 66, 67, 70, 71, 72, 73, 74, 75, 76, 77]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["examples/tutorial/flaskr/auth.py"], "doc": ["docs/tutorial/views.rst"], "test": [], "config": [], "asset": []}}, {"organization": "pallets", "repo_name": "flask", "base_commit": "f17e6061fcffdc290f615d3fdc9d949e9e719574", "iss_has_pr": 1, "iss_html_url": "https://github.com/pallets/flask/issues/1443", "iss_label": "", "title": "json_encoder not invoked from flask.jsonify", "body": "I created a custom JSON encoder class extended from flask.json.JSONEncoder but it is not called when calling flask.jsonify. 
Additionally, I removed my custom JSON encoder and confirmed that flask.json.JSONEncoder isn't called either via a break statement in Pycharm.\n\n```\nfrom flask import Flask\nfrom flask import jsonify\nfrom flask.json import JSONEncoder\n\nclass MyEncoder(JSONEncoder):\n def default(self, obj):\n if hasattr(obj, '__json__'):\n return obj.__json__()\n else:\n try:\n iterable = iter(obj)\n except TypeError:\n pass\n else:\n return list(iterable)\n\n return JSONEncoder.default(self, obj)\n\n\nclass MyClass(object):\n key = 'a'\n value = 'b'\n\n def __json__(self):\n return {'key': self.key, 'value': self.value}\n\napp = Flask(__name__)\napp.json_encoder = MyEncoder\n\n@app.route('/')\ndef hello_world():\n return jsonify(MyClass())\n\n\nif __name__ == '__main__':\n app.run(debug=True)\n```\n", "pr_html_url": "https://github.com/pallets/flask/pull/1671", "file_loc": {"base_commit": "f17e6061fcffdc290f615d3fdc9d949e9e719574", "files": [{"path": "AUTHORS", "status": "modified", "Loc": {"(None, None, None)": {"add": [17, 20], "mod": [35]}}}, {"path": "CHANGES", "status": "modified", "Loc": {"(None, None, None)": {"add": [10]}}}, {"path": "docs/security.rst", "status": "modified", "Loc": {"(None, None, None)": {"mod": [98, 100, 101, 102, 103, 105, 106, 107, 108, 109, 111, 112, 113, 114, 115, 116, 117, 119, 120, 121, 122, 123, 125, 127, 128, 129, 130, 132, 133, 134, 135, 137, 138, 140, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 161, 162, 163, 164, 165, 166, 167, 168, 170, 171, 172, 173, 174, 175]}}}, {"path": "flask/json.py", "status": "modified", "Loc": {"(None, 'jsonify', 201)": {"add": [244], "mod": [202, 203, 204, 205, 225, 226, 248, 249]}}}, {"path": "tests/test_helpers.py", "status": "modified", "Loc": {"('TestJSON', 'test_json_as_unicode', 121)": {"add": [122], "mod": [124, 125, 126, 127, 129, 130, 131, 132]}, "('TestJSON', None, 32)": {"mod": [34, 35, 37, 38, 39, 40, 42, 43, 45, 46, 47, 48, 49, 50, 106, 107, 121]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["flask/json.py"], "doc": ["docs/security.rst", "CHANGES"], "test": ["tests/test_helpers.py"], "config": [], "asset": ["AUTHORS"]}}, {"organization": "pallets", "repo_name": "flask", "base_commit": "f808c20139649b747f604492bc33b61a7dd3e13a", "iss_has_pr": 1, "iss_html_url": "https://github.com/pallets/flask/issues/2731", "iss_label": "bug\nblueprints", "title": "Flask 1.0 backwards-incompat with double-slash/no-slash re. 
#2629", "body": "This is a major backwards-compat breaking change, but I suspect not the intended design and hopefully easy to fix.\r\n\r\nThe issue is related to PR #2629, and this example follows from that:\r\n\r\nGiven blueprint `bp` and app `app`:\r\n\r\n```python\r\n@bp.route('b/')\r\ndef tmp():\r\n return \"URI should be '/a/b/\"\r\n\r\napp.register_blueprint(bp, url_prefix='/a/')\r\n```\r\n\r\nIn Flask 0.12 the URL is correctly `/a/b`, but in Flask 1.0 it's `/ab`.\r\n\r\nSince issue #2629 relates to resolve double-slashes, I imagine this is a bug (and not a design decision) - and the correct solution would be to remove a slash only when there are two.\r\n", "pr_html_url": "https://github.com/pallets/flask/pull/2629", "file_loc": {"base_commit": "f808c20139649b747f604492bc33b61a7dd3e13a", "files": [{"path": "CHANGES.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [145, 188]}}}, {"path": "flask/blueprints.py", "status": "modified", "Loc": {"('BlueprintSetupState', '__init__', 25)": {"add": [55]}}}, {"path": "tests/test_blueprints.py", "status": "modified", "Loc": {"(None, 'test_blueprint_url_definitions', 117)": {"mod": [117]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["flask/blueprints.py"], "doc": ["CHANGES.rst"], "test": ["tests/test_blueprints.py"], "config": [], "asset": []}}, {"organization": "pallets", "repo_name": "flask", "base_commit": "22708b048d224a5590fa28d86ca02bac52294f90", "iss_has_pr": 1, "iss_html_url": "https://github.com/pallets/flask/issues/2594", "iss_label": "cli", "title": "add ssl_context option to `flask run`", "body": "### Expected Behaviour\r\n\r\nI expect to be able to pass the `flask run` command any of the options which are valid for the `Flask.run()` method:\r\n\r\n```sh\r\n$ FLASK_APP=myapp/run.py FLASK_DEBUG=1 flask run --host=0.0.0.0 --ssl_context=adhoc\r\n* Running on https://0.0.0.0:5000/ (Press CTRL+C to quit)\r\n```\r\n\r\nSpecifically, I want to pass `ssl_context=adhoc`, but it seems sensible to extend the command to accept all valid keyword arguments for `Flask.run()` / `werkzeug.serving.run_simple()`.\r\n\r\n### Actual Behaviour\r\n```\r\nError: no such option: --ssl_context\r\nflask run --host=0.0.0.0 --ssl_context=adhoc exited with code 2\r\n```\r\n\r\n### Environment\r\n\r\n* Python version: 3.5.2\r\n* Flask version: 0.12.2\r\n* Werkzeug version: 0.12.2\r\n", "pr_html_url": "https://github.com/pallets/flask/pull/2606", "file_loc": {"base_commit": "22708b048d224a5590fa28d86ca02bac52294f90", "files": [{"path": "CHANGES", "status": "modified", "Loc": {"(None, None, None)": {"add": [120, 156]}}}, {"path": "flask/cli.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [16, 23, 601, 606], "mod": [26, 608, 611, 614]}, "(None, 'run_command', 619)": {"mod": [620, 645]}}}, {"path": "tests/test_cli.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [16, 17], "mod": [27, 28]}, "(None, 'test_dotenv_optional', 462)": {"add": [466]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["flask/cli.py"], "doc": ["CHANGES"], "test": ["tests/test_cli.py"], "config": [], "asset": []}}, {"organization": "pallets", "repo_name": "flask", "base_commit": 
"e4c712ffd2682f963906e1d0d27e67b7f83d95ce", "iss_has_pr": 1, "iss_html_url": "https://github.com/pallets/flask/issues/266", "iss_label": "docs\ngood first issue\nblueprints", "title": "Blueprint template lookup not documented enough", "body": "The new blueprint template lookup scheme where the templates folder is just added to the searchpath instead of doing some weird stuff with the names as before. The documentation has to be clearer about that.\n", "pr_html_url": "https://github.com/pallets/flask/pull/1843", "file_loc": {"base_commit": "e4c712ffd2682f963906e1d0d27e67b7f83d95ce", "files": [{"path": "docs/blueprints.rst", "status": "modified", "Loc": {"(None, None, None)": {"mod": [179, 180, 181, 182, 183, 188]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": ["docs/blueprints.rst"], "test": [], "config": [], "asset": []}}, {"organization": "pallets", "repo_name": "flask", "base_commit": "8cd0b03beeac4a41c398ea365475c651c484a9ee", "iss_has_pr": 1, "iss_html_url": "https://github.com/pallets/flask/issues/2118", "iss_label": "bug", "title": "config.from_pyfile crashes on Python 3 when source isn't encoded in default encoding", "body": "when I read my instance config file, I get an error. \r\n\r\n> exec(compile(config_file.read(), filename, 'exec'), d.__dict__)\r\n> UnicodeDecodeError: 'gbk' codec can't decode byte 0x80 in position 437: illegal multibyte sequence\r\nThen I modify the code of config.from_pyfile to this\r\n\r\n> with open(filename, 'rb') as config_file:\r\nThe problem is resolved. \r\n\r\n", "pr_html_url": "https://github.com/pallets/flask/pull/2123", "file_loc": {"base_commit": "8cd0b03beeac4a41c398ea365475c651c484a9ee", "files": [{"path": "CHANGES", "status": "modified", "Loc": {"(None, None, None)": {"add": [10]}}}, {"path": "flask/config.py", "status": "modified", "Loc": {"('Config', 'from_pyfile', 111)": {"mod": [129]}}}, {"path": "tests/test_config.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [13, 14], "mod": [10, 12]}, "(None, 'test_get_namespace', 168)": {"add": [189]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["flask/config.py"], "doc": ["CHANGES"], "test": ["tests/test_config.py"], "config": [], "asset": []}}, {"organization": "pallets", "repo_name": "flask", "base_commit": "85fa8aabf5a7bd0adf204f0c2dacbba1fa6683de", "iss_has_pr": 1, "iss_html_url": "https://github.com/pallets/flask/issues/2023", "iss_label": "discussion\nlogging", "title": "How should logging in Flask look like?", "body": "Flask started to ship with a default, hardcoded logging handler. Unfortunately this setup makes it harder to install custom logging setups, because then you'll have to undo all the things Flask did to the app logger, or replace the `app.logger` entirely. 
A symptom of this is #1993, where Flask's own logger had to be tweaked yet again such that messages didn't get logged twice (once via Flask's setup, once via the custom one).\n\nMy question is: **Do we even want Flask to do any logging setup?** It appears that this sort of default logging is only useful during development, so maybe it makes sense to set up a default logging handler in the new Flask CLI instead of from within the application.\n", "pr_html_url": "https://github.com/pallets/flask/pull/2436", "file_loc": {"base_commit": "85fa8aabf5a7bd0adf204f0c2dacbba1fa6683de", "files": [{"path": "CHANGES", "status": "modified", "Loc": {"(None, None, None)": {"add": [108]}}}, {"path": "docs/config.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [331], "mod": [202, 204, 205, 207, 209, 211, 212, 213, 215]}}}, {"path": "docs/contents.rst.inc", "status": "modified", "Loc": {"(None, None, None)": {"add": [18]}}}, {"path": "docs/errorhandling.rst", "status": "modified", "Loc": {"(None, None, None)": {"mod": [146, 147, 149, 150, 151, 152, 153, 155, 156, 157, 158, 159, 161, 162, 163, 165, 166, 167, 168, 169, 170, 171, 172, 173, 175, 176, 177, 178, 179, 180, 181, 183, 184, 185, 187, 188, 189, 192, 193, 195, 196, 197, 198, 199, 200, 202, 203, 204, 206, 207, 208, 209, 210, 211, 212, 213, 214, 216, 217, 218, 220, 221, 222, 223, 224, 225, 227, 229, 230, 232, 233, 234, 235, 236, 238, 239, 240, 242, 244, 245, 247, 249, 250, 251, 252, 253, 254, 255, 257, 259, 260, 262, 263, 265, 267, 268, 269, 270, 271, 274, 275, 277, 278, 279, 281, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 314, 315, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 329, 332, 333, 335, 336, 337, 338, 339, 340, 341, 343, 344, 345, 347, 348, 349, 350, 351, 352]}}}, {"path": "flask/app.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [32], "mod": [19, 20, 21, 22, 40, 41]}, "('Flask', None, 70)": {"mod": [267, 268, 269, 270, 271, 297, 298, 616]}, "('Flask', '__init__', 352)": {"mod": [395, 396, 397]}, "('Flask', 'logger', 617)": {"mod": [618, 619, 620, 621, 623, 624, 625, 629, 630, 631, 632, 633, 634, 635, 636]}}}, {"path": "flask/logging.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [13, 41, 49], "mod": [1, 2, 3, 4, 6, 8, 9, 10, 17, 18, 19, 22, 23, 24, 25, 26, 27, 28]}, "(None, '_proxy_stream', 32)": {"mod": [32, 33, 34, 35, 37, 38, 39, 40]}, "(None, '_should_log_for', 43)": {"mod": [43, 44, 45, 46]}, "(None, 'create_logger', 50)": {"mod": [51, 52, 53, 54, 55, 57, 59, 60, 61, 62, 63, 65, 66, 67, 68, 70, 71, 72, 73, 75, 76, 77, 79, 80, 81, 83, 84, 85, 86, 87, 88, 89, 91, 92]}}}, {"path": "tests/test_basic.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [1023]}, "(None, 'test_teardown_request_handler_error', 739)": {"mod": [741]}, "(None, 'test_error_handling', 816)": {"mod": [817]}, "(None, 'test_error_handling_processing', 862)": {"mod": [863]}, "(None, 'test_baseexception_error_handling', 884)": {"mod": [885]}, "(None, 'apprunner', 1427)": {"mod": [1428]}}}, {"path": "tests/test_helpers.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [12, 16, 18, 19, 22, 23, 25]}, "('TestLogging', None, 663)": {"mod": [663, 664, 665, 666, 667, 668, 669, 670, 672, 673, 675, 676, 677, 678, 679, 681, 682, 683, 685, 686, 687, 688, 689, 690, 691, 693, 694, 696, 697, 698, 699, 700, 702, 703, 704, 705, 706, 707, 709, 710, 711, 713, 714, 715, 717, 718, 719, 720, 721, 
723, 724, 725, 727, 728, 729, 730, 732, 733, 734, 735, 736, 738, 739, 740, 742, 743, 744, 746, 747, 748, 749]}}}, {"path": "tests/test_subclassing.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [13]}, "(None, 'test_suppressed_exception_logging', 18)": {"mod": [25, 26, 32, 36, 37]}, "(None, 'index', 29)": {"mod": [30]}}}, {"path": "tests/test_templating.py", "status": "modified", "Loc": {"(None, 'test_template_loader_debugging', 402)": {"mod": [402, 422, 423, 424, 425, 426, 428, 429, 431, 432, 433, 434]}}}, {"path": "tests/test_testing.py", "status": "modified", "Loc": {"(None, 'test_test_client_context_binding', 209)": {"mod": [210]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["flask/logging.py", "flask/app.py"], "doc": ["docs/contents.rst.inc", "docs/config.rst", "docs/errorhandling.rst", "CHANGES"], "test": ["tests/test_templating.py", "tests/test_testing.py", "tests/test_basic.py", "tests/test_helpers.py", "tests/test_subclassing.py"], "config": [], "asset": []}}, {"organization": "pallets", "repo_name": "flask", "base_commit": "465da9f610a04d379bb39a0ff03fb6c0b0ea1c45", "iss_has_pr": 1, "iss_html_url": "https://github.com/pallets/flask/issues/2866", "iss_label": "logging", "title": "DispatcherMiddleware with different loggers per app in flask 1.0", "body": "After upgrading to flask 1.0 logging from different apps using DispatcherMiddleware, each log emitted is written to all handlers in the different apps. I assume this caused by `app.logger` always having the name `flask.app`, maybe?\r\n\r\nHere is a example:\r\n\r\n\r\n```\r\nfrom werkzeug.wsgi import DispatcherMiddleware\r\nfrom flask import Flask\r\nfrom logging.handlers import RotatingFileHandler\r\n\r\n\r\nhandler1 = RotatingFileHandler('app1.log')\r\napp1 = Flask('app1')\r\napp1.logger.addHandler(handler1)\r\n\r\nhandler2 = RotatingFileHandler('app2.log')\r\napp2 = Flask('app2')\r\napp2.logger.addHandler(handler2)\r\n\r\n\r\n@app1.route(\"/\")\r\ndef hello():\r\n app1.logger.error(\"from app1\")\r\n return ''\r\n\r\n\r\n@app2.route(\"/\")\r\ndef hello2():\r\n app2.logger.error(\"from app2\")\r\n return ''\r\n\r\n\r\napp = DispatcherMiddleware(app1, {\r\n '/app2': app2\r\n})\r\n```\r\n\r\nRun with\r\n```\r\nuwsgi --socket 0.0.0.0:8000 --protocol=http -w app --callable app\r\n```\r\n\r\nAnd then make a request to / and /app2/. 
Each error log will be written in both logfiles.\r\n\r\n### Environment\r\n\r\n* Python version: 3.6.5\r\n* Flask version: 1.0.2\r\n* Werkzeug version: 0.14.1\r\n\r\nMy actual app is using `current_app.logger` with blueprints with the same behaviour, but I assume it the same issue.", "pr_html_url": "https://github.com/pallets/flask/pull/3282", "file_loc": {"base_commit": "465da9f610a04d379bb39a0ff03fb6c0b0ea1c45", "files": [{"path": "CHANGES.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [20]}}}, {"path": "docs/config.rst", "status": "modified", "Loc": {"(None, None, None)": {"mod": [385]}}}, {"path": "docs/errorhandling.rst", "status": "modified", "Loc": {"(None, None, None)": {"mod": [234]}}}, {"path": "docs/logging.rst", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1, 6, 7, 8, 9]}}}, {"path": "src/flask/app.py", "status": "modified", "Loc": {"('Flask', 'logger', 655)": {"mod": [656, 657, 659, 660, 662, 663, 665, 667, 668, 669, 670, 671]}}}, {"path": "src/flask/logging.py", "status": "modified", "Loc": {"(None, 'create_logger', 59)": {"mod": [60, 69]}}}, {"path": "tests/test_logging.py", "status": "modified", "Loc": {"(None, 'reset_logging', 21)": {"mod": [26]}, "(None, 'test_logger', 44)": {"mod": [45]}}}, {"path": "tests/test_templating.py", "status": "modified", "Loc": {"(None, 'test_template_loader_debugging', 409)": {"mod": [433]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["src/flask/logging.py", "src/flask/app.py"], "doc": ["docs/config.rst", "docs/logging.rst", "docs/errorhandling.rst", "CHANGES.rst"], "test": ["tests/test_templating.py", "tests/test_logging.py"], "config": [], "asset": []}}, {"organization": "pallets", "repo_name": "flask", "base_commit": "c8cf4694c60f0d81809468a1b45ec730496cc546", "iss_has_pr": 1, "iss_html_url": "https://github.com/pallets/flask/issues/5160", "iss_label": "", "title": "Switch to importlib breaks scripts with `app.run()`", "body": "With a trivial script [using `app.run()`](https://flask.palletsprojects.com/en/2.3.x/server/#in-code) such as:\r\n\r\n```python3\r\nfrom flask import Flask\r\n\r\napp = Flask(__name__)\r\n\r\nif __name__ == \"__main__\":\r\n app.run(debug=True)\r\n```\r\n\r\nThe current git `main` breaks with:\r\n\r\n```pytb\r\nTraceback (most recent call last):\r\n File \"/home/florian/tmp/flask/app.py\", line 3, in \r\n app = Flask(__name__)\r\n ^^^^^^^^^^^^^^^\r\n File \"/home/florian/tmp/flask/src/flask/app.py\", line 376, in __init__\r\n instance_path = self.auto_find_instance_path()\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/florian/tmp/flask/src/flask/app.py\", line 630, in auto_find_instance_path\r\n prefix, package_path = find_package(self.import_name)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/florian/tmp/flask/src/flask/scaffold.py\", line 898, in find_package\r\n package_path = _find_package_path(import_name)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/florian/tmp/flask/src/flask/scaffold.py\", line 858, in _find_package_path\r\n spec = importlib.util.find_spec(root_mod_name)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"\", line 114, in find_spec\r\nValueError: __main__.__spec__ is None\r\n```\r\n\r\nThis seems to be a regression due to 84e11a1e827c0f55f9b9ee15952eddcf8a6492e0 from #5157.\r\n\r\nEnvironment:\r\n\r\n- Python version: 3.11.4\r\n- Flask version: git main\r\n", 
"pr_html_url": "https://github.com/pallets/flask/pull/5161", "file_loc": {"base_commit": "c8cf4694c60f0d81809468a1b45ec730496cc546", "files": [{"path": "CHANGES.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [7]}}}, {"path": "src/flask/helpers.py", "status": "modified", "Loc": {"(None, 'get_root_path', 562)": {"mod": [578, 579, 584]}}}, {"path": "src/flask/scaffold.py", "status": "modified", "Loc": {"(None, '_matching_loader_thinks_module_is_package', 782)": {"mod": [782, 783, 785, 786, 787, 788, 789, 790, 792, 794, 795, 796, 797, 799, 800, 801, 802, 803, 804]}, "(None, '_find_package_path', 816)": {"mod": [825, 826, 827, 828, 829, 831, 832, 833, 834, 835, 836, 837, 838, 839, 840, 841, 842, 843, 844, 845, 846, 847, 848, 849, 850, 851, 852, 853, 854, 855, 857, 858, 859, 861, 862, 865, 866, 867, 868, 869, 870, 871, 872, 873, 875, 877, 878, 879, 880, 882]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["src/flask/helpers.py", "src/flask/scaffold.py"], "doc": ["CHANGES.rst"], "test": [], "config": [], "asset": []}}, {"organization": "sherlock-project", "repo_name": "sherlock", "base_commit": "a2655907389f1625540c0c643aed33bc26e63da8", "iss_has_pr": 1, "iss_html_url": "https://github.com/sherlock-project/sherlock/issues/762", "iss_label": "false positive", "title": "linkedin false positive", "body": "", "pr_html_url": "https://github.com/sherlock-project/sherlock/pull/773", "file_loc": {"base_commit": "a2655907389f1625540c0c643aed33bc26e63da8", "files": [{"path": "removed_sites.json", "status": "modified", "Loc": {"(None, None, None)": {"add": [484], "mod": [149, 150]}}}, {"path": "removed_sites.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [956]}}}, {"path": "sherlock/resources/data.json", "status": "modified", "Loc": {"(None, None, None)": {"mod": [891, 892, 893, 894, 895, 896, 897, 898, 899, 1521, 1522, 1523, 1524, 1525, 1526, 1527, 1528, 2259, 2260, 2261, 2262, 2263, 2264, 2265, 2266, 2267]}}}, {"path": "sites.md", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["removed_sites.json", "sherlock/resources/data.json"], "doc": ["removed_sites.md", "sites.md"], "test": [], "config": [], "asset": []}}, {"organization": "sherlock-project", "repo_name": "sherlock", "base_commit": "b9ff06593288bcf94211878e41e5d010eb1e00f2", "iss_has_pr": 1, "iss_html_url": "https://github.com/sherlock-project/sherlock/issues/30", "iss_label": "enhancement", 
"title": "Optional csv Result Output", "body": "While the text output file is good to have, it only shows the sites that do have an account with the username. It would be useful to also have a list of the sites that still had an opening.\r\n\r\nI think that it would be useful to have a csv output as well. It could be optional, so the user would have to add a `--csv` switch before the file would be output. Then the output would be available in LibreOffice or Excel for users to sort and organize as they wish.\r\n\r\nI am thinking of the following columns:\r\n\r\n- Username\r\nThe user name that is being queried.\r\n- Social Network Name\r\nFor example, \"Twitter\", \"GitHub\", ...\r\n- Social Network Main URL\r\nFor example, \"https://twitter.com/\" for Twitter\r\n- Social Network User URL\r\nThis is the URL that Sherlock would be trying to check the existence of. For example, \"https://twitter.com/user123\" for Twitter.\r\n- Username Exists\r\nWill be either \"True\" or \"False\".\r\n- Request Response Data\r\nI think that this would be useful for debug of the program. Especially for those sites where Sherlock depends on the response text (as opposed to an explicit HTTP response code).", "pr_html_url": "https://github.com/sherlock-project/sherlock/pull/32", "file_loc": {"base_commit": "b9ff06593288bcf94211878e41e5d010eb1e00f2", "files": [{"path": "sherlock.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0, 6, 20], "mod": [9]}, "(None, 'print_error', 26)": {"mod": [26]}, "(None, 'make_request', 33)": {"mod": [33, 37, 46]}, "(None, 'sherlock', 49)": {"mod": [57, 58, 60, 61, 62, 63, 64, 65, 67, 68, 69, 70, 72, 73, 74, 75, 77, 79, 80, 81, 82, 83, 84, 86, 87, 89, 90, 91, 92, 93, 95, 96, 98, 99, 100, 101, 102, 103, 104, 105, 107, 108]}, "(None, 'main', 115)": {"mod": [134, 155, 157, 158]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["sherlock.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "sherlock-project", "repo_name": "sherlock", "base_commit": "e781ed5e260e1fdb58410a3103adb2b95ce547c8", "iss_has_pr": 1, "iss_html_url": "https://github.com/sherlock-project/sherlock/issues/503", "iss_label": "site support request", "title": "Add search \"kali community\"", "body": "```\r\n \"kali community\": {\r\n \"errorType\": \"status_code\",\r\n \"rank\": 10313,\r\n \"url\": \"https://forums.kali.org/member.php?username={}\",\r\n \"urlMain\": \"https://forums.kali.org/\",\r\n \"username_claimed\": \"blue\",\r\n \"username_unclaimed\": \"noonewouldeverusethis7\"\r\n },\r\n```", "pr_html_url": "https://github.com/sherlock-project/sherlock/pull/567", "file_loc": {"base_commit": "e781ed5e260e1fdb58410a3103adb2b95ce547c8", "files": [{"path": "data.json", "status": "modified", "Loc": {"(None, None, None)": {"add": [288, 545, 665, 939, 974, 1374, 1390, 1628]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["data.json"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "sherlock-project", "repo_name": "sherlock", "base_commit": "7b3cf0aaf28e0e4051dffedb6d1416b9fa9ef456", "iss_has_pr": 1, "iss_html_url": "https://github.com/sherlock-project/sherlock/issues/1305", "iss_label": "bug", "title": "[ISSUE] Sherlock stops 
after Countable search", "body": "- [X] I'm reporting a bug in Sherlock's functionality\r\n- [X] The bug I'm reporting is not a false positive or a false negative\r\n- [X] I've verified that I'm running the latest version of Sherlock\r\n- [X] I've checked for similar bug reports including closed ones\r\n- [X] I've checked for pull requests that attempt to fix this bug\r\n\r\n## Description\r\nUpon running `python3 sherlock username`, the checks stop right after a return on Countable. (See screenshot below)\r\nAs I am unsure of if this is a social media downtime issue, I am consulting here.\r\n\"Screen_Shot_2022-04-16_at_11\r\n\r\n\r\nSuggested fix:\r\nImprove 404 messages and insert timeouts if sites are not reachable within a few seconds.\r\n", "pr_html_url": "https://github.com/sherlock-project/sherlock/pull/1307", "file_loc": {"base_commit": "7b3cf0aaf28e0e4051dffedb6d1416b9fa9ef456", "files": [{"path": "removed_sites.json", "status": "modified", "Loc": {"(None, None, None)": {"mod": [655]}}}, {"path": "removed_sites.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [1267]}}}, {"path": "sherlock/resources/data.json", "status": "modified", "Loc": {"(None, None, None)": {"mod": [545, 546, 547, 548, 549, 550, 551]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["removed_sites.json", "sherlock/resources/data.json"], "doc": ["removed_sites.md"], "test": [], "config": [], "asset": []}}, {"organization": "sherlock-project", "repo_name": "sherlock", "base_commit": "588616d6155edb4184e5ff8f4cf0104e0a0e2468", "iss_has_pr": 1, "iss_html_url": "https://github.com/sherlock-project/sherlock/issues/326", "iss_label": "bug", "title": "Codepen False Negative/Trip False Positive", "body": "![image](https://user-images.githubusercontent.com/12257112/65722187-a5f37e00-e0a3-11e9-9a69-676ab5c5da78.png)\r\n\r\n![image](https://user-images.githubusercontent.com/12257112/65722337-fc60bc80-e0a3-11e9-95da-fc8ba98cbbeb.png)\r\n", "pr_html_url": "https://github.com/sherlock-project/sherlock/pull/328", "file_loc": {"base_commit": "588616d6155edb4184e5ff8f4cf0104e0a0e2468", "files": [{"path": "README.md", "status": "modified", "Loc": {"(None, None, None)": {"mod": [71]}}}, {"path": "data.json", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1257, 1258]}}}, {"path": "sherlock.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [29]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["sherlock.py", "data.json"], "doc": ["README.md"], "test": [], "config": [], "asset": []}}, {"organization": "sherlock-project", "repo_name": "sherlock", "base_commit": "508eb88724dbe20dedf07dc00527ad4f32c93a77", "iss_has_pr": 1, "iss_html_url": "https://github.com/sherlock-project/sherlock/issues/5", "iss_label": "bug", "title": "Formatting on windows Powershell very off", "body": "\u001b[37;1m[\u001b[92;1m+\u001b[37;1m]\u001b[92;1m Unsplash:\u001b[0m https://unsplash.com/@User\r\n\u001b[1;92m[\u001b[0m\u001b[1;77m*\u001b[0m\u001b[1;92m] Saved: \u001b[37;1mUser.txt\u001b[0m\r\n\r\nThat's how all the lines look like in windows powershell. 
I guess the print is designed for a linux platform?", "pr_html_url": "https://github.com/sherlock-project/sherlock/pull/71", "file_loc": {"base_commit": "508eb88724dbe20dedf07dc00527ad4f32c93a77", "files": [{"path": "requirements.txt", "status": "modified", "Loc": {"(None, None, None)": {"mod": [3]}}}, {"path": "sherlock.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [16]}, "(None, 'main', 247)": {"add": [247]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["sherlock.py"], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}}, {"organization": "sherlock-project", "repo_name": "sherlock", "base_commit": "e6642737462aaadf2d1f36ea0d417b8cf8a40541", "iss_has_pr": 1, "iss_html_url": "https://github.com/sherlock-project/sherlock/issues/301", "iss_label": "bug", "title": "Error Connecting for most websites (Failed to establish a new connection: [Errno -2] Name or service not known' in urllib3", "body": "I believe this might be a partial duplicate of #230, however, because of lack of information and error output I made my own issue just to be safe.\r\n\r\nWith the latest python3.7.3 and the correct requirements I constantly run into the issue of websites not connecting.\r\nSpecifically this happens after trying to connect to Bandcamp, subsequently, all other urls following bandcamp fail as well.\r\n\r\nI printed the `requests.exceptions.ConnectionError` at line 128 as well as the future object class.\r\nFor some reason, urllib3 seems to fail finding the name or service (so this might be a DNS issue?).\r\n\r\nFurther details below (snippet from e.g. YouTube): \r\n\r\n```\r\n[-] Error Connecting: YouTube\r\n[-] YouTube: Error!\r\n{'errorType': 'response_url', 'errorUrl': 'https://www.zhihu.com/people/{}', 'rank': 0, 'url': 'https://www.zhihu.com/people/{}', 'urlMain': 'https://www.zhihu.com/', 'username_claimed': 'blue', 'username_unclaimed': 'noonewouldeverusethis7', 'request_future': }\r\n\r\nHTTPSConnectionPool(host='www.zhihu.com', port=443): Max retries exceeded with url: /people/berghopper (Caused by NewConnectionError(': Failed to establish a new connection: [Errno -2] Name or service not known'))\r\n\r\n```", "pr_html_url": "https://github.com/sherlock-project/sherlock/pull/471", "file_loc": {"base_commit": "e6642737462aaadf2d1f36ea0d417b8cf8a40541", "files": [{"path": "README.md", "status": "modified", "Loc": {"(None, None, None)": {"mod": [72]}}}, {"path": "data.json", "status": "modified", "Loc": {"(None, None, None)": {"add": [50, 315, 714, 722, 1032, 1118, 1302, 1617, 1872, 1880, 1896, 1939, 1971, 2005, 2013], "mod": [4, 13, 21, 29, 37, 45, 53, 61, 69, 78, 87, 95, 104, 112, 120, 128, 137, 146, 154, 162, 171, 179, 189, 197, 205, 215, 224, 232, 241, 250, 259, 267, 276, 284, 293, 302, 310, 318, 326, 334, 343, 351, 359, 368, 377, 386, 395, 403, 411, 419, 428, 444, 453, 461, 469, 477, 487, 497, 506, 514, 523, 531, 539, 549, 557, 565, 574, 582, 590, 598, 608, 616, 624, 633, 641, 649, 657, 665, 675, 684, 692, 700, 709, 717, 726, 735, 744, 753, 762, 770, 779, 788, 797, 805, 814, 832, 840, 848, 856, 865, 873, 881, 889, 898, 907, 916, 925, 933, 941, 950, 958, 968, 976, 985, 993, 1002, 1010, 1019, 1027, 1035, 1043, 1052, 1061, 1070, 1078, 1097, 1105, 1113, 1121, 1130, 1139, 1148, 1156, 1164, 1172, 1180, 1188, 1196, 1204, 1221, 1229, 1237, 1245, 1254, 1262, 1271, 1280, 1289, 1297, 1323, 1332, 1342, 
1351, 1359, 1367, 1384, 1393, 1401, 1410, 1418, 1426, 1434, 1442, 1451, 1459, 1467, 1475, 1484, 1493, 1501, 1510, 1519, 1527, 1535, 1544, 1552, 1561, 1569, 1578, 1587, 1596, 1612, 1621, 1630, 1639, 1657, 1665, 1673, 1682, 1690, 1698, 1706, 1714, 1723, 1732, 1741, 1750, 1758, 1767, 1776, 1784, 1793, 1803, 1811, 1820, 1829, 1838, 1856, 1867, 1875, 1883, 1891, 1900, 1908, 1916, 1925, 1934, 1942, 1950, 1958, 1966, 1975, 1984, 1992, 2000, 2008, 2016, 2025, 2033, 2042, 2051, 2059, 2068, 2076, 2082, 2083, 2084, 2085, 2086, 2087, 2088, 2089, 2090, 2091, 2092, 2093, 2094, 2095, 2096, 2097, 2098, 2099, 2100, 2101, 2102, 2103, 2104, 2105, 2106, 2107, 2108, 2109, 2110, 2111, 2112, 2113, 2114, 2115, 2116, 2117, 2118, 2119, 2120, 2121, 2122, 2123, 2124, 2125, 2126, 2127, 2128, 2129, 2130, 2131, 2132, 2133, 2134, 2135, 2136, 2137, 2138, 2139, 2140, 2141, 2142, 2143, 2144, 2145, 2146, 2147, 2148, 2149, 2150, 2151, 2152, 2153, 2154, 2155, 2156, 2157, 2158, 2159, 2160, 2161, 2162, 2163, 2164, 2165, 2166, 2167, 2168, 2169, 2170, 2171, 2172, 2173, 2174, 2175, 2176, 2177, 2178, 2179, 2180, 2181, 2182, 2183, 2184, 2185, 2186, 2187, 2188, 2189, 2190, 2191, 2192, 2193, 2194, 2195, 2196, 2197, 2198, 2199, 2200, 2201, 2202, 2203, 2204, 2205, 2206, 2207, 2208, 2209, 2210, 2211, 2212, 2213, 2216, 2222, 2224, 2225, 2226, 2227, 2228, 2229, 2230, 2231, 2232, 2233, 2234, 2235, 2236, 2242, 2248, 2249, 2250, 2251, 2252, 2253, 2254, 2255, 2256, 2258, 2259, 2260, 2264, 2266, 2267, 2268, 2272, 2273, 2274, 2275, 2276, 2277, 2278, 2281, 2283, 2284, 2285, 2287]}}}, {"path": "sherlock.py", "status": "modified", "Loc": {"(None, 'sherlock', 151)": {"add": [189, 191], "mod": [183, 184, 187, 188, 193, 194, 195]}, "(None, None, None)": {"mod": [29]}}}, {"path": "sites.md", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 248]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["sherlock.py", "data.json"], "doc": ["README.md", "sites.md"], "test": [], "config": [], "asset": []}}, {"organization": "keras-team", "repo_name": "keras", "base_commit": "96130040540e1405ffe746ddf2b2cceb9b8b8f65", "iss_has_pr": 1, "iss_html_url": "https://github.com/keras-team/keras/issues/16453", "iss_label": "type:docs", "title": "The vocabulary_size method of preprocessing layers does not work in 
graph mode", "body": "**System information**.\r\n- Have I written custom code (as opposed to using a stock example script provided in Keras): **yes**\r\n- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): **Colab**\r\n- TensorFlow installed from (source or binary): **binary**\r\n- TensorFlow version (use command below): **TF 2.8, nightly 2.10.0-dev20220422**\r\n\r\n**Describe the problem**.\r\n\r\nUsing the `vocabulary_size()` method of preprocessing layers (like `tf.keras.layers.StringLookup`) fails in graph mode, because the implementation looks like this: https://github.com/keras-team/keras/blob/aea9728313bcaa8262774699c21976288171b209/keras/layers/preprocessing/index_lookup.py#L346-L352\r\n\r\n**Describe the current behavior**.\r\n\r\nCalling `vocabulary_size()` in graph mode fails.\r\n\r\n**Describe the expected behavior**.\r\n\r\nCalling `vocabulary_size()` in graph mode succeeds and returns a `tf.Tensor` with the size.\r\n\r\n**Standalone code to reproduce the issue**.\r\n\r\nColab notebook showing the issue is at https://colab.research.google.com/drive/1Mq9G8eUvNLw6ykk4ARKf6jurLREAvRhu?usp=sharing both for TF 2.8 and TF nightly.\r\n\r\n**Source code / logs**.\r\n\r\nAn obvious fix is to avoid using the `numpy()` method when in graph mode, i.e., instead of\r\n```python\r\nreturn int(self.lookup_table.size().numpy()) + self._token_start_index()\r\n```\r\nuse just\r\n```python\r\nreturn self.lookup_table.size() + self._token_start_index()\r\n```\r\nNote that the above notebook also shows that this implementation works in the graph mode.", "pr_html_url": "https://github.com/keras-team/keras/pull/16460", "file_loc": {"base_commit": "96130040540e1405ffe746ddf2b2cceb9b8b8f65", "files": [{"path": "keras/layers/preprocessing/index_lookup.py", "status": "modified", "Loc": {"('IndexLookup', 'vocabulary_size', 346)": {"mod": [352]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["keras/layers/preprocessing/index_lookup.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "keras-team", "repo_name": "keras", "base_commit": "5b299743442b64afaeeec01e925ddbeb112aad3c", "iss_has_pr": 1, "iss_html_url": "https://github.com/keras-team/keras/issues/20712", "iss_label": "type:support", "title": "Tensorflow BackupAndRestore method does not work", "body": "I copied the [example code here](https://keras.io/api/callbacks/backup_and_restore/) and it raises a ValueError with Python 3.11 and Tensorflow 2.17:\r\n```\r\nimport keras\r\nimport numpy as np\r\n\r\nclass InterruptingCallback(keras.callbacks.Callback):\r\n def on_epoch_begin(self, epoch, logs=None):\r\n if epoch == 4:\r\n raise RuntimeError('Interrupting!')\r\ncallback = keras.callbacks.BackupAndRestore(backup_dir=\"/tmp/backup\")\r\nmodel = keras.models.Sequential([keras.layers.Dense(10)])\r\nmodel.compile(keras.optimizers.SGD(), loss='mse')\r\ntry:\r\n model.fit(np.arange(100).reshape(5, 20), np.zeros(5), epochs=10,\r\n batch_size=1, callbacks=[callback, InterruptingCallback()],\r\n verbose=0)\r\nexcept Exception as e:\r\n print(e)\r\nhistory = model.fit(np.arange(100).reshape(5, 20), np.zeros(5),\r\n epochs=10, batch_size=1, callbacks=[callback],\r\n verbose=0)\r\nlen(history.history['loss'])\r\n\r\n\r\nValueError: To use the BackupAndRestore method, your model must be built before you call `fit()`. Model is unbuilt. 
You can build it beforehand by calling it on a batch of data.\r\n```\r\n\r\nDoesn't that defeat the purpose of backupAndRestore?", "pr_html_url": "https://github.com/keras-team/keras/pull/20714", "file_loc": {"base_commit": "5b299743442b64afaeeec01e925ddbeb112aad3c", "files": [{"path": "keras/src/callbacks/backup_and_restore.py", "status": "modified", "Loc": {"('BackupAndRestore', None, 9)": {"add": [39]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["keras/src/callbacks/backup_and_restore.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "keras-team", "repo_name": "keras", "base_commit": "a56b16fffec6e4a431bf14e13e7dabeeb5904cd8", "iss_has_pr": 1, "iss_html_url": "https://github.com/keras-team/keras/issues/8426", "iss_label": "", "title": "vgg16 predictions: class probabilities change significantly in 2.0.9", "body": "Please make sure that the boxes below are checked before you submit your issue. If your issue is an implementation question, please ask your question on [StackOverflow](http://stackoverflow.com/questions/tagged/keras) or [join the Keras Slack channel](https://keras-slack-autojoin.herokuapp.com/) and ask there instead of filing a GitHub issue.\r\n\r\nThank you!\r\n\r\n- [x] Check that you are up-to-date with the master branch of Keras. You can update with:\r\npip install git+git://github.com/fchollet/keras.git --upgrade --no-deps\r\n\r\n- [x] If running on TensorFlow, check that you are up-to-date with the latest version. The installation instructions can be found [here](https://www.tensorflow.org/get_started/os_setup).\r\n\r\n- [x] If running on Theano, check that you are up-to-date with the master branch of Theano. You can update with:\r\npip install git+git://github.com/Theano/Theano.git --upgrade --no-deps\r\n\r\n- [x] Provide a link to a GitHub Gist of a Python script that can reproduce your issue (or just copy the script here if it is short).\r\n\r\nI've noticed when updating to Keras 2.0.9 that the class probabilities for predictions on the \"creative_commons_elephant.jpg\" image used in tutorials change significantly (i.e. 
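Grounded in the `BackupAndRestore` error message itself ("You can build it beforehand"), a minimal workaround sketch is to build the model before the first `fit()` call so the callback has weights to restore into; everything apart from the `model.build` line is the example code from the report:

```python
import keras
import numpy as np

model = keras.models.Sequential([keras.layers.Dense(10)])
model.compile(keras.optimizers.SGD(), loss="mse")
# Build the model up front (the ValueError asks for exactly this) so the
# callback can restore weights into an already-built model.
model.build(input_shape=(None, 20))

callback = keras.callbacks.BackupAndRestore(backup_dir="/tmp/backup")
model.fit(np.arange(100).reshape(5, 20), np.zeros(5),
          epochs=10, batch_size=1, callbacks=[callback], verbose=0)
```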
African_elephant goes from `0.909` to `0.789`).\r\n\r\nHere is my script:\r\n\r\n```python\r\nfrom keras.applications.vgg16 import VGG16\r\nmodel = VGG16(weights='imagenet')\r\n\r\nfrom keras.preprocessing import image\r\nfrom keras.applications.vgg16 import preprocess_input, decode_predictions\r\nimport numpy as np\r\n\r\nimg_path = 'creative_commons_elephant.jpg'\r\nimg = image.load_img(img_path, target_size=(224, 224))\r\nx = image.img_to_array(img)\r\nx = np.expand_dims(x, axis=0)\r\nx = preprocess_input(x)\r\n\r\npreds = model.predict(x)\r\nprint('Predicted:', decode_predictions(preds, top=3)[0])\r\n```\r\n\r\nUnder Keras 2.0.8 this gives:\r\n\r\n```\r\n('Predicted:', [(u'n02504458', u'African_elephant', 0.90942073), (u'n01871265', u'tusker', 0.086183183), (u'n02504013', u'Indian_elephant', 0.0043545808)])\r\n```\r\n\r\nHowever under Keras 2.0.9 it gives:\r\n\r\n```\r\n('Predicted:', [(u'n02504458', u'African_elephant', 0.78988522), (u'n01871265', u'tusker', 0.1987267), (u'n02504013', u'Indian_elephant', 0.011142471)])\r\n```", "pr_html_url": "https://github.com/keras-team/keras/pull/8435", "file_loc": {"base_commit": "a56b16fffec6e4a431bf14e13e7dabeeb5904cd8", "files": [{"path": "keras/preprocessing/image.py", "status": "modified", "Loc": {"(None, 'load_img', 321)": {"mod": [322, 335]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["keras/preprocessing/image.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "keras-team", "repo_name": "keras", "base_commit": "ffb8b813818221a9b43d51e251d489a40b116607", "iss_has_pr": 1, "iss_html_url": "https://github.com/keras-team/keras/issues/9434", "iss_label": "type:bug/performance", "title": "Error with multiprocessing on Sequence in fit_generator()", "body": "I'm trying to use a `Sequence` as the generator for `model.fit_generator()`.\r\n\r\nWith `use_multiprocessing=False` it works fine, but with `use_multiprocessing=True` an error is produced.\r\n\r\n**Minimal working example:**\r\n```python\r\nfrom keras.utils import Sequence\r\nfrom keras.models import Sequential\r\nfrom keras.layers import Dense\r\nfrom keras.utils import to_categorical\r\nimport numpy as np\r\n\r\nclass DummySequence(Sequence):\r\n \r\n def __init__(self, x_set, y_set, batch_size):\r\n self.x, self.y = x_set, y_set\r\n self.batch_size = batch_size\r\n\r\n def __len__(self):\r\n return int(np.ceil(len(self.x) / float(self.batch_size)))\r\n\r\n def __getitem__(self, idx):\r\n batch_x = self.x[idx * self.batch_size:(idx + 1) * self.batch_size]\r\n batch_y = self.y[idx * self.batch_size:(idx + 1) * self.batch_size]\r\n\r\n return np.array(batch_x), np.array(batch_y)\r\n\r\nif __name__ == '__main__':\r\n\r\n x = np.random.random((100, 3))\r\n y = to_categorical(np.random.random(100) > .5).astype(int)\r\n\r\n seq = DummySequence(x, y, 10)\r\n\r\n model = Sequential()\r\n model.add(Dense(32, input_dim=3))\r\n model.add(Dense(2, activation='softmax'))\r\n model.compile(optimizer='rmsprop',\r\n loss='categorical_crossentropy',\r\n metrics=['accuracy'])\r\n\r\n model.fit_generator(generator=seq, workers=2, use_multiprocessing=True)\r\n```\r\n**Error:**\r\n```\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\elcombato\\AppData\\Local\\Continuum\\Anaconda3\\envs\\ml\\lib\\multiprocessing\\pool.py\", line 119, in worker\r\n result = (True, func(*args, **kwds))\r\n File 
\"C:\\Users\\elcombato\\AppData\\Local\\Continuum\\Anaconda3\\envs\\ml\\lib\\site-packages\\keras\\utils\\data_utils.py\", line 392, in get_index\r\n return _SHARED_SEQUENCES[uid][i]\r\nKeyError: 0\r\n```\r\n\r\n**Setup**\r\nWindows 10\r\nPython 3.6.4\r\nKeras 2.1.3\r\nTensorflow 1.2.1", "pr_html_url": "https://github.com/keras-team/keras/pull/9436", "file_loc": {"base_commit": "ffb8b813818221a9b43d51e251d489a40b116607", "files": [{"path": "keras/utils/data_utils.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [375]}, "('OrderedEnqueuer', 'start', 500)": {"mod": [509, 511]}, "('OrderedEnqueuer', '_run', 526)": {"mod": [534]}}}, {"path": "tests/keras/utils/data_utils_test.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [9, 11, 26, 221, 240, 251, 295]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["keras/utils/data_utils.py", "tests/keras/utils/data_utils_test.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "keras-team", "repo_name": "keras", "base_commit": "fed28a7357e13aeb955f891747a1f9b26d5bc581", "iss_has_pr": 1, "iss_html_url": "https://github.com/keras-team/keras/issues/19520", "iss_label": "type:docs\nstat:awaiting response from contributor", "title": "kl divergence outputs confusing result when the label contains negative", "body": "Here are some APIs implemented for kl divergence: \r\n```\r\ntf.losses.kullback_leibler_divergence\r\ntf.keras.losses.KLDivergence\r\ntf.keras.metrics.kl_divergence\r\n```\r\nHowever, when the `y_true` or `y_pred` is negative, these APIs output incorrect result which is inconsistent with the documentation (https://www.tensorflow.org/api_docs/python/tf/keras/losses/KLDivergence).\r\n\r\nHere is the code to reproduce:\r\n```\r\nimport os\r\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'\r\nos.environ['OPENBLAS_NUM_THREADS'] = '1'\r\nos.environ['CUDA_VISIBLE_DEVICES'] = ''\r\nimport numpy as np\r\nnp.random.seed(35)\r\nimport tensorflow as tf\r\ny_true = tf.constant(np.random.randn(1), dtype='float32')\r\ny_pred = tf.constant(np.random.randn(1), dtype='float32')\r\nprint(y_true, y_pred)\r\nout1 = tf.losses.kullback_leibler_divergence(y_true,y_pred)\r\nout2 = tf.keras.losses.KLDivergence(\r\n reduction='sum_over_batch_size', name='kl_divergence'\r\n)(y_true, y_pred)\r\nout3 = tf.keras.metrics.kl_divergence(y_true, y_pred)\r\nprint(out1, out2, out3) # 0.0, 0.0, 0.0\r\nprint(f\"Expected result: {y_true*(np.log(y_true/y_pred))}\") # Expected result following the equation: [-2.8711002]\r\n```\r\n\r\nI notice that the current code will silently clip the negative value close to zero:\r\nhttps://github.com/keras-team/keras/blob/61bbff593a0914f5a2c426c14caadb7372f56da0/keras/losses/losses.py#L1466-L1467\r\n\r\nIf negative value is not allowed for these concerned APIs, maybe some descriptions on the documentation or validation check inside the source code can provide more information.\r\n", "pr_html_url": "https://github.com/keras-team/keras/pull/19526", "file_loc": {"base_commit": "fed28a7357e13aeb955f891747a1f9b26d5bc581", "files": [{"path": "keras/losses/losses.py", "status": "modified", "Loc": {"(None, 'kl_divergence', 1441)": {"mod": [1464]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, 
"loctype": {"code": ["keras/losses/losses.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "keras-team", "repo_name": "keras", "base_commit": "4bb90a5bfd762269adeee772e233a733a6d318a9", "iss_has_pr": 1, "iss_html_url": "https://github.com/keras-team/keras/issues/12013", "iss_label": "To investigate", "title": "Nasnet weight errors (no_top version)", "body": "I\"m running into this error:\r\n\r\n ValueError: You are trying to load a weight file containing 532 layers into a model with 526 layers.\r\n\r\nMy keras version is:\r\n\r\n\r\n\r\n`>>> keras.__version__\r\n '2.2.4'`\r\n\r\n`Tensorflow version:\r\n\r\n '1.12.0'`\r\n\r\nBascially, when the model tries to load I get this value error\r\n ` ValueError: You are trying to load a weight file containing 532 layers into a model with 526 layers. ` \r\n\r\n\r\nI\"ve looked at this thread:\r\n\r\nhttps://github.com/keras-team/keras/issues/10109\r\n\r\nHowever, I\"m trying to run the `no_top version`, so It shouldn't matter what my input vector is.\r\n\r\nbase_model(weights='imagenet', include_top=False)\r\n\r\nThank you. \r\n\r\n\r\nHere is the script Im using:\r\n\r\n ################IMPORTS########################\r\n\r\n #--Keras imports--#\r\n\r\n from keras.applications import resnet50, xception,inception_v3,inception_resnet_v2, densenet, nasnet, imagenet_utils\r\n from keras.preprocessing import image\r\n from keras.preprocessing.image import ImageDataGenerator\r\n from keras.layers import Input, Dense\r\n from keras import backend as k\r\n from keras.models import Model, clone_model\r\n from keras.layers import Dense, GlobalAveragePooling2D,Dropout, BatchNormalization\r\n from keras import optimizers\r\n from keras.optimizers import Adam\r\n from keras.callbacks import ModelCheckpoint,ProgbarLogger\r\n from keras.utils import print_summary\r\n from keras import __version__\r\n from keras.datasets import cifar10\r\n\r\n\r\n\r\n #--python imports--#\r\n import os\r\n import numpy as np\r\n import datetime\r\n import h5py\r\n import json\r\n import time\r\n\r\n\r\n\r\n\r\n\r\n ################# -- PARAMETERS -- ##################################\r\n\r\n\r\n img_width, img_height = 480, 480\r\n (x_train, y_train), _ = cifar10.load_data()\r\n classes = len(y_train[0])\r\n\r\n\r\n\r\n\r\n ##------initial training parameters -----##\r\n\r\n i_epochs = 10\r\n i_batch_size = 20\r\n i_steps_per_epoch = 100\r\n i_optimizer = optimizers.SGD(lr=0.0001, momentum=0.9)\r\n\r\n\r\n\r\n\r\n #################### MODELS ######################################\r\n\r\n def basemodel():\r\n \r\n base_model = nasnet.NASNetLarge(weights='imagenet', include_top=False)\r\n preprocess = nasnet.preprocess_input\r\n return base_model, preprocess\r\n\r\n\r\n\r\n def full_model():\r\n base_model, preprocess = basemodel()\r\n x = base_model.output\r\n x = GlobalAveragePooling2D()(x)\r\n x = Dense(2048, activation='relu')(x)\r\n x = Dropout(.60)(x)\r\n x= Dense(512, activation='relu')(x)\r\n predictions = Dense(classes, activation='softmax')(x)\r\n model = Model(inputs= base_model.input, outputs=predictions)\r\n return model,preprocess\r\n\r\n\r\n def initial_training_full():\r\n model, preprocess = full_model()\r\n for layer in model.layers[:-5]:\r\n layer.trainable = False\r\n\r\n\r\n model.compile(optimizer= i_optimizer,\r\n loss='sparse_categorical_crossentropy', metrics = ['accuracy'])\r\n\r\n print('Starting model training')\r\n \r\n \r\n\r\n\r\n\r\n\r\n history = model.fit(x_train, y_train,\r\n steps_per_epoch = i_steps_per_epoch,\r\n epochs 
= i_epochs,\r\n shuffle= True,\r\n verbose = 1)\r\n return history\r\n\r\n\r\n\r\n\r\n\r\n if __name__ == \"__main__\":\r\n initial_training_full()\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n", "pr_html_url": "https://github.com/keras-team/keras/pull/62", "file_loc": {"base_commit": "4bb90a5bfd762269adeee772e233a733a6d318a9", "files": [{"path": "keras/datasets/data_utils.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [3, 4]}, "(None, 'get_file', 7)": {"mod": [32]}}}, {"path": "setup.py", "status": "modified", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["keras/datasets/data_utils.py", "setup.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "keras-team", "repo_name": "keras", "base_commit": "3d9428d3445a429b535a247168d93b8a5910219d", "iss_has_pr": 1, "iss_html_url": "https://github.com/keras-team/keras/issues/6298", "iss_label": "", "title": "Performance Using HDF5 File vs In-Memory Dataset", "body": "I am trying out using the HDF5 file as the format of training/testing data, so that I can avoid the problem of big dataset which cannot fit into the memory.\r\n\r\nAs a test, I used a dataset with about 20 million rows and 10 columns as my input, and my response variable is a real-valued vector. The model is a simple sequential model with 2 hidden layers and 50 neurons per hidden layer, and the training batch size is 10000. I am still using Keras version 1.2.0.\r\n\r\nWhen the dataset is fully loaded into the memory, one epoch of training takes about 30 seconds. However, if I used HDF5 matrix with HDF5Matrix, one epoch of training takes about 360 seconds. Is this huge slow down common due to the I/O bottleneck? 
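For the NASNet layer-count mismatch above, a stale or partially downloaded weight file in the Keras cache is a common culprit; clearing it and letting Keras re-download is a low-risk first step. The cache path is the Keras default, but treating the cache as the root cause is an assumption, not the confirmed fix from the linked PR:

```python
import os

cache_dir = os.path.expanduser(os.path.join("~", ".keras", "models"))
if os.path.isdir(cache_dir):
    for name in os.listdir(cache_dir):
        if "nasnet" in name.lower():
            # Force a fresh download of the NASNet weights on the next run.
            os.remove(os.path.join(cache_dir, name))
```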
Shall I implement some perhaps better data loading process with fit_generator by myself by considering the specifications of my dataset?", "pr_html_url": "https://github.com/keras-team/keras/pull/6891", "file_loc": {"base_commit": "3d9428d3445a429b535a247168d93b8a5910219d", "files": [{"path": "keras/engine/training.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [12], "mod": [7, 9, 10]}, "('Model', 'fit_generator', 1715)": {"add": [1732, 1856], "mod": [1723, 1725, 1734, 1759, 1762, 1807, 1860, 1861, 1869, 1870, 1871, 1872, 1873, 1874, 1875, 1926, 1928]}, "('Model', 'evaluate_generator', 1956)": {"add": [1965, 1994], "mod": [1957, 1968, 1971, 1998, 1999, 2002, 2003, 2004, 2005, 2006, 2007, 2008]}, "('Model', 'predict_generator', 2056)": {"add": [2091], "mod": [2057, 2058, 2065, 2068, 2071, 2095, 2096, 2102, 2103, 2104, 2105, 2106, 2107, 2108]}, "('GeneratorEnqueuer', None, 582)": {"mod": [582, 583, 585, 587, 588, 589, 590, 592, 593, 594, 595, 596, 597, 599, 600, 602, 603, 604, 605, 606, 608, 609, 610, 611, 612, 613, 614, 615, 616, 617, 618, 620, 621, 622, 623, 624, 625, 626, 627, 628, 629, 630, 632, 633, 634, 635, 636, 637, 638, 639, 640, 641, 642, 643, 644, 645, 647, 648, 650, 651, 653, 655, 656, 657, 658, 659, 661, 662, 663, 664, 665, 666, 668, 669, 670, 672, 673, 674]}}}, {"path": "keras/legacy/interfaces.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [604]}}}, {"path": "keras/models.py", "status": "modified", "Loc": {"('Sequential', 'fit_generator', 1028)": {"mod": [1036, 1038, 1075, 1077, 1121, 1123]}, "('Sequential', 'evaluate_generator', 1127)": {"mod": [1128, 1129, 1140, 1142, 1162, 1164]}, "('Sequential', 'predict_generator', 1167)": {"mod": [1168, 1169, 1179, 1181, 1194, 1196]}}}, {"path": "keras/utils/__init__.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [10]}}}, {"path": "keras/utils/data_utils.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [14, 17], "mod": [5, 6, 8, 10, 12, 13, 16]}, "(None, 'validate_file', 261)": {"add": [284]}}}, {"path": "tests/keras/engine/test_training.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [3, 10, 14, 299]}, "(None, 'test_model_methods', 16)": {"mod": [83, 207]}}}, {"path": "tests/keras/legacy/interface_test.py", "status": "modified", "Loc": {"(None, 'test_generator_methods_interface', 781)": {"mod": [810, 811, 812, 813, 814, 815, 818, 821]}}}, {"path": "tests/keras/test_sequential_model.py", "status": "modified", "Loc": {"(None, 'test_sequential_fit_generator', 69)": {"mod": [101]}, "(None, 'test_sequential', 106)": {"mod": [136, 137]}}}, {"path": "tests/keras/utils/data_utils_test.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [5, 8, 66], "mod": [4, 7, 11]}}}, {"path": "tests/test_multiprocessing.py", "status": "modified", "Loc": {"(None, 'test_multiprocessing_training', 22)": {"mod": [52, 54, 60, 61]}, "(None, 'test_multiprocessing_training_fromfile', 105)": {"mod": [134, 136, 142, 143]}, "(None, 'test_multiprocessing_predicting', 149)": {"mod": [169, 171, 174, 175]}, "(None, 'test_multiprocessing_evaluating', 179)": {"mod": [202, 204, 207, 208]}, "(None, 'test_multiprocessing_fit_error', 212)": {"mod": [229, 232, 235, 238]}, "(None, 'test_multiprocessing_evaluate_error', 243)": {"mod": [258, 261, 264, 267]}, "(None, 'test_multiprocessing_predict_error', 272)": {"mod": [287, 290, 293, 296]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "3", "iss_reason": "3", 
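For the "better data loading process" the HDF5 report asks about, the usual pattern in later Keras versions is a `Sequence` that reads one batch per index so only a slice is ever resident. The class name and dataset keys here are hypothetical:

```python
import h5py
import numpy as np
from keras.utils import Sequence

class H5Sequence(Sequence):
    """Yields one batch per __getitem__, so only a slice is in memory."""

    def __init__(self, path, x_key, y_key, batch_size):
        self.path, self.x_key, self.y_key = path, x_key, y_key
        self.batch_size = batch_size
        with h5py.File(path, "r") as f:
            self.n_rows = f[x_key].shape[0]

    def __len__(self):
        return int(np.ceil(self.n_rows / float(self.batch_size)))

    def __getitem__(self, idx):
        lo = idx * self.batch_size
        hi = lo + self.batch_size
        # Re-open per batch so the Sequence stays safe across workers.
        with h5py.File(self.path, "r") as f:
            return f[self.x_key][lo:hi], f[self.y_key][lo:hi]
```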
"loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["tests/keras/utils/data_utils_test.py", "keras/utils/data_utils.py", "keras/utils/__init__.py", "tests/keras/legacy/interface_test.py", "keras/legacy/interfaces.py", "keras/engine/training.py", "keras/models.py"], "doc": [], "test": ["tests/keras/test_sequential_model.py", "tests/test_multiprocessing.py", "tests/keras/engine/test_training.py"], "config": [], "asset": []}}, {"organization": "keras-team", "repo_name": "keras", "base_commit": "2bfd1f2c950df5fc3f40b903c1966f1b0a48bee4", "iss_has_pr": 1, "iss_html_url": "https://github.com/keras-team/keras/issues/10855", "iss_label": "", "title": "fit_generator crashes though keras.utils.data_utils.Sequence was used", "body": "When `model.fit_generator` is used with `workers=0` and subclasses of [`keras.utils.data_utils.Sequence`](https://keras.io/utils/#sequence) for both training and validation data, API of `Sequence` is not recognized inside `evaluate_generator`, it raises:\r\n```python\r\n File \".../keras/legacy/interfaces.py\", line 91, in wrapper\r\n return func(*args, **kwargs)\r\n File \".../keras/engine/training.py\", line 1415, in fit_generator\r\n initial_epoch=initial_epoch)\r\n File \".../keras/engine/training_generator.py\", line 230, in fit_generator\r\n validation_steps,\r\n File \".../keras/legacy/interfaces.py\", line 91, in wrapper\r\n return func(*args, **kwargs)\r\n File \".../keras/engine/training.py\", line 1469, in evaluate_generator\r\n verbose=verbose)\r\n File \".../keras/engine/training_generator.py\", line 298, in evaluate_generator\r\n else:\r\nValueError: `steps=None` is only valid for a generator based on the `keras.utils.Sequence` class. Please specify `steps` or use the `keras.utils.Sequence` class.\r\n```\r\nExample code:\r\n```python\r\nfrom keras import Sequential\r\nfrom keras.layers import Dense\r\nfrom keras.utils.data_utils import Sequence\r\nimport numpy as np\r\n\r\nclass Dataset(Sequence):\r\n def __getitem__(self, index):\r\n return np.random.uniform(size=(16, 8)), np.random.uniform(size=(16, 1))\r\n def __len__(self):\r\n return 128\r\n\r\nmodel = Sequential([Dense(4, activation='relu', input_shape=(8,)),\r\n Dense(1, activation='sigmoid')])\r\nmodel.compile(loss='mse', optimizer='adam')\r\nmodel.fit_generator(generator=Dataset(), validation_data=Dataset(),\r\n workers=0)\r\n```\r\n\r\nIssue can be fixed [here](https://github.com/keras-team/keras/blob/7205d903fbc079bb99fbae0e3c02e6d2b4d227f0/keras/engine/training_generator.py#L124) by replacing:\r\n\r\n```python\r\nif isinstance(val_data, Sequence):\r\n val_enqueuer_gen = iter(val_data)\r\n```\r\nwith\r\n```python\r\nif isinstance(val_data, Sequence):\r\n val_enqueuer_gen = iter(val_data)\r\n validation_steps = len(val_data)\r\n```", "pr_html_url": "https://github.com/keras-team/keras/pull/11285", "file_loc": {"base_commit": "2bfd1f2c950df5fc3f40b903c1966f1b0a48bee4", "files": [{"path": "keras/engine/training_generator.py", "status": "modified", "Loc": {"(None, 'fit_generator', 21)": {"mod": [127]}}}, {"path": "tests/keras/engine/test_training.py", "status": "modified", "Loc": {"(None, 'test_model_methods', 126)": {"add": [470]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["keras/engine/training_generator.py"], "doc": [], "test": ["tests/keras/engine/test_training.py"], "config": [], "asset": []}}, 
{"organization": "keras-team", "repo_name": "keras", "base_commit": "c0d95fd6c2cd8ffc0738819825c3065e3c89977c", "iss_has_pr": 1, "iss_html_url": "https://github.com/keras-team/keras/issues/3859", "iss_label": "", "title": "clipnorm doesn't work with Embedding", "body": "I'm getting a Traceback every time \"clipnorm\" is used in NN with Embedding layer.\nHere is a simple script where the problem is obvious:\n\n``` python\nimport numpy as np\nfrom keras.layers import Input, Embedding\nfrom keras.optimizers import Adam\nfrom keras.models import Model\n\ninput_layer = Input(shape = (1,) )\n\nembedding = Embedding(input_dim = 1,\n output_dim = 1)(input_layer)\n\nmodel = Model(input = input_layer, output = embedding)\n\nmodel.compile(optimizer = Adam(clipnorm = 1.0), loss = 'mse')\n\nX = np.array([[1]])\nY = np.array([[[0.5]]])\nmodel.fit(X, Y, nb_epoch = 1)\n```\n\nFailure:\n\n``` shell\nI tensorflow/core/common_runtime/gpu/gpu_device.cc:867] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX TITAN X, pci bus id: 0000:01:00.0)\nI tensorflow/core/common_runtime/gpu/gpu_device.cc:867] Creating TensorFlow device (/gpu:1) -> (device: 1, name: GeForce GTX TITAN X, pci bus id: 0000:02:00.0)\nTraceback (most recent call last):\n File \"./clipnorm-bug.py\", line 20, in \n model.fit(X, Y, nb_epoch = 1)\n File \"/usr/local/lib/python3.5/dist-packages/keras/engine/training.py\", line 1079, in fit\n self._make_train_function()\n File \"/usr/local/lib/python3.5/dist-packages/keras/engine/training.py\", line 696, in _make_train_function\n self.total_loss)\n File \"/usr/local/lib/python3.5/dist-packages/keras/optimizers.py\", line 379, in get_updates\n grads = self.get_gradients(loss, params)\n File \"/usr/local/lib/python3.5/dist-packages/keras/optimizers.py\", line 71, in get_gradients\n grads = [clip_norm(g, self.clipnorm, norm) for g in grads]\n File \"/usr/local/lib/python3.5/dist-packages/keras/optimizers.py\", line 71, in \n grads = [clip_norm(g, self.clipnorm, norm) for g in grads]\n File \"/usr/local/lib/python3.5/dist-packages/keras/optimizers.py\", line 9, in clip_norm\n g = K.switch(n >= c, g * c / n, g)\nTypeError: unsupported operand type(s) for *: 'IndexedSlices' and 'float'\n```\n\nKeras version is 1.1.0, TensorFlow is 0.10rc\n\nclipvalue on the other hand works fine.\n", "pr_html_url": "https://github.com/keras-team/keras/pull/4815", "file_loc": {"base_commit": "c0d95fd6c2cd8ffc0738819825c3065e3c89977c", "files": [{"path": ".travis.yml", "status": "modified", "Loc": {"(None, None, None)": {"mod": [52, 54]}}}, {"path": "keras/backend/tensorflow_backend.py", "status": "modified", "Loc": {"(None, 'rnn', 1640)": {"add": [1699], "mod": [1720, 1738, 1745, 1746, 1747]}, "(None, 'batch_dot', 811)": {"mod": [870, 871, 872, 873]}, "(None, '_step', 1798)": {"mod": [1808, 1809]}, "(None, 'elu', 1923)": {"mod": [1934]}, "(None, 'random_binomial', 2402)": {"mod": [2407, 2408, 2409]}}}, {"path": "keras/optimizers.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [1], "mod": [4]}}}, {"path": "tests/keras/test_optimizers.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [76]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["keras/optimizers.py", "keras/backend/tensorflow_backend.py"], "doc": [], "test": ["tests/keras/test_optimizers.py"], "config": [".travis.yml"], "asset": []}}, {"organization": 
"keras-team", "repo_name": "keras", "base_commit": "80fbbc3a6a2a30f391bad2aa85e7558c50ca0709", "iss_has_pr": 1, "iss_html_url": "https://github.com/keras-team/keras/issues/4452", "iss_label": "", "title": "PReLU should be channelwise", "body": "```\r\n\r\nfrom keras.models import Sequential\r\nfrom keras.layers.advanced_activations import PReLU\r\nfrom keras.layers import Convolution2D, MaxPooling2D\r\n\r\nmodel = Sequential()\r\nmodel.add(Convolution2D(32, 5, 5, input_shape=(28,28,1)))\r\nmodel.add(PReLU())\r\n\r\nmodel.summary()\r\n```\r\n\r\nThis script produces the PReLU layer with 18432 parameters. If the PReLU was implemented according to the paper (https://arxiv.org/abs/1502.01852), number of parameters would be 32. Shouldn't this not be implemented that way? It adds an unnecessarily large number of parameters to the model.", "pr_html_url": "https://github.com/keras-team/keras/pull/4141", "file_loc": {"base_commit": "80fbbc3a6a2a30f391bad2aa85e7558c50ca0709", "files": [{"path": "keras/backend/theano_backend.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [726]}}}, {"path": "keras/layers/advanced_activations.py", "status": "modified", "Loc": {"('PReLU', None, 38)": {"add": [54], "mod": [59]}, "('PReLU', '__init__', 59)": {"add": [62]}, "('ParametricSoftplus', None, 118)": {"add": [133]}, "('ParametricSoftplus', '__init__', 138)": {"add": [143], "mod": [139]}, "('SReLU', None, 201)": {"add": [216]}, "('SReLU', '__init__', 221)": {"add": [227], "mod": [222]}, "('PReLU', 'build', 65)": {"mod": [66]}, "('PReLU', 'call', 74)": {"mod": [76]}, "('ParametricSoftplus', 'build', 146)": {"mod": [147, 148, 150]}, "('ParametricSoftplus', 'call', 158)": {"mod": [159]}, "('SReLU', 'build', 230)": {"mod": [231, 238, 240, 242, 244]}, "('SReLU', 'call', 251)": {"mod": [252, 253, 254, 255]}}}, {"path": "tests/keras/layers/test_advanced_activations.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [19, 51]}, "(None, 'test_parametric_softplus', 29)": {"mod": [31, 32, 33, 34, 35]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["keras/backend/theano_backend.py", "keras/layers/advanced_activations.py"], "doc": [], "test": ["tests/keras/layers/test_advanced_activations.py"], "config": [], "asset": []}}, {"organization": "nvbn", "repo_name": "thefuck", "base_commit": "070bb2ff28b108e5fc627efa1d95ded00ca067c8", "iss_has_pr": 1, "iss_html_url": "https://github.com/nvbn/thefuck/issues/537", "iss_label": "", "title": "Args are lost with git_push", "body": "From the README:\n\n> `git_push` \u2013 adds `--set-upstream origin $branch` to previous failed `git push`;\n\nThis isn't actually the case, `--set-upstream` isn't added to the previous push, the entire command is replaced with just `git push --set-upstream origin $branch` regardless of whether there were any args.\n\nExample:\n\n```\n$ git push --quiet\nfatal: The current branch new-branch has no upstream branch.\nTo push the current branch and set the remote as upstream, use\n\n git push --set-upstream origin new-branch\n\n$ fuck\ngit push --set-upstream origin new-branch [enter/\u2191/\u2193/ctrl+c]\n```\n\nThe `--quiet` arg is lost, ideally `git push --set-upstream origin new-branch --quiet` would be suggested.\n", "pr_html_url": "https://github.com/nvbn/thefuck/pull/538", "file_loc": {"base_commit": "070bb2ff28b108e5fc627efa1d95ded00ca067c8", "files": 
[{"path": "tests/rules/test_git_push.py", "status": "modified", "Loc": {"(None, 'test_match', 16)": {"add": [16]}, "(None, 'test_get_new_command', 22)": {"add": [24]}}}, {"path": "thefuck/rules/git_push.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0]}, "(None, 'get_new_command', 11)": {"mod": [12]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["thefuck/rules/git_push.py"], "doc": [], "test": ["tests/rules/test_git_push.py"], "config": [], "asset": []}}, {"organization": "nvbn", "repo_name": "thefuck", "base_commit": "86efc6a252c39526c643ea3335db02c4621798e9", "iss_has_pr": 1, "iss_html_url": "https://github.com/nvbn/thefuck/issues/718", "iss_label": "next release", "title": "Cannot run fuck in zsh without export errors", "body": "Installed via pip. Ran eval \"$(thefuck --alias)\" in both bash and zsh. $ fuck then produces 'No fucks given' in bash while in zsh it complains about invalid arguments.\r\n```\r\n$ eval \"$(thefuck --alias)\"\r\n$ fuck\r\nfuck:export:3: not valid in this context: -'\r\nfuck:3: not an identifier: [skip\r\n```\r\n\r\nAlso FWIW:\r\n```\r\n$ zsh --version\r\nzsh 4.3.11 (x86_64-redhat-linux-gnu)\r\n$ cat /etc/redhat-release\r\nRed Hat Enterprise Linux Server release 6.8 (Santiago)\r\n```", "pr_html_url": "https://github.com/nvbn/thefuck/pull/820", "file_loc": {"base_commit": "86efc6a252c39526c643ea3335db02c4621798e9", "files": [{"path": "thefuck/shells/zsh.py", "status": "modified", "Loc": {"('Zsh', 'app_alias', 12)": {"mod": [19, 20]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["thefuck/shells/zsh.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "nvbn", "repo_name": "thefuck", "base_commit": "f700b23f5725e14f8ee6ffb0a5c44ab9eaf42b53", "iss_has_pr": 1, "iss_html_url": "https://github.com/nvbn/thefuck/issues/717", "iss_label": "", "title": "alias produces invalid code for fish as of 3.24", "body": "As of 3.24 the output from `thefuck --alias` is incompatible with fish\r\n\r\n**Output from 3.24**\r\n```\r\n$ thefuck --alias\r\n\r\n function fuck () {\r\n TF_PYTHONIOENCODING=$PYTHONIOENCODING;\r\n export TF_ALIAS=fuck;\r\n export TF_SHELL_ALIASES=$(alias);\r\n export TF_HISTORY=$(fc -ln -10);\r\n export PYTHONIOENCODING=utf-8;\r\n TF_CMD=$(\r\n thefuck THEFUCK_ARGUMENT_PLACEHOLDER $@\r\n ) && eval $TF_CMD;\r\n unset TF_HISTORY;\r\n export PYTHONIOENCODING=$TF_PYTHONIOENCODING;\r\n history -s $TF_CMD;\r\n }\r\n \r\n$ thefuck --version \r\nThe Fuck 3.24 using Python 3.5.2\r\n```\r\n\r\n**Output from 3.23**\r\n```\r\n$ thefuck --alias\r\nfunction fuck -d \"Correct your previous console command\"\r\n set -l fucked_up_command $history[1]\r\n env TF_ALIAS=fuck PYTHONIOENCODING=utf-8 thefuck $fucked_up_command | read -l unfucked_command\r\n if [ \"$unfucked_command\" != \"\" ]\r\n eval $unfucked_command\r\n builtin history delete --exact --case-sensitive -- $fucked_up_command\r\n builtin history merge ^ /dev/null\r\n end\r\nend\r\n \r\n$ thefuck --version \r\nThe Fuck 3.23 using Python 3.5.2\r\n```\r\n\r\n**Fish version output**\r\n```\r\n$ fish --version\r\nfish, version 2.6.0\r\n```", "pr_html_url": "https://github.com/nvbn/thefuck/pull/744", "file_loc": {"base_commit": 
"f700b23f5725e14f8ee6ffb0a5c44ab9eaf42b53", "files": [{"path": "tests/shells/test_bash.py", "status": "modified", "Loc": {"('TestBash', 'test_app_alias_variables_correctly_set', 51)": {"add": [53]}}}, {"path": "tests/shells/test_fish.py", "status": "modified", "Loc": {"('TestFish', 'test_app_alias', 70)": {"add": [73]}}}, {"path": "tests/shells/test_tcsh.py", "status": "modified", "Loc": {"('TestTcsh', None, 8)": {"add": [46]}}}, {"path": "tests/shells/test_zsh.py", "status": "modified", "Loc": {"('TestZsh', 'test_app_alias_variables_correctly_set', 51)": {"add": [53]}}}, {"path": "thefuck/shells/fish.py", "status": "modified", "Loc": {"('Fish', 'app_alias', 28)": {"mod": [38]}}}, {"path": "thefuck/shells/tcsh.py", "status": "modified", "Loc": {"('Tcsh', 'app_alias', 9)": {"mod": [10]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["thefuck/shells/tcsh.py", "thefuck/shells/fish.py"], "doc": [], "test": ["tests/shells/test_zsh.py", "tests/shells/test_fish.py", "tests/shells/test_tcsh.py", "tests/shells/test_bash.py"], "config": [], "asset": []}}, {"organization": "nvbn", "repo_name": "thefuck", "base_commit": "18992f246a84331832c399283ab930408db21d86", "iss_has_pr": 1, "iss_html_url": "https://github.com/nvbn/thefuck/issues/652", "iss_label": "", "title": "git push to branch not of same name: use git's suggestion", "body": "I checked out a branch with a different name than the remote counterpart:\r\n\r\n```\r\n$ git checkout -b local-branch myupstream/remote-branch\r\n```\r\n\r\nWhen I attempted to push my changes upstream, I got the following error:\r\n```\r\n$ git push -f\r\nfatal: The upstream branch of your current branch does not match\r\nthe name of your current branch. 
To push to the upstream branch\r\non the remote, use\r\n\r\n git push myupstream HEAD:remote-branch\r\n\r\nTo push to the branch of the same name on the remote, use\r\n\r\n git push myupstream local-branch\r\n```\r\n\r\nHowever, thefuck didn't use this information to provide a suggestion:\r\n```\r\n$ fuck\r\ngit push -d [enter/\u2191/\u2193/ctrl+c] \r\n```", "pr_html_url": "https://github.com/nvbn/thefuck/pull/703", "file_loc": {"base_commit": "18992f246a84331832c399283ab930408db21d86", "files": [{"path": "README.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [198]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []}}, {"organization": "nvbn", "repo_name": "thefuck", "base_commit": "f604756cb7639ab3efb1cd50c85e633a18cd9f9d", "iss_has_pr": 1, "iss_html_url": "https://github.com/nvbn/thefuck/issues/486", "iss_label": "windows", "title": "werid behavior", "body": "python 2.7.10\nwindows powershell\n\nwhen I type:\n\n``` bash\n> git comit -am 'test'\n> fuck\n```\n\nit gives me something that is completely weird:\n\n```\nC:\\Python27\\lib\\site-packages\\win_unicode_console\\__init__.py:27: RuntimeWarning: sys.stdin.encoding == 'utf-8', whereas sys.stdout.encoding == None, readline hook consumer may assume they are the same\n readline_hook.enable(use_pyreadline=use_pyreadline)\n\u001b[1mgit commit -am 'test'\u001b[0m [\u001b[32menter\u001b[0m/\u001b[34m\u2191\u001b[0m/\u001b[34m\u2193\u001b[0m/\u001b[31mctrl+c\u001b[0m]\n```\n\nand then give an exception:\n\n```\nTraceback (most recent call last):\n File \"C:\\Python27\\lib\\runpy.py\", line 162, in _run_module_as_main\n \"__main__\", fname, loader, pkg_name)\n File \"C:\\Python27\\lib\\runpy.py\", line 72, in _run_code\n exec code in run_globals\n File \"C:\\Python27\\Scripts\\thefuck.exe\\__main__.py\", line 9, in \n File \"C:\\Python27\\lib\\site-packages\\thefuck\\main.py\", line 80, in main\n fix_command()\n File \"C:\\Python27\\lib\\site-packages\\thefuck\\main.py\", line 32, in fix_command\n selected_command = select_command(corrected_commands)\n File \"C:\\Python27\\lib\\site-packages\\thefuck\\ui.py\", line 80, in select_command\n for action in read_actions():\n File \"C:\\Python27\\lib\\site-packages\\thefuck\\ui.py\", line 13, in read_actions\n key = get_key()\n File \"C:\\Python27\\lib\\site-packages\\thefuck\\system\\win32.py\", line 25, in get_key\n return ch.decode(sys.stdout.encoding)\nTypeError: decode() argument 1 must be string, not None\n```\n\nI think this is related to UTF-8 encoding in python2...\nI cannot add python3 to path, because my vim plugins will require python2 in the path...\n\nIs there any way I can fix this?\n", "pr_html_url": "https://github.com/nvbn/thefuck/pull/487", "file_loc": {"base_commit": "f604756cb7639ab3efb1cd50c85e633a18cd9f9d", "files": [{"path": "thefuck/system/win32.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0]}, "(None, 'get_key', 13)": {"mod": [25]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "3", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["thefuck/system/win32.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "nvbn", "repo_name": "thefuck", "base_commit": 
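The PowerShell traceback above pins the crash to `ch.decode(sys.stdout.encoding)` with `sys.stdout.encoding` being `None`; a defensive fallback encoding avoids passing `None` to `decode()`. This is an assumption about the shape of the patch to `get_key`, consistent with the modified line in `thefuck/system/win32.py`:

```python
import sys

def decode_key(ch):
    # sys.stdout.encoding can be None under PowerShell on Python 2,
    # so fall back to UTF-8 rather than calling decode(None).
    encoding = sys.stdout.encoding or 'utf-8'
    return ch.decode(encoding)
```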
"926e9ef963464075184f5c2e04ffdce9cb971997", "iss_has_pr": 1, "iss_html_url": "https://github.com/nvbn/thefuck/issues/809", "iss_label": "windows", "title": "Powershell, The term 'thefuck' is not recognized as the name of a cmdlet", "body": "**The output of `thefuck --version` (something like `The Fuck 3.1 using Python 3.5.0`):**\r\n\r\n The Fuck 3.26 using Python 3.6.5\r\n\r\n**Your shell and its version (`bash`, `zsh`, *Windows PowerShell*, etc.):**\r\n\r\n Windows Powershell, 5.1.16299.251\r\n\r\n**Your system (Debian 7, ArchLinux, Windows, etc.):**\r\n\r\n Windows 10\r\n\r\n**How to reproduce the bug:**\r\n\r\n - install thefuck\r\n\t`py -m pip install thefuck`\r\n - put the alias into your powershell profile\r\n ```\r\n $env:PYTHONIOENCODING=\"utf-8\"\r\n iex \"$(thefuck --alias)\"\r\n ```\r\n - start a new powershell\r\n\r\nResult:\r\n```\r\nthefuck : The term 'thefuck' is not recognized as the name of a cmdlet, function, script file, \r\nor operable program. Check the spelling of the name, or if a path was included, verify that the\r\npath is correct and try again.\r\n```\r\n \r\n**The output of The Fuck with `THEFUCK_DEBUG=true` exported (typically execute `export THEFUCK_DEBUG=true` in your shell before The Fuck):**\r\n\r\nHmm I can't manage to output this.\r\n\r\n\r\n**Anything else you think is relevant:**\r\n\r\nI also tried it with the following in my profile:\r\n```\r\n...\r\niex \"$(C:\\Users\\MyUserName\\AppData\\Local\\Programs\\Python\\Python36-32\\Scripts\\thefuck.exe --alias)\"\r\n```\r\nThen I get a similar error after executing `fuck`:\r\n```\r\nthefuck : The term 'thefuck' is not recognized as the name of a cmdlet, function, script file,\r\nor operable program. Check the spelling of the name, or if a path was included, verify that the\r\npath is correct and try again.\r\nAt line:1 char:141\r\n+ ... 
g]::IsNullOrWhiteSpace($history)) { $fuck = $(thefuck $args $ ...\r\n+ ~~~~~~~\r\n + CategoryInfo : ObjectNotFound: (thefuck:String) [], CommandNotFoundException\r\n + FullyQualifiedErrorId : CommandNotFoundException\r\n```\r\n", "pr_html_url": "https://github.com/nvbn/thefuck/pull/844", "file_loc": {"base_commit": "926e9ef963464075184f5c2e04ffdce9cb971997", "files": [{"path": "thefuck/shells/__init__.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [19]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["thefuck/shells/__init__.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "nvbn", "repo_name": "thefuck", "base_commit": "18992f246a84331832c399283ab930408db21d86", "iss_has_pr": 1, "iss_html_url": "https://github.com/nvbn/thefuck/issues/658", "iss_label": "hacktoberfest", "title": "exception while \"run fuck second time for configuring it automatically.\"", "body": "```\r\nluca@g550jk ~> fuck\r\nSeems like fuck alias isn't configured!\r\nPlease put eval (thefuck --alias | tr '\r\n' ';') in your ~/.config/fish/config.fish and apply changes with fish or restart your shell.\r\nOr run fuck second time for configuring it automatically.\r\nMore details - https://github.com/nvbn/thefuck#manual-installation\r\nluca@g550jk ~> fuck\r\nTraceback (most recent call last):\r\n File \"/usr/bin/fuck\", line 11, in \r\n load_entry_point('thefuck==3.18', 'console_scripts', 'fuck')()\r\n File \"/usr/lib/python3.6/site-packages/thefuck/not_configured.py\", line 80, in main\r\n elif _is_second_run():\r\n File \"/usr/lib/python3.6/site-packages/thefuck/not_configured.py\", line 40, in _is_second_run\r\n if not tracker_path.exists() or not shell.get_history()[-1] == 'fuck':\r\nIndexError: list index out of range\r\nluca@g550jk ~> thefuck --version\r\nThe Fuck 3.18 using Python 3.6.1\r\nluca@g550jk ~> lsb_release -d\r\nDescription: Arch Linux\r\nluca@g550jk ~> echo $SHELL\r\n/usr/bin/fish\r\nluca@g550jk ~> fish --version\r\nfish, version 2.5.0\r\n```", "pr_html_url": "https://github.com/nvbn/thefuck/pull/704", "file_loc": {"base_commit": "18992f246a84331832c399283ab930408db21d86", "files": [{"path": "thefuck/shells/fish.py", "status": "modified", "Loc": {"('Fish', 'how_to_configure', 72)": {"mod": [74]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["thefuck/shells/fish.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "nvbn", "repo_name": "thefuck", "base_commit": "328e65179e8a886b44611850b908109868884c14", "iss_has_pr": 1, "iss_html_url": "https://github.com/nvbn/thefuck/issues/229", "iss_label": "", "title": "Should suggest 'git tag' when typing 'git tags' instead of 'git stage'", "body": "```\n$ git tags\ngit: 'tags' is not a git command. 
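The `IndexError` in the fish "second run" report comes from indexing `shell.get_history()[-1]` when the history is empty; guarding the lookup is enough. This is a reconstruction of the likely guard with simplified types, not the verified patch:

```python
from pathlib import Path

def is_second_run(tracker_path: Path, history: list) -> bool:
    # Never index an empty history with [-1].
    return tracker_path.exists() and bool(history) and history[-1] == 'fuck'

print(is_second_run(Path('/tmp/thefuck_tracker'), []))  # False, no IndexError
```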
See 'git --help'.\n\nDid you mean one of these?\n stage\n tag\n$ fuck\ngit stage\nNothing specified, nothing added.\nMaybe you wanted to say 'git add .'?\n```\n\nAny way to improve the suggestion in this case?\n", "pr_html_url": "https://github.com/nvbn/thefuck/pull/285", "file_loc": {"base_commit": "328e65179e8a886b44611850b908109868884c14", "files": [{"path": "tests/rules/test_tmux.py", "status": "modified", "Loc": {"(None, 'test_get_new_command', 17)": {"mod": [19]}}}, {"path": "thefuck/rules/tmux.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0]}, "(None, 'get_new_command', 10)": {"mod": [11, 14]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["thefuck/rules/tmux.py"], "doc": [], "test": ["tests/rules/test_tmux.py"], "config": [], "asset": []}}, {"organization": "nvbn", "repo_name": "thefuck", "base_commit": "470c0ef699f31bba76445f18d7cbb289af388565", "iss_has_pr": 1, "iss_html_url": "https://github.com/nvbn/thefuck/issues/439", "iss_label": "", "title": "Windows - ImportError: No module named dbm", "body": "I have installed thefuck with pip using Python 2.7.11 in Windows. When running `thefuck` from Powershell, I get the error message `ImportError: No module named dbm`. Any ideas?\n", "pr_html_url": "https://github.com/nvbn/thefuck/pull/458", "file_loc": {"base_commit": "470c0ef699f31bba76445f18d7cbb289af388565", "files": [{"path": "thefuck/utils.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1, 19, 21, 22]}, "(None, '_cache', 211)": {"mod": [230]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["thefuck/utils.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "nvbn", "repo_name": "thefuck", "base_commit": "d92765d5df6607cb2f2fb67cee7b63f64ac7aa6b", "iss_has_pr": 1, "iss_html_url": "https://github.com/nvbn/thefuck/issues/301", "iss_label": "bug", "title": "UnicodeEncodeError on non-english systems", "body": "When a command outputs a sequence with non-ascii characters, no fucks are given with a dirty UnicodeEncodeError followed by a grep --compact not recognized option.\n\nExample : \n\n```\n~$ aptget blah\nLa commande \u00ab\u00a0aptget\u00a0\u00bb est introuvable, vouliez-vous dire\u00a0:\n La commande \u00ab\u00a0apt-get\u00a0\u00bb du paquet \u00ab\u00a0apt\u00a0\u00bb (main)\naptget\u00a0: commande introuvable\n~$ fuck\napt-get blah [enter/ctrl+c]\nE: L'op\u00e9ration blah n'est pas valable\n~$ fuck\nTraceback (most recent call last):\n File \"/usr/local/lib/python2.7/dist-packages/thefuck-2.1-py2.7.egg/thefuck/main.py\", line 100, in get_matched_rule\n if rule.match(command, settings):\n File \"/usr/local/lib/python2.7/dist-packages/thefuck-2.1-py2.7.egg/thefuck/rules/history.py\", line 28, in match\n _history_of_exists_without_current(command)))\n File \"/usr/local/lib/python2.7/dist-packages/thefuck-2.1-py2.7.egg/thefuck/utils.py\", line 109, in wrapper\n memo[key] = fn(*args, **kwargs)\n File \"/usr/local/lib/python2.7/dist-packages/thefuck-2.1-py2.7.egg/thefuck/rules/history.py\", line 19, in _history_of_exists_without_current\n history = get_history()\n File \"/usr/local/lib/python2.7/dist-packages/thefuck-2.1-py2.7.egg/thefuck/utils.py\", line 109, in wrapper\n memo[key] = 
fn(*args, **kwargs)\n File \"/usr/local/lib/python2.7/dist-packages/thefuck-2.1-py2.7.egg/thefuck/shells.py\", line 267, in get_history\n return list(_get_shell().get_history())\n File \"/usr/local/lib/python2.7/dist-packages/thefuck-2.1-py2.7.egg/thefuck/shells.py\", line 67, in get_history\n prepared = self._script_from_history(line)\\\n File \"/usr/local/lib/python2.7/dist-packages/thefuck-2.1-py2.7.egg/thefuck/shells.py\", line 103, in _script_from_history\n print(line)\nUnicodeEncodeError: 'ascii' codec can't encode character u'\\xe9' in position 4: ordinal not in range(128)\n----------------------------\n\nNo fuck given\ngrep\u00a0: option non reconnue \u00ab --compact \u00bb\nUtilisation\u00a0: grep [OPTION]... MOTIF [FICHIER]...\nEx\u00e9cutez \u00ab\u00a0grep --help\u00a0\u00bb pour obtenir des renseignements compl\u00e9mentaires.\nsync: ignorer tous les arguments\n```\n", "pr_html_url": "https://github.com/nvbn/thefuck/pull/458", "file_loc": {"base_commit": "470c0ef699f31bba76445f18d7cbb289af388565", "files": [{"path": "thefuck/utils.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1, 19, 21, 22]}, "(None, '_cache', 211)": {"mod": [230]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["thefuck/utils.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "home-assistant", "repo_name": "core", "base_commit": "7d71a2c979bc9ae300e177cbedb059a6133174b1", "iss_has_pr": 1, "iss_html_url": "https://github.com/home-assistant/core/issues/34897", "iss_label": "integration: homekit", "title": "homekit integration: work but error in log", "body": "\r\n## The problem\r\nNo problem for functionality but a lot of same error in log\r\n\r\n\r\n## Environment\r\n0.109\r\n\r\n- Home Assistant Core release with the issue: 0.109\r\n- Last working Home Assistant Core release (if known): 0.108.9\r\n- Operating environment (Home Assistant/Supervised/Docker/venv): Home Assistant in venv\r\n- Integration causing this issue: homekit\r\n- Link to integration documentation on our website: integration:homekit\r\n\r\n## Problem-relevant `configuration.yaml`\r\n\r\n\r\n```\r\nhomekit:\r\n filter:\r\n exclude_domains:\r\n - automation\r\n - device_tracker\r\n - person\r\n - script\r\n - group\r\n exclude_entities:\r\n - climate.netatmo_ingresso\r\n - climate.netatmo_cucina\r\n - binary_sensor.telecomando_a\r\n - binary_sensor.telecomando_b\r\n - binary_sensor.telecomando_c\r\n - binary_sensor.telecomando_d\r\n - light.led_ingresso2\r\n - light.led_ingresso3\r\n - light.led_ingresso4\r\n - light.led_ingresso5\r\n - switch.ovunque_do_not_disturb_switch\r\n - switch.ovunque_repeat_switch\r\n - switch.altoparlante_do_not_disturb_switch\r\n - switch.davide_s_reverb_ai_do_not_disturb_switch\r\n - switch.echo_studio_repeat_switch\r\n - switch.dappertutto_do_not_disturb_switch\r\n - switch.echo_dot_ingresso_repeat_switch\r\n - switch.davide_s_reverb_ai_shuffle_switch\r\n - switch.dappertutto_repeat_switch\r\n - switch.this_device_do_not_disturb_switch\r\n - switch.altoparlante_repeat_switch\r\n - switch.davide_s_alexa_apps_do_not_disturb_switch\r\n - switch.echo_dot_camera_letto_do_not_disturb_switch\r\n - switch.echo_dot_camera_letto_shuffle_switch\r\n - switch.echo_dot_ingresso_do_not_disturb_switch\r\n - switch.echo_show_shuffle_switch\r\n - switch.davide_s_reverb_ai_repeat_switch\r\n - 
switch.echo_dot_camera_letto_repeat_switch\r\n - switch.echo_dot_ingresso_shuffle_switch\r\n - switch.altoparlante_shuffle_switch\r\n - switch.dappertutto_shuffle_switch\r\n - switch.ovunque_shuffle_switch\r\n - switch.echo_show_repeat_switch\r\n - switch.echo_studio_shuffle_switch\r\n - switch.echo_show_do_not_disturb_switch\r\n - switch.echo_studio_do_not_disturb_switch\r\n - binary_sensor.updater\r\n entity_config:\r\n media_player.lg_tv:\r\n name: TV\r\n\r\n```\r\n\r\n## Traceback/Error logs\r\n\r\n\r\n```\r\nApr 29 23:03:37 raspberrypi hass[6324]: 2020-04-29 23:03:37 ERROR (MainThread) [homeassistant.core] Error doing job: Future exception was never retrieved\r\nApr 29 23:03:37 raspberrypi hass[6324]: Traceback (most recent call last):\r\nApr 29 23:03:37 raspberrypi hass[6324]: File \"/usr/lib/python3.7/concurrent/futures/thread.py\", line 57, in run\r\nApr 29 23:03:37 raspberrypi hass[6324]: result = self.fn(*self.args, **self.kwargs)\r\nApr 29 23:03:37 raspberrypi hass[6324]: File \"/srv/homeassistant/lib/python3.7/site-packages/homeassistant/components/homekit/accessories.py\", line 252, in update_battery\r\nApr 29 23:03:37 raspberrypi hass[6324]: if self._char_battery.value != battery_level:\r\nApr 29 23:03:37 raspberrypi hass[6324]: AttributeError: 'TemperatureSensor' object has no attribute '_char_battery'\r\n\r\n```\r\n\r\n## Additional information\r\n\r\n", "pr_html_url": "https://github.com/home-assistant/core/pull/34906", "file_loc": {"base_commit": "7d71a2c979bc9ae300e177cbedb059a6133174b1", "files": [{"path": "homeassistant/components/homekit/accessories.py", "status": "modified", "Loc": {"('HomeAccessory', '__init__', 85)": {"add": [101]}, "('HomeAccessory', 'update_battery', 245)": {"add": [249], "mod": [261]}}}, {"path": "tests/components/homekit/test_accessories.py", "status": "modified", "Loc": {"(None, 'test_missing_linked_battery_sensor', 344)": {"mod": [366, 367, 368]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["homeassistant/components/homekit/accessories.py"], "doc": [], "test": ["tests/components/homekit/test_accessories.py"], "config": [], "asset": []}}, {"organization": "home-assistant", "repo_name": "core", "base_commit": "acf41d03db6293b9897b8194f5530793de89952f", "iss_has_pr": 1, "iss_html_url": "https://github.com/home-assistant/core/issues/26146", "iss_label": "integration: mopar", "title": "mopar component still does not work", "body": "Home Assistant release with the issue:\r\n0.97.2\r\n\r\nLast working Home Assistant release (if known):\r\n\r\nOperating environment (Hass.io/Docker/Windows/etc.):\r\n\r\narch | x86_64\r\ndev | false\r\ndocker | true\r\nhassio | true\r\nos_name | Linux\r\npython_version | 3.7.4\r\ntimezone | America/Denver\r\nversion | 0.97.2\r\nvirtualenv | false\r\n\r\n**Description of problem:**\r\nComponets show up but nothing works.\r\n```\r\n'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte\r\n```\r\n\r\n\r\n**Problem-relevant `configuration.yaml` entries and (fill out even if it seems unimportant):**\r\n```yaml\r\n\r\nmopar:\r\n username: !secret mopar_username\r\n password: !secret mopar_password\r\n pin: !secret mopar_pin\r\n```\r\n\r\n**Traceback (if applicable):**\r\n```\r\nFile \"/usr/local/lib/python3.7/site-packages/requests/models.py\", line 897, in json\r\n return complexjson.loads(self.text, **kwargs)\r\nFile 
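The homekit `AttributeError` above means `update_battery` can run on accessories that never created `_char_battery`; initialising the attribute and checking it first removes the repeated log error. This is consistent with the `accessories.py` lines the fix touches, but the body is an assumed reconstruction:

```python
class HomeAccessory:
    def __init__(self):
        # Define the attribute unconditionally so update_battery can test
        # it even when no linked battery sensor created the characteristic.
        self._char_battery = None

    def update_battery(self, battery_level):
        if self._char_battery is None:
            return  # accessory has no battery characteristic; nothing to do
        if self._char_battery.value != battery_level:
            self._char_battery.set_value(battery_level)
```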
\"/usr/local/lib/python3.7/site-packages/simplejson/__init__.py\", line 518, in loads\r\n return _default_decoder.decode(s)\r\nFile \"/usr/local/lib/python3.7/site-packages/simplejson/decoder.py\", line 370, in decode\r\n obj, end = self.raw_decode(s)\r\nFile \"/usr/local/lib/python3.7/site-packages/simplejson/decoder.py\", line 400, in raw_decode\r\n return self.scan_once(s, idx=_w(s, idx).end())\r\nsimplejson.errors.JSONDecodeError: Expecting value: line 1 column 1 (char 0)\r\n```\r\n\r\n**Additional information:**", "pr_html_url": "https://github.com/home-assistant/core/pull/33066", "file_loc": {"base_commit": "acf41d03db6293b9897b8194f5530793de89952f", "files": [{"path": ".coveragerc", "status": "modified", "Loc": {"(None, None, None)": {"mod": [435]}}}, {"path": "homeassistant/components/mopar/__init__.py", "status": "removed", "Loc": {}}, {"path": "homeassistant/components/mopar/lock.py", "status": "removed", "Loc": {}}, {"path": "homeassistant/components/mopar/manifest.json", "status": "removed", "Loc": {}}, {"path": "homeassistant/components/mopar/sensor.py", "status": "removed", "Loc": {}}, {"path": "homeassistant/components/mopar/services.yaml", "status": "removed", "Loc": {}}, {"path": "homeassistant/components/mopar/switch.py", "status": "removed", "Loc": {}}, {"path": "requirements_all.txt", "status": "modified", "Loc": {"(None, None, None)": {"mod": [885, 886]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["homeassistant/components/mopar/sensor.py", ".coveragerc", "homeassistant/components/mopar/manifest.json", "homeassistant/components/mopar/__init__.py", "homeassistant/components/mopar/lock.py", "homeassistant/components/mopar/switch.py"], "doc": [], "test": [], "config": ["homeassistant/components/mopar/services.yaml", "requirements_all.txt"], "asset": []}}, {"organization": "home-assistant", "repo_name": "core", "base_commit": "f020d65416177e4647a846ec017f441fe08c0696", "iss_has_pr": 1, "iss_html_url": "https://github.com/home-assistant/core/issues/54258", "iss_label": "integration: fronius\nenergy", "title": "Fronius energy sensors missing device_class and last_reset", "body": "### The problem\r\n\r\nThe Fronius integration with `sensor_type: inverter` has energy sensors, but these are not marked with any device_class or state_class and there's no last_reset.\r\n\r\neg the sensor `sensor.energy_day_fronius_inverter_1_http_solar` value `12455` and only the following attributes:\r\n\r\n```\r\nunit_of_measurement: Wh\r\nfriendly_name: Energy day Fronius Inverter 1 http://solar\r\n```\r\n\r\nThe energy sensor from `sensor_type: power_flow` has the state_class and device_class but no last_reset. eg `sensor.energy_day_fronius_power_flow_0_http_solar`. 
Also, the device_class is power but should be energy.\r\n\r\n```\r\nstate_class: measurement\r\nunit_of_measurement: Wh\r\nfriendly_name: Energy day Fronius Power flow 0 http://solar\r\ndevice_class: power\r\n```\r\n\r\nThe total energy from the inverter sensor `sensor.energy_total_fronius_inverter_1_http_solar` is correct though.\r\n\r\n### What version of Home Assistant Core has the issue?\r\n\r\ncore-2021.8.3\r\n\r\n### What was the last working version of Home Assistant Core?\r\n\r\n_No response_\r\n\r\n### What type of installation are you running?\r\n\r\nHome Assistant Container\r\n\r\n### Integration causing the issue\r\n\r\nfronius\r\n\r\n### Link to integration documentation on our website\r\n\r\n_No response_\r\n\r\n### Example YAML snippet\r\n\r\n```yaml\r\nsensor:\r\n - platform: fronius\r\n resource: http://solar\r\n monitored_conditions:\r\n - sensor_type: power_flow\r\n - sensor_type: meter\r\n - sensor_type: inverter\r\n```\r\n\r\n\r\n### Anything in the logs that might be useful for us?\r\n\r\n_No response_\r\n\r\n### Additional information\r\n\r\n_No response_", "pr_html_url": "https://github.com/home-assistant/core/pull/54758", "file_loc": {"base_commit": "013b998974c889c8a80d04637a9ea8e43c7e2fc3", "files": [{"path": "homeassistant/components/fronius/manifest.json", "status": "modified", "Loc": {"(None, None, 5)": {"mod": [5]}}}, {"path": "requirements_all.txt", "status": "modified", "Loc": {"(None, None, 1469)": {"mod": [1469]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["homeassistant/components/fronius/manifest.json"], "doc": [], "test": [], "config": ["requirements_all.txt"], "asset": []}}, {"organization": "home-assistant", "repo_name": "core", "base_commit": "832bc15daacfe1294538bb4a17600e4ada36a47e", "iss_has_pr": 1, "iss_html_url": "https://github.com/home-assistant/core/issues/83076", "iss_label": "integration: twinkly", "title": "Bug: Twinkly unavailable", "body": "### The problem\n\nTwinkly device unavailable after update to latest beta.\n\n### What version of Home Assistant Core has the issue?\n\ncore-2022.12.0b0\n\n### What was the last working version of Home Assistant Core?\n\ncore-2022.11.5\n\n### What type of installation are you running?\n\nHome Assistant Container\n\n### Integration causing the issue\n\ntwinkly\n\n### Link to integration documentation on our website\n\nhttps://www.home-assistant.io/integrations/twinkly/\n\n### Diagnostics information\n\nn/a\n\n### Example YAML snippet\n\n```yaml\nn/a\n```\n\n\n### Anything in the logs that might be useful for us?\n\n```txt\n2022-12-01 14:11:15.238 DEBUG (MainThread) [ttls.client] Authentication token refreshed\r\n2022-12-01 14:11:15.239 DEBUG (MainThread) [ttls.client] GET endpoint gestalt\r\n2022-12-01 14:11:15.264 DEBUG (MainThread) [ttls.client] GET response 200\r\n2022-12-01 14:11:15.265 DEBUG (MainThread) [homeassistant.components.twinkly.light] Updating 'Twinkly_6A5D49.collective.lan'\r\n2022-12-01 14:11:15.265 DEBUG (MainThread) [ttls.client] Authentication token still valid\r\n2022-12-01 14:11:15.265 DEBUG (MainThread) [ttls.client] GET endpoint led/mode\r\n2022-12-01 14:11:15.290 DEBUG (MainThread) [ttls.client] GET response 200\r\n2022-12-01 14:11:15.290 DEBUG (MainThread) [ttls.client] Authentication token still valid\r\n2022-12-01 14:11:15.290 DEBUG (MainThread) [ttls.client] GET endpoint 
led/out/brightness\r\n2022-12-01 14:11:15.358 DEBUG (MainThread) [ttls.client] GET response 200\r\n2022-12-01 14:11:15.359 DEBUG (MainThread) [ttls.client] Authentication token still valid\r\n2022-12-01 14:11:15.359 DEBUG (MainThread) [ttls.client] GET endpoint gestalt\r\n2022-12-01 14:11:15.374 DEBUG (MainThread) [ttls.client] GET response 200\r\n2022-12-01 14:11:15.374 DEBUG (MainThread) [ttls.client] Authentication token still valid\r\n2022-12-01 14:11:15.374 DEBUG (MainThread) [ttls.client] GET endpoint movies\n```\n\n\n### Additional information\n\nThe device reports `unavailable`.\r\nI am able to control it via the Twinkly app.", "pr_html_url": "https://github.com/home-assistant/core/pull/83145", "file_loc": {"base_commit": "832bc15daacfe1294538bb4a17600e4ada36a47e", "files": [{"path": "homeassistant/components/twinkly/const.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [11, 29]}}}, {"path": "homeassistant/components/twinkly/light.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [9, 27, 39]}, "('TwinklyLight', '__init__', 63)": {"add": [98]}, "('TwinklyLight', 'device_info', 126)": {"add": [132]}, "('TwinklyLight', None, 60)": {"add": [167], "mod": [135, 136, 137, 138]}, "('TwinklyLight', 'async_turn_on', 168)": {"mod": [181, 182, 183, 185, 187, 188, 189, 190, 191, 192, 193, 194, 196, 197, 199, 200, 201, 203, 205, 206, 207, 208, 210]}, "('TwinklyLight', 'async_update', 226)": {"mod": [271, 272]}}}, {"path": "tests/components/twinkly/__init__.py", "status": "modified", "Loc": {"('ClientMock', '__init__', 19)": {"add": [27]}, "('ClientMock', 'turn_on', 53)": {"add": [57]}, "('ClientMock', 'set_static_colour', 81)": {"add": [83]}, "('ClientMock', 'set_mode', 100)": {"mod": [103, 105]}}}, {"path": "tests/components/twinkly/test_light.py", "status": "modified", "Loc": {"(None, 'test_turn_on_with_brightness', 68)": {"add": [80, 92], "mod": [82, 94]}, "(None, 'test_turn_on_with_color_rgbw', 101)": {"add": [109, 114, 122], "mod": [102, 116]}, "(None, 'test_turn_on_with_color_rgb', 125)": {"add": [133, 138, 146], "mod": [126, 140]}, "(None, 'test_turn_on_with_effect', 149)": {"add": [163, 171], "mod": [150, 158, 165]}, "(None, None, None)": {"mod": [6]}, "(None, 'test_turn_on_off', 48)": {"mod": [58, 60]}, "(None, 'test_turn_off', 174)": {"mod": [181, 183]}, "(None, 'test_update_name', 190)": {"mod": [202, 204]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["homeassistant/components/twinkly/light.py", "tests/components/twinkly/__init__.py", "homeassistant/components/twinkly/const.py"], "doc": [], "test": ["tests/components/twinkly/test_light.py"], "config": [], "asset": []}}, {"organization": "home-assistant", "repo_name": "core", "base_commit": "20f94d7ad47082ec2f1c204130abe318c4b014dd", "iss_has_pr": 1, "iss_html_url": "https://github.com/home-assistant/core/issues/54678", "iss_label": "integration: myq", "title": "MyQ Login Issue", "body": "### The problem\n\nMyq fails to initialize with \"Authentication failed: Form containing fields for email, password and submit not found.Unable to continue login process.\". Issue started happening today for me. 
There are more people reporting it here: https://community.home-assistant.io/t/myq-login-unavailable/330322\r\n\n\n### What version of Home Assistant Core has the issue?\n\n2021.7.4\n\n### What was the last working version of Home Assistant Core?\n\n_No response_\n\n### What type of installation are you running?\n\nHome Assistant Container\n\n### Integration causing the issue\n\nmyq\n\n### Link to integration documentation on our website\n\nhttps://www.home-assistant.io/integrations/myq/\n\n### Example YAML snippet\n\n_No response_\n\n### Anything in the logs that might be useful for us?\n\n```txt\nLogger: pymyq.api\r\nSource: /usr/local/lib/python3.9/site-packages/pymyq/api.py:701\r\nFirst occurred: 9:24:30 PM (13 occurrences)\r\nLast logged: 9:37:04 PM\r\n\r\nAuthentication failed: Form containing fields for email, password and submit not found.Unable to continue login process.\n```\n\n\n### Additional information\n\n_No response_", "pr_html_url": "https://github.com/home-assistant/core/pull/55099", "file_loc": {"base_commit": "20f94d7ad47082ec2f1c204130abe318c4b014dd", "files": [{"path": "homeassistant/components/myq/manifest.json", "status": "modified", "Loc": {"(None, None, None)": {"mod": [5]}}}, {"path": "requirements_all.txt", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1626]}}}, {"path": "requirements_test_all.txt", "status": "modified", "Loc": {"(None, None, None)": {"mod": [939]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["homeassistant/components/myq/manifest.json"], "doc": [], "test": [], "config": ["requirements_test_all.txt", "requirements_all.txt"], "asset": []}}, {"organization": "home-assistant", "repo_name": "core", "base_commit": "b960ebeb8bbf791a8072e8503b417474fec134ac", "iss_has_pr": 1, "iss_html_url": "https://github.com/home-assistant/core/issues/2290", "iss_label": "", "title": "Smoke detector is not changing from \"Panic\" state", "body": "**Home Assistant release (`hass --version`):** 0.21.1\n\n**Component/platform:** Rfxtrx\n\n**Description of problem:** The first time I press the test button on my Nexa KD101 smoke detector it appears in the HASS UI, but after that the UI never changes. It keeps showing Panic and nothing happens in the UI after the first test. 
The log shows the smoke detector signal, so it is received.\n![image](https://cloud.githubusercontent.com/assets/7728206/16024413/5e301276-31ca-11e6-824a-b1020fc66239.png)\n\n![image](https://cloud.githubusercontent.com/assets/7728206/16024419/61d9ef78-31ca-11e6-9f31-20d61aeb7ffb.png)\n", "pr_html_url": "https://github.com/home-assistant/core/pull/2498", "file_loc": {"base_commit": "09a4336bc5d9a2f01f475b786492bb517ca36b5f", "files": [{"path": "homeassistant/components/sensor/rfxtrx.py", "status": "modified", "Loc": {"(None, 'setup_platform', 29)": {"mod": [50]}, "('RfxtrxSensor', 'state', 111)": {"mod": [113]}, "('RfxtrxSensor', 'device_state_attributes', 123)": {"mod": [125]}}}, {"path": "tests/components/sensor/test_rfxtrx.py", "status": "modified", "Loc": {"('TestSensorRfxtrx', 'test_several_sensors', 93)": {"add": [122, 135], "mod": [115, 116, 117, 118, 119, 120, 125, 126, 127, 128, 129, 130, 137, 138, 139, 140, 141, 142, 143]}, "('TestSensorRfxtrx', 'test_update_of_sensors', 233)": {"add": [262, 275], "mod": [255, 256, 257, 258, 259, 260, 265, 266, 267, 268, 269, 270, 277, 278, 279, 280, 281, 282, 283]}, "('TestSensorRfxtrx', 'test_old_config_sensor', 33)": {"mod": [47, 48, 49, 50, 51, 52]}, "('TestSensorRfxtrx', 'test_one_sensor', 54)": {"mod": [67, 68, 69, 70, 71, 72]}, "('TestSensorRfxtrx', 'test_one_sensor_no_datatype', 74)": {"mod": [86, 87, 88, 89, 90, 91]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["homeassistant/components/sensor/rfxtrx.py"], "doc": [], "test": ["tests/components/sensor/test_rfxtrx.py"], "config": [], "asset": []}}, {"organization": "home-assistant", "repo_name": "core", "base_commit": "d9e3c02df3a2690e74d1b606e8db0a4dd686e872", "iss_has_pr": 1, "iss_html_url": "https://github.com/home-assistant/core/issues/79275", "iss_label": "integration: unifi", "title": "Ubiquiti POE toggle (off) : aiounifi.models.event:Unsupported event key EVT_SW_PoeDisconnect", "body": "### The problem\r\n\r\nAs the title says, toggling a PoE port off throws the warning.\r\nThe ports do toggle in the end, although it can take up to 45 seconds or so,\r\nwhich is the same behavior we see in the cameras issue: extremely slow response times on the toggle.\r\n\r\n### What version of Home Assistant Core has the issue?\r\n\r\n2022.10.0b0\r\n\r\n### What was the last working version of Home Assistant Core?\r\n\r\n_No response_\r\n\r\n### What type of installation are you running?\r\n\r\nHome Assistant OS\r\n\r\n### Integration causing the issue\r\n\r\nUniFi Network\r\n\r\n### Link to integration documentation on our website\r\n\r\nhttps://www.home-assistant.io/integrations/unifi/\r\n\r\n### Diagnostics information\r\n\r\n_No response_\r\n\r\n### Example YAML snippet\r\n\r\n_No response_\r\n\r\n### Anything in the logs that might be useful for us?\r\n\r\n```txt\r\n2022-09-29 09:31:08.184 WARNING (MainThread) [aiounifi.models.event] Unsupported event key EVT_SW_PoeDisconnect\r\n2022-09-29 09:31:08.190 WARNING (MainThread) [aiounifi.models.event] Unsupported event key EVT_SW_PoeDisconnect\r\n2022-09-29 09:31:08.192 WARNING (MainThread) [aiounifi.models.event] Unsupported event key EVT_SW_PoeDisconnect\r\n2022-09-29 09:31:08.196 WARNING (MainThread) [aiounifi.models.event] Unsupported event key EVT_SW_PoeDisconnect\r\n```\r\nor:\r\n\r\n```\r\nLogger: aiounifi.models.event\r\nSource: components/unifi/controller.py:232 \r\nFirst occurred: 
09:31:08 (1 occurrences) \r\nLast logged: 09:31:08\r\n\r\nUnsupported event key EVT_SW_PoeDisconnect\r\n```\r\n\r\n\r\n\r\n### Additional information\r\n\r\nThese are the BT proxies I have in the integration:\r\n\r\n(screenshot)\r\n\r\nI have [another issue on POE switching](https://github.com/home-assistant/core/issues/79088) (Ubiquiti cams not responding correctly); not sure if it's related.\r\n\r\n", "pr_html_url": "https://github.com/home-assistant/core/pull/79487", "file_loc": {"base_commit": "069818940e5a16c412544e1e14e69b5d0964f157", "files": [{"path": "homeassistant/components/unifi/manifest.json", "status": "modified", "Loc": {"(None, None, 6)": {"mod": [6]}}}, {"path": "requirements_all.txt", "status": "modified", "Loc": {"(None, None, 279)": {"mod": [279]}}}, {"path": "requirements_test_all.txt", "status": "modified", "Loc": {"(None, None, 254)": {"mod": [254]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["homeassistant/components/unifi/manifest.json"], "doc": [], "test": [], "config": ["requirements_test_all.txt", "requirements_all.txt"], "asset": []}}, {"organization": "home-assistant", "repo_name": "core", "base_commit": "b71a0c5d4bb22f1d7ec84e892cff851ad1d6f283", "iss_has_pr": 1, "iss_html_url": "https://github.com/home-assistant/core/issues/54573", "iss_label": "integration: ambiclimate", "title": "Ambi Climate Integration not listing any Entities ", "body": "### The problem\n\nThe ambiclimate integration is appearing in the Integrations screen but not showing any entities, even though one is available and was working correctly in the previous version \r\n\r\n\r\n\n\n### What version of Home Assistant Core has the issue?\n\nHome Assistant 2021.8.6\n\n### What was the last working version of Home Assistant Core?\n\nHome Assistant 2021.8.1\n\n### What type of installation are you running?\n\nHome Assistant Core\n\n### Integration causing the issue\n\nAmbiclimate\n\n### Link to integration documentation on our website\n\nhttps://www.home-assistant.io/integrations/ambiclimate/\n\n### Example YAML snippet\n\n_No response_\n\n### Anything in the logs that might be useful for us?\n\n```txt\nLogger: homeassistant.components.climate\r\nSource: components/ambiclimate/climate.py:157\r\nIntegration: Climate (documentation, issues)\r\nFirst occurred: 2:32:41 PM (1 occurrences)\r\nLast logged: 2:32:41 PM\r\n\r\nError while setting up ambiclimate platform for climate\r\nTraceback (most recent call last):\r\n File \"/usr/src/homeassistant/homeassistant/helpers/entity_platform.py\", line 249, in _async_setup_platform\r\n await asyncio.shield(task)\r\n File \"/usr/src/homeassistant/homeassistant/components/ambiclimate/climate.py\", line 94, in async_setup_entry\r\n devs.append(AmbiclimateEntity(heater, store))\r\n File \"/usr/src/homeassistant/homeassistant/components/ambiclimate/climate.py\", line 157, in __init__\r\n self._attr_min_temp = heater.get_min_temp()\r\n File \"/usr/local/lib/python3.9/site-packages/ambiclimate/__init__.py\", line 251, in get_min_temp\r\n data = self.ir_features['data'][self.ac_data[0].get('mode').lower()]['temperature']['value']\r\nTypeError: 'NoneType' object is not subscriptable\n```\n\n\n### Additional information\n\n_No response_", "pr_html_url": "https://github.com/home-assistant/core/pull/54579", "file_loc": {"base_commit": "b71a0c5d4bb22f1d7ec84e892cff851ad1d6f283", "files": [{"path": 
"homeassistant/components/ambiclimate/climate.py", "status": "modified", "Loc": {"('AmbiclimateEntity', 'async_update', 175)": {"add": [186]}, "('AmbiclimateEntity', '__init__', 146)": {"mod": [157, 158]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["homeassistant/components/ambiclimate/climate.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "home-assistant", "repo_name": "core", "base_commit": "f18c6ae72ccf9075c3f5442b1aff2381379ca9d6", "iss_has_pr": 1, "iss_html_url": "https://github.com/home-assistant/core/issues/39472", "iss_label": "integration: arlo", "title": "arlo integration is broken", "body": "\r\n## The problem\r\n\r\n\r\n`arlo` integration fails to initialize with message\r\n\r\n```txt\r\nERROR (MainThread) [homeassistant.setup] Setup failed for arlo: Integration failed to initialize.\r\n```\r\n\r\n## Environment\r\n\r\n\r\n- Home Assistant Core release with the issue: 0.114.1\r\n- Last working Home Assistant Core release (if known): N/A\r\n- Operating environment (OS/Container/Supervised/Core): Container\r\n- Integration causing this issue: arlo\r\n- Link to integration documentation on our website: https://www.home-assistant.io/integrations/arlo/\r\n\r\n## Problem-relevant `configuration.yaml`\r\n\r\n\r\n```yaml\r\narlo: !include arlo.yaml\r\n```\r\n\r\n`arlo.yaml`\r\n```yaml\r\n username: !secret arlo_username\r\n password: !secret arlo_password\r\n```\r\n\r\n`secrets.yaml`\r\n```yaml\r\narlo_username: REDACTED\r\narlo_password: REDACTED\r\n```\r\n\r\n## Traceback/Error logs\r\n\r\n\r\n```txt\r\n> grep arlo home-assistant.log\r\n2020-08-31 18:29:28 INFO (MainThread) [homeassistant.bootstrap] Setting up stage 2: {'noonlight', 'met', 'sun', 'mobile_app', 'input_datetime', 'hue', 'ffmpeg', 'default_config', 'map', 'input_text', 'alarm_control_panel', 'notify', 'input_number', 'arlo', 'scene', 'system_health', 'input_boolean', 'zeroconf', 'weather', 'light', 'automation', 'zha', 'updater', 'tts', 'ssdp', 'input_select', 'history', 'zone', 'logbook', 'zwave', 'group', 'script'}\r\n2020-08-31 18:29:29 DEBUG (MainThread) [homeassistant.setup] Dependency arlo will wait for ['ffmpeg']\r\n2020-08-31 18:29:34 INFO (MainThread) [homeassistant.setup] Setting up arlo\r\n2020-08-31 18:29:34 DEBUG (SyncWorker_6) [pyarlo] Creating Arlo session\r\n2020-08-31 18:29:34 DEBUG (SyncWorker_6) [pyarlo] Params: {'email': 'REDACTED', 'password': 'REDACTED'}\r\n2020-08-31 18:29:34 DEBUG (SyncWorker_6) [pyarlo] Headers: {'Content-Type': 'application/json', 'Authorization': None}\r\n2020-08-31 18:29:34 DEBUG (SyncWorker_6) [pyarlo] Querying https://arlo.netgear.com/hmsweb/login/v2 on attempt: 0/3\r\n2020-08-31 18:29:34 DEBUG (SyncWorker_6) [pyarlo] Params: {'email': 'REDACTED', 'password': 'REDACTED'}\r\n2020-08-31 18:29:34 DEBUG (SyncWorker_6) [pyarlo] Headers: {'Content-Type': 'application/json', 'Authorization': None}\r\n2020-08-31 18:29:34 DEBUG (SyncWorker_6) [pyarlo] Querying https://arlo.netgear.com/hmsweb/login/v2 on attempt: 1/3\r\n2020-08-31 18:29:34 DEBUG (SyncWorker_6) [pyarlo] Params: {'email': 'REDACTED', 'password': 'REDACTED'}\r\n2020-08-31 18:29:34 DEBUG (SyncWorker_6) [pyarlo] Headers: {'Content-Type': 'application/json', 'Authorization': None}\r\n2020-08-31 18:29:34 DEBUG (SyncWorker_6) [pyarlo] Querying https://arlo.netgear.com/hmsweb/login/v2 on attempt: 2/3\r\n2020-08-31 18:29:35 DEBUG 
(SyncWorker_6) [pyarlo] Params: {'email': 'REDACTED', 'password': 'REDACTED'}\r\n2020-08-31 18:29:35 DEBUG (SyncWorker_6) [pyarlo] Headers: {'Content-Type': 'application/json', 'Authorization': None}\r\n2020-08-31 18:29:35 DEBUG (SyncWorker_6) [pyarlo] Querying https://arlo.netgear.com/hmsweb/login/v2 on attempt: 3/3\r\n2020-08-31 18:29:35 INFO (MainThread) [homeassistant.setup] Setup of domain arlo took 0.6 seconds\r\n2020-08-31 18:29:35 ERROR (MainThread) [homeassistant.setup] Setup failed for arlo: Integration failed to initialize.\r\n```\r\n\r\n## Additional information\r\n\r\nI have tested the exact same configuration using https://github.com/twrecked/hass-aarlo instead (by renaming config references from `arlo` to `aarlo`) and the integration initializes without issue. My Arlo account does not have 2FA enabled.", "pr_html_url": "https://github.com/home-assistant/core/pull/44034", "file_loc": {"base_commit": "f18c6ae72ccf9075c3f5442b1aff2381379ca9d6", "files": [{"path": "homeassistant/components/arlo/manifest.json", "status": "modified", "Loc": {"(None, None, None)": {"mod": [5]}}}, {"path": "requirements_all.txt", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1274]}}}, {"path": "requirements_test_all.txt", "status": "modified", "Loc": {"(None, None, None)": {"mod": [643]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["homeassistant/components/arlo/manifest.json"], "doc": [], "test": [], "config": ["requirements_test_all.txt", "requirements_all.txt"], "asset": []}}, {"organization": "home-assistant", "repo_name": "core", "base_commit": "91962e2681dde1b23612df06633b16aa0867c950", "iss_has_pr": 1, "iss_html_url": "https://github.com/home-assistant/core/issues/14989", "iss_label": "", "title": "Mysensor HVAC frontend shows multiple values", "body": "I think since 0.6x mysensors hvac node shows several temp settings\r\n\r\n![image](https://user-images.githubusercontent.com/15857592/41496413-4743c5d4-7148-11e8-96c4-ae99923dc6e8.png)\r\n\r\n\r\n", "pr_html_url": "https://github.com/home-assistant/core/pull/15110", "file_loc": {"base_commit": "91962e2681dde1b23612df06633b16aa0867c950", "files": [{"path": "homeassistant/components/climate/mysensors.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [29, 30, 31]}, "('MySensorsHVAC', 'supported_features', 46)": {"mod": [48]}, "('MySensorsHVAC', 'operation_list', 104)": {"mod": [106]}, "('MySensorsHVAC', 'fan_list', 114)": {"mod": [116]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["homeassistant/components/climate/mysensors.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "home-assistant", "repo_name": "core", "base_commit": "62054b84336e4d4a8a6a1ea649724e9eb86876b4", "iss_has_pr": 1, "iss_html_url": "https://github.com/home-assistant/core/issues/35779", "iss_label": "integration: denon", "title": "Error while setting up denon platform for media_player with Marantz", "body": "\r\n## The problem\r\nI've used the denon integration (not DenonAVR) to connect to my Marantz NR1609. 
It worked for a while, but doesn't now.\r\n\r\n## Environment\r\n- Home Assistant Core release with the issue: 0.109.6\r\n- Last working Home Assistant Core release (if known): 0.99 worked. Don't know when it was switched over.\r\n- Operating environment (Home Assistant/Supervised/Docker/venv): Supervised\r\n- Integration causing this issue: Denon\r\n- Link to integration documentation on our website: https://www.home-assistant.io/integrations/denon/\r\n\r\n## Problem-relevant `configuration.yaml`\r\nDidn't change this configuration recently, since it only has one variable (and a name).\r\n\r\n```\r\nmedia_player:\r\n...\r\n - platform: denon\r\n host: 192.168.xx.xx\r\n name: Marantz\r\n```\r\n\r\n## Traceback/Error logs\r\n\r\n```\r\nLogger: homeassistant.components.media_player\r\nSource: components/denon/media_player.py:117\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/src/homeassistant/homeassistant/helpers/entity_platform.py\", line 178, in _async_setup_platform\r\n await asyncio.wait_for(asyncio.shield(task), SLOW_SETUP_MAX_WAIT)\r\n File \"/usr/local/lib/python3.7/asyncio/tasks.py\", line 442, in wait_for\r\n return fut.result()\r\n File \"/usr/local/lib/python3.7/concurrent/futures/thread.py\", line 57, in run\r\n result = self.fn(*self.args, **self.kwargs)\r\n File \"/usr/src/homeassistant/homeassistant/components/denon/media_player.py\", line 85, in setup_platform\r\n if denon.update():\r\n File \"/usr/src/homeassistant/homeassistant/components/denon/media_player.py\", line 162, in update\r\n self._setup_sources(telnet)\r\n File \"/usr/src/homeassistant/homeassistant/components/denon/media_player.py\", line 117, in _setup_sources\r\n source, configured_name = line[len(\"SSFUN\") :].split(\" \", 1)\r\nValueError: not enough values to unpack (expected 2, got 1)\r\n```\r\n\r\n## Additional information\r\n\r\n", "pr_html_url": "https://github.com/home-assistant/core/pull/40514", "file_loc": {"base_commit": "62054b84336e4d4a8a6a1ea649724e9eb86876b4", "files": [{"path": "homeassistant/components/denon/media_player.py", "status": "modified", "Loc": {"('DenonDevice', '_setup_sources', 108)": {"mod": [114, 117]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["homeassistant/components/denon/media_player.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "home-assistant", "repo_name": "core", "base_commit": "e11ead410bbd9179ded05ba5e07b6de9919ec0ea", "iss_has_pr": 1, "iss_html_url": "https://github.com/home-assistant/core/issues/128046", "iss_label": "integration: gree", "title": "Gree climate does not update the data (status)", "body": "### The problem\n\nHA does not update the climate status (switch) if the change is made with a remote control. 
It does not follow the climate continuously.\n\n### What version of Home Assistant Core has the issue?\n\n2024.10.1\n\n### What was the last working version of Home Assistant Core?\n\n2024.09.\n\n### What type of installation are you running?\n\nHome Assistant OS\n\n### Integration causing the issue\n\nGree\n\n### Link to integration documentation on our website\n\nhttps://www.home-assistant.io/integrations/gree\n\n### Diagnostics information\n\n_No response_\n\n### Example YAML snippet\n\n_No response_\n\n### Anything in the logs that might be useful for us?\n\n_No response_\n\n### Additional information\n\n_No response_", "pr_html_url": "https://github.com/home-assistant/core/pull/139469", "file_loc": {"base_commit": "e11ead410bbd9179ded05ba5e07b6de9919ec0ea", "files": [{"path": "homeassistant/components/gree/const.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [22]}}}, {"path": "homeassistant/components/gree/coordinator.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [4, 26]}, "('DeviceDataUpdateCoordinator', '_async_update_data', 65)": {"add": [102], "mod": [91, 99, 108]}, "('DeviceDataUpdateCoordinator', '__init__', 38)": {"mod": [51]}}}, {"path": "tests/components/gree/test_climate.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [54]}, "(None, 'run_update', 348)": {"mod": [349]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["homeassistant/components/gree/coordinator.py", "homeassistant/components/gree/const.py"], "doc": [], "test": ["tests/components/gree/test_climate.py"], "config": [], "asset": []}}, {"organization": "home-assistant", "repo_name": "core", "base_commit": "80af2f4279828419a9112a5478029f2c506896e7", "iss_has_pr": 1, "iss_html_url": "https://github.com/home-assistant/core/issues/53678", "iss_label": "integration: sonos", "title": "Sonos Alarm switch changes immediatly after click", "body": "### The problem\n\nI have sonos alarm switches. When I click on it, the status changes, but immediately back again.\r\nAfter a while (approx. 60s) the status changes to the correct state.\r\nIn the sonos app, the alarm is switched correctly with the first click.\r\n\r\nSee the GIF below. 
The behavior when switching on and then switching off.\r\n\r\n![Animation](https://user-images.githubusercontent.com/15085873/127458431-ea5a0ed0-25e6-4459-837a-41f9fad044d8.gif)\r\n\n\n### What is version of Home Assistant Core has the issue?\n\ncore-2021.7.4\n\n### What was the last working version of Home Assistant Core?\n\n_No response_\n\n### What type of installation are you running?\n\nHome Assistant Core\n\n### Integration causing the issue\n\nSonos\n\n### Link to integration documentation on our website\n\nhttps://github.com/home-assistant/core/tree/dev/homeassistant/components/sonos\n\n### Example YAML snippet\n\n_No response_\n\n### Anything in the logs that might be useful for us?\n\n_No response_\n\n### Additional information\n\n_No response_", "pr_html_url": "https://github.com/home-assistant/core/pull/55529", "file_loc": {"base_commit": "80af2f4279828419a9112a5478029f2c506896e7", "files": [{"path": "homeassistant/components/sonos/alarms.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [11], "mod": [9]}, "('SonosAlarms', 'async_update_entities', 37)": {"add": [38], "mod": [40, 42, 45, 46, 47, 48, 49, 50, 51, 53, 54, 56, 57, 58, 59, 61, 62, 63, 64]}, "('SonosAlarms', '__init__', 23)": {"mod": [26]}, "('SonosAlarms', '__iter__', 28)": {"mod": [30, 31]}, "('SonosAlarms', 'get', 33)": {"mod": [35]}, "('SonosAlarms', None, 20)": {"mod": [37]}, "('SonosAlarms', 'update_cache', 53)": {"mod": [66, 67, 68, 70]}}}, {"path": "homeassistant/components/sonos/favorites.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [5, 11]}, "('SonosFavorites', '__init__', 23)": {"add": [26]}, "('SonosFavorites', 'update_cache', 46)": {"add": [50, 63, 64], "mod": [49, 61]}, "('SonosFavorites', None, 20)": {"mod": [33, 46, 47]}, "('SonosFavorites', 'async_update_entities', 33)": {"mod": [35, 36, 37, 38, 39, 44]}}}, {"path": "homeassistant/components/sonos/household_coordinator.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [9], "mod": [4, 7, 11]}, "('SonosHouseholdCoordinator', '__init__', 22)": {"add": [27], "mod": [26]}, "('SonosHouseholdCoordinator', None, 19)": {"add": [46], "mod": [34, 35, 37, 38, 60, 61, 62, 63, 64, 65, 66, 68, 72, 73]}, "('SonosHouseholdCoordinator', 'setup', 29)": {"mod": [32]}, "('SonosHouseholdCoordinator', '_async_poll', 47)": {"mod": [52, 53, 55]}}}, {"path": "homeassistant/components/sonos/manifest.json", "status": "modified", "Loc": {"(None, None, None)": {"mod": [6]}}}, {"path": "homeassistant/components/sonos/speaker.py", "status": "modified", "Loc": {"('SonosSpeaker', 'async_dispatch_favorites', 449)": {"mod": [453]}}}, {"path": "homeassistant/components/sonos/switch.py", "status": "modified", "Loc": {"(None, '_async_create_entity', 38)": {"add": [39, 40, 41]}}}, {"path": "requirements_all.txt", "status": "modified", "Loc": {"(None, None, None)": {"mod": [2175]}}}, {"path": "requirements_test_all.txt", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1215]}}}, {"path": "tests/components/sonos/conftest.py", "status": "modified", "Loc": {"('SonosMockEvent', 'increment_variable', 34)": {"add": [41]}, "(None, 'alarm_clock_fixture', 111)": {"add": [115], "mod": [121]}, "(None, 'alarm_clock_fixture_extended', 127)": {"add": [131], "mod": [141]}, "(None, 'music_library_fixture', 103)": {"mod": [105, 106]}}}, {"path": "tests/components/sonos/test_switch.py", "status": "modified", "Loc": {"(None, 'test_alarm_create_delete', 53)": {"add": [71], "mod": [78]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], 
"commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["homeassistant/components/sonos/alarms.py", "homeassistant/components/sonos/household_coordinator.py", "homeassistant/components/sonos/manifest.json", "tests/components/sonos/conftest.py", "homeassistant/components/sonos/switch.py", "homeassistant/components/sonos/speaker.py", "homeassistant/components/sonos/favorites.py"], "doc": [], "test": ["tests/components/sonos/test_switch.py"], "config": ["requirements_test_all.txt", "requirements_all.txt"], "asset": []}}, {"organization": "home-assistant", "repo_name": "core", "base_commit": "f29e0bf53eec22dca65e3a62336cf84797e36bf2", "iss_has_pr": 1, "iss_html_url": "https://github.com/home-assistant/core/issues/7306", "iss_label": "", "title": "Problems sending push message with HTML5 notify", "body": "**Home Assistant release (`hass --version`):**\r\n0.43.0\r\n\r\n**Python release (`python3 --version`):**\r\nPython 3.4.2\r\n\r\n**Component/platform:**\r\nHTML5 notify on Raspberry PI 3\r\n\r\n**Description of problem:**\r\nI've followed the installation instructions here: https://home-assistant.io/components/notify.html5/\r\nWhen I try to send a message, manually triggering it through the services pane, I get the error below (see traceback).\r\n\r\nAt the bottom of the trace it says:\r\n`AttributeError: /usr/lib/arm-linux-gnueabihf/libcrypto.so.1.0.0: undefined symbol: EVP_CIPHER_CTX_reset`\r\n\r\n\r\n**Expected:**\r\n\r\n\r\n**Problem-relevant `configuration.yaml` entries and steps to reproduce:**\r\nI followed the installation instructions and installed `libffi-dev`, `libpython-dev`, and `libssl-dev` before installing `pywebpush`. I've tried `pywebpush` 0.5, 0.6, 0.6.1 and 0.8. All resulted in the same error, which leads me to believe the error is elsewhere.\r\n\r\nI'm using a SSL certificate from let's encrypt and duckdns.org.\r\n\r\n```yaml\r\nnotify:\r\n - name: html5\r\n platform: html5\r\n gcm_api_key: !secret gcm_api_key \r\n gcm_sender_id: !secret gcm_sender_id\r\n```\r\n\r\n1. Go to services pane\r\n2. Use the `notify.html5push` service with data `{\"title\": \"Title\", \"message\": \"Message\"}`\r\n3. 
Call service, and the error pops up in the HASS logs.\r\n\r\n**Traceback (if applicable):**\r\n```bash\r\n\r\nApr 25 20:13:36 raspberrypi hass[2828]: INFO:homeassistant.core:Bus:Handling \r\nApr 25 20:13:37 raspberrypi hass[2828]: ERROR:homeassistant.core:Error doing job: Task exception was never retrieved\r\nApr 25 20:13:37 raspberrypi hass[2828]: Traceback (most recent call last):\r\nApr 25 20:13:37 raspberrypi hass[2828]: File \"/usr/lib/python3.4/asyncio/tasks.py\", line 233, in _step\r\nApr 25 20:13:37 raspberrypi hass[2828]: result = coro.throw(exc)\r\nApr 25 20:13:37 raspberrypi hass[2828]: File \"/srv/homeassistant/lib/python3.4/site-packages/homeassistant/core.py\", line 1015, in _event_to_service_call\r\nApr 25 20:13:37 raspberrypi hass[2828]: yield from service_handler.func(service_call)\r\nApr 25 20:13:37 raspberrypi hass[2828]: File \"/srv/homeassistant/lib/python3.4/site-packages/homeassistant/components/notify/__init__.py\", line 136, in async_notify_message\r\nApr 25 20:13:37 raspberrypi hass[2828]: yield from notify_service.async_send_message(**kwargs)\r\nApr 25 20:13:37 raspberrypi hass[2828]: File \"/usr/lib/python3.4/asyncio/futures.py\", line 388, in __iter__\r\nApr 25 20:13:37 raspberrypi hass[2828]: yield self # This tells Task to wait for completion.\r\nApr 25 20:13:37 raspberrypi hass[2828]: File \"/usr/lib/python3.4/asyncio/tasks.py\", line 286, in _wakeup\r\nApr 25 20:13:37 raspberrypi hass[2828]: value = future.result()\r\nApr 25 20:13:37 raspberrypi hass[2828]: File \"/usr/lib/python3.4/asyncio/futures.py\", line 277, in result\r\nApr 25 20:13:37 raspberrypi hass[2828]: raise self._exception\r\nApr 25 20:13:37 raspberrypi hass[2828]: File \"/usr/lib/python3.4/concurrent/futures/thread.py\", line 54, in run\r\nApr 25 20:13:37 raspberrypi hass[2828]: result = self.fn(*self.args, **self.kwargs)\r\nApr 25 20:13:37 raspberrypi hass[2828]: File \"/srv/homeassistant/lib/python3.4/site-packages/homeassistant/components/notify/html5.py\", line 349, in send_message\r\nApr 25 20:13:37 raspberrypi hass[2828]: from pywebpush import WebPusher\r\nApr 25 20:13:37 raspberrypi hass[2828]: File \"/home/homeassistant/.homeassistant/deps/pywebpush/__init__.py\", line 10, in \r\nApr 25 20:13:37 raspberrypi hass[2828]: import http_ece\r\nApr 25 20:13:37 raspberrypi hass[2828]: File \"/home/homeassistant/.homeassistant/deps/http_ece/__init__.py\", line 8, in \r\nApr 25 20:13:37 raspberrypi hass[2828]: import pyelliptic\r\nApr 25 20:13:37 raspberrypi hass[2828]: File \"/home/homeassistant/.homeassistant/deps/pyelliptic/__init__.py\", line 43, in \r\nApr 25 20:13:37 raspberrypi hass[2828]: from .openssl import OpenSSL\r\nApr 25 20:13:37 raspberrypi hass[2828]: File \"/home/homeassistant/.homeassistant/deps/pyelliptic/openssl.py\", line 310, in \r\nApr 25 20:13:37 raspberrypi hass[2828]: OpenSSL = _OpenSSL(libname)\r\nApr 25 20:13:37 raspberrypi hass[2828]: File \"/home/homeassistant/.homeassistant/deps/pyelliptic/openssl.py\", line 144, in __init__\r\nApr 25 20:13:37 raspberrypi hass[2828]: self.EVP_CIPHER_CTX_reset = self._lib.EVP_CIPHER_CTX_reset\r\nApr 25 20:13:37 raspberrypi hass[2828]: File \"/usr/lib/python3.4/ctypes/__init__.py\", line 364, in __getattr__\r\nApr 25 20:13:37 raspberrypi hass[2828]: func = self.__getitem__(name)\r\nApr 25 20:13:37 raspberrypi hass[2828]: File \"/usr/lib/python3.4/ctypes/__init__.py\", line 369, in __getitem__\r\nApr 25 20:13:37 raspberrypi hass[2828]: func = self._FuncPtr((name_or_ordinal, self))\r\nApr 25 20:13:37 raspberrypi hass[2828]: 
AttributeError: /usr/lib/arm-linux-gnueabihf/libcrypto.so.1.0.0: undefined symbol: EVP_CIPHER_CTX_reset\r\n```\r\n\r\n**Additional info:**\r\n\r\n", "pr_html_url": "https://github.com/home-assistant/core/pull/7310", "file_loc": {"base_commit": "f29e0bf53eec22dca65e3a62336cf84797e36bf2", "files": [{"path": "homeassistant/components/notify/html5.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [28]}}}, {"path": "requirements_all.txt", "status": "modified", "Loc": {"(None, None, None)": {"add": [526]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["homeassistant/components/notify/html5.py"], "doc": [], "test": [], "config": ["requirements_all.txt"], "asset": []}}, {"organization": "home-assistant", "repo_name": "core", "base_commit": "b9753a9f920f002312dc115534afdb422043007c", "iss_has_pr": 1, "iss_html_url": "https://github.com/home-assistant/core/issues/83852", "iss_label": "integration: homekit\nintegration: braviatv", "title": "Sony Bravia TV Integration: Error setting up entry for homekit", "body": "### The problem\n\nStarting in version homeassistant=='2022.11.0', the Sony Bravia TV integration can't work with Apple HomeKit (HomeKit integration) due to some errors:\r\n\r\n2022-12-12 16:07:46.673 WARNING (MainThread) [homeassistant.components.homekit.type_remotes] media_player.sony_xbr_49x835d: Reached maximum number of sources (90)\r\n2022-12-12 16:07:46.690 ERROR (MainThread) [homeassistant.config_entries] Error setting up entry Sony XBR-49X835D:21066 for homekit\r\nTraceback (most recent call last):\r\n File \"/opt/homeassistant/lib64/python3.9/site-packages/homeassistant/config_entries.py\", line 372, in async_setup\r\n result = await component.async_setup_entry(hass, self)\r\n File \"/opt/homeassistant/lib64/python3.9/site-packages/homeassistant/components/homekit/__init__.py\", line 344, in async_setup_entry\r\n await homekit.async_start()\r\n File \"/opt/homeassistant/lib64/python3.9/site-packages/homeassistant/components/homekit/__init__.py\", line 781, in async_start\r\n if not await self._async_create_accessories():\r\n File \"/opt/homeassistant/lib64/python3.9/site-packages/homeassistant/components/homekit/__init__.py\", line 959, in _async_create_accessories\r\n acc = self._async_create_single_accessory(entity_states)\r\n File \"/opt/homeassistant/lib64/python3.9/site-packages/homeassistant/components/homekit/__init__.py\", line 894, in _async_create_single_accessory\r\n acc = get_accessory(self.hass, self.driver, state, STANDALONE_AID, conf)\r\n File \"/opt/homeassistant/lib64/python3.9/site-packages/homeassistant/components/homekit/accessories.py\", line 253, in get_accessory\r\n return TYPES[a_type](hass, driver, name, state.entity_id, aid, config)\r\n File \"/opt/homeassistant/lib64/python3.9/site-packages/homeassistant/components/homekit/type_media_players.py\", line 223, in __init__\r\n super().__init__(\r\n File \"/opt/homeassistant/lib64/python3.9/site-packages/homeassistant/components/homekit/type_remotes.py\", line 133, in __init__\r\n serv_input = self.add_preload_service(\r\n File \"/opt/homeassistant/lib64/python3.9/site-packages/pyhap/accessory.py\", line 129, in add_preload_service\r\n self.add_service(service)\r\n File \"/opt/homeassistant/lib64/python3.9/site-packages/pyhap/accessory.py\", line 151, in add_service\r\n self.iid_manager.assign(s)\r\n File 
\"/opt/homeassistant/lib64/python3.9/site-packages/pyhap/iid_manager.py\", line 31, in assign\r\n iid = self.get_iid_for_obj(obj)\r\n File \"/opt/homeassistant/lib64/python3.9/site-packages/homeassistant/components/homekit/accessories.py\", line 669, in get_iid_for_obj\r\n raise RuntimeError(\r\nRuntimeError: Cannot assign IID 79 to as it is already in use by: \n\n### What version of Home Assistant Core has the issue?\n\n2022.12.3\n\n### What was the last working version of Home Assistant Core?\n\n2022.10.5\n\n### What type of installation are you running?\n\nHome Assistant Core\n\n### Integration causing the issue\n\nSony Bravia TV\n\n### Link to integration documentation on our website\n\nhttps://www.home-assistant.io/integrations/braviatv/\n\n### Diagnostics information\n\n2022-12-12 16:07:46.673 WARNING (MainThread) [homeassistant.components.homekit.type_remotes] media_player.sony_xbr_49x835d: Reached maximum number of sources (90)\r\n2022-12-12 16:07:46.690 ERROR (MainThread) [homeassistant.config_entries] Error setting up entry Sony XBR-49X835D:21066 for homekit\r\nTraceback (most recent call last):\r\n File \"/opt/homeassistant/lib64/python3.9/site-packages/homeassistant/config_entries.py\", line 372, in async_setup\r\n result = await component.async_setup_entry(hass, self)\r\n File \"/opt/homeassistant/lib64/python3.9/site-packages/homeassistant/components/homekit/__init__.py\", line 344, in async_setup_entry\r\n await homekit.async_start()\r\n File \"/opt/homeassistant/lib64/python3.9/site-packages/homeassistant/components/homekit/__init__.py\", line 781, in async_start\r\n if not await self._async_create_accessories():\r\n File \"/opt/homeassistant/lib64/python3.9/site-packages/homeassistant/components/homekit/__init__.py\", line 959, in _async_create_accessories\r\n acc = self._async_create_single_accessory(entity_states)\r\n File \"/opt/homeassistant/lib64/python3.9/site-packages/homeassistant/components/homekit/__init__.py\", line 894, in _async_create_single_accessory\r\n acc = get_accessory(self.hass, self.driver, state, STANDALONE_AID, conf)\r\n File \"/opt/homeassistant/lib64/python3.9/site-packages/homeassistant/components/homekit/accessories.py\", line 253, in get_accessory\r\n return TYPES[a_type](hass, driver, name, state.entity_id, aid, config)\r\n File \"/opt/homeassistant/lib64/python3.9/site-packages/homeassistant/components/homekit/type_media_players.py\", line 223, in __init__\r\n super().__init__(\r\n File \"/opt/homeassistant/lib64/python3.9/site-packages/homeassistant/components/homekit/type_remotes.py\", line 133, in __init__\r\n serv_input = self.add_preload_service(\r\n File \"/opt/homeassistant/lib64/python3.9/site-packages/pyhap/accessory.py\", line 129, in add_preload_service\r\n self.add_service(service)\r\n File \"/opt/homeassistant/lib64/python3.9/site-packages/pyhap/accessory.py\", line 151, in add_service\r\n self.iid_manager.assign(s)\r\n File \"/opt/homeassistant/lib64/python3.9/site-packages/pyhap/iid_manager.py\", line 31, in assign\r\n iid = self.get_iid_for_obj(obj)\r\n File \"/opt/homeassistant/lib64/python3.9/site-packages/homeassistant/components/homekit/accessories.py\", line 669, in get_iid_for_obj\r\n raise RuntimeError(\r\nRuntimeError: Cannot assign IID 79 to as it is already in use by: \n\n### Example YAML snippet\n\n_No response_\n\n### Anything in the logs that might be useful for us?\n\n```txt\n2022-12-12 16:07:46.673 WARNING (MainThread) [homeassistant.components.homekit.type_remotes] media_player.sony_xbr_49x835d: Reached maximum 
number of sources (90)\r\n2022-12-12 16:07:46.690 ERROR (MainThread) [homeassistant.config_entries] Error setting up entry Sony XBR-49X835D:21066 for homekit\r\nTraceback (most recent call last):\r\n File \"/opt/homeassistant/lib64/python3.9/site-packages/homeassistant/config_entries.py\", line 372, in async_setup\r\n result = await component.async_setup_entry(hass, self)\r\n File \"/opt/homeassistant/lib64/python3.9/site-packages/homeassistant/components/homekit/__init__.py\", line 344, in async_setup_entry\r\n await homekit.async_start()\r\n File \"/opt/homeassistant/lib64/python3.9/site-packages/homeassistant/components/homekit/__init__.py\", line 781, in async_start\r\n if not await self._async_create_accessories():\r\n File \"/opt/homeassistant/lib64/python3.9/site-packages/homeassistant/components/homekit/__init__.py\", line 959, in _async_create_accessories\r\n acc = self._async_create_single_accessory(entity_states)\r\n File \"/opt/homeassistant/lib64/python3.9/site-packages/homeassistant/components/homekit/__init__.py\", line 894, in _async_create_single_accessory\r\n acc = get_accessory(self.hass, self.driver, state, STANDALONE_AID, conf)\r\n File \"/opt/homeassistant/lib64/python3.9/site-packages/homeassistant/components/homekit/accessories.py\", line 253, in get_accessory\r\n return TYPES[a_type](hass, driver, name, state.entity_id, aid, config)\r\n File \"/opt/homeassistant/lib64/python3.9/site-packages/homeassistant/components/homekit/type_media_players.py\", line 223, in __init__\r\n super().__init__(\r\n File \"/opt/homeassistant/lib64/python3.9/site-packages/homeassistant/components/homekit/type_remotes.py\", line 133, in __init__\r\n serv_input = self.add_preload_service(\r\n File \"/opt/homeassistant/lib64/python3.9/site-packages/pyhap/accessory.py\", line 129, in add_preload_service\r\n self.add_service(service)\r\n File \"/opt/homeassistant/lib64/python3.9/site-packages/pyhap/accessory.py\", line 151, in add_service\r\n self.iid_manager.assign(s)\r\n File \"/opt/homeassistant/lib64/python3.9/site-packages/pyhap/iid_manager.py\", line 31, in assign\r\n iid = self.get_iid_for_obj(obj)\r\n File \"/opt/homeassistant/lib64/python3.9/site-packages/homeassistant/components/homekit/accessories.py\", line 669, in get_iid_for_obj\r\n raise RuntimeError(\r\nRuntimeError: Cannot assign IID 79 to as it is already in use by: \n```\n\n\n### Additional information\n\nHomeKit Integration\r\n\r\nhttps://www.home-assistant.io/integrations/homekit/", "pr_html_url": "https://github.com/home-assistant/core/pull/83890", "file_loc": {"base_commit": "b9753a9f920f002312dc115534afdb422043007c", "files": [{"path": "homeassistant/components/homekit/type_remotes.py", "status": "modified", "Loc": {"('RemoteInputSelectAccessory', None, 78)": {"add": [145]}, "(None, None, None)": {"mod": [21]}, "('RemoteInputSelectAccessory', '__init__', 81)": {"mod": [99]}, "('RemoteInputSelectAccessory', '_async_update_input_state', 159)": {"mod": [172]}}}, {"path": "tests/components/homekit/test_type_media_players.py", "status": "modified", "Loc": {"(None, 'test_media_player_television_max_sources', 460)": {"add": [514]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["homeassistant/components/homekit/type_remotes.py"], "doc": [], "test": ["tests/components/homekit/test_type_media_players.py"], "config": [], "asset": []}}, {"organization": 
"home-assistant", "repo_name": "core", "base_commit": "185f7beafc05fc355109fd417350591459650366", "iss_has_pr": 1, "iss_html_url": "https://github.com/home-assistant/core/issues/59106", "iss_label": "integration: octoprint", "title": "Error adding entities for domain sensor with platform octoprint when no tool0", "body": "### The problem\n\nOne of my octoprint instances is connect to a CNC which does not have a tool0 as there is no extruder. In the previous integration, you could just not set it to monitor this, however, in the the new UI there is no option to ignore components. As a result, I have an \"Octoprint target tool0 temp\" that is listed as \"unavailable\" and receive the following errors in my log, which I believe are related.\r\n\r\n```\r\nLogger: homeassistant.components.sensor\r\nSource: components/octoprint/sensor.py:215\r\nIntegration: Sensor (documentation, issues)\r\nFirst occurred: 2:46:05 PM (2 occurrences)\r\nLast logged: 2:46:05 PM\r\n\r\nError adding entities for domain sensor with platform octoprint\r\nError while setting up octoprint platform for sensor\r\nTraceback (most recent call last):\r\n File \"/usr/src/homeassistant/homeassistant/helpers/entity_platform.py\", line 382, in async_add_entities\r\n await asyncio.gather(*tasks)\r\n File \"/usr/src/homeassistant/homeassistant/helpers/entity_platform.py\", line 607, in _async_add_entity\r\n await entity.add_to_platform_finish()\r\n File \"/usr/src/homeassistant/homeassistant/helpers/entity.py\", line 715, in add_to_platform_finish\r\n self.async_write_ha_state()\r\n File \"/usr/src/homeassistant/homeassistant/helpers/entity.py\", line 486, in async_write_ha_state\r\n self._async_write_ha_state()\r\n File \"/usr/src/homeassistant/homeassistant/helpers/entity.py\", line 519, in _async_write_ha_state\r\n state = self._stringify_state()\r\n File \"/usr/src/homeassistant/homeassistant/helpers/entity.py\", line 492, in _stringify_state\r\n if (state := self.state) is None:\r\n File \"/usr/src/homeassistant/homeassistant/components/sensor/__init__.py\", line 273, in state\r\n value = self.native_value\r\n File \"/usr/src/homeassistant/homeassistant/components/octoprint/sensor.py\", line 215, in native_value\r\n return round(\r\nTypeError: type NoneType doesn't define __round__ method\r\n\r\n```\r\n\r\nand \r\n\r\n```\r\nLogger: homeassistant\r\nSource: components/octoprint/sensor.py:215\r\nFirst occurred: 2:46:34 PM (36 occurrences)\r\nLast logged: 3:04:04 PM\r\n\r\nError doing job: Task exception was never retrieved\r\nTraceback (most recent call last):\r\n File \"/usr/src/homeassistant/homeassistant/helpers/update_coordinator.py\", line 134, in _handle_refresh_interval\r\n await self._async_refresh(log_failures=True, scheduled=True)\r\n File \"/usr/src/homeassistant/homeassistant/helpers/update_coordinator.py\", line 265, in _async_refresh\r\n update_callback()\r\n File \"/usr/src/homeassistant/homeassistant/helpers/update_coordinator.py\", line 325, in _handle_coordinator_update\r\n self.async_write_ha_state()\r\n File \"/usr/src/homeassistant/homeassistant/helpers/entity.py\", line 486, in async_write_ha_state\r\n self._async_write_ha_state()\r\n File \"/usr/src/homeassistant/homeassistant/helpers/entity.py\", line 519, in _async_write_ha_state\r\n state = self._stringify_state()\r\n File \"/usr/src/homeassistant/homeassistant/helpers/entity.py\", line 492, in _stringify_state\r\n if (state := self.state) is None:\r\n File \"/usr/src/homeassistant/homeassistant/components/sensor/__init__.py\", line 273, in state\r\n 
value = self.native_value\r\n File \"/usr/src/homeassistant/homeassistant/components/octoprint/sensor.py\", line 215, in native_value\r\n return round(\r\nTypeError: type NoneType doesn't define __round__ method\r\n\r\n```\r\n\r\nWe should either have the option to ignore certain monitored components or have it at the very least handle this scenario gracefully.\n\n### What version of Home Assistant Core has the issue?\n\ncore-2021.11.0\n\n### What was the last working version of Home Assistant Core?\n\n2021.10.x\n\n### What type of installation are you running?\n\nHome Assistant Container\n\n### Integration causing the issue\n\nOctoprint\n\n### Link to integration documentation on our website\n\nhttps://www.home-assistant.io/integrations/octoprint\n\n### Example YAML snippet\n\n_No response_\n\n### Anything in the logs that might be useful for us?\n\n_No response_\n\n### Additional information\n\n_No response_", "pr_html_url": "https://github.com/home-assistant/core/pull/59130", "file_loc": {"base_commit": "185f7beafc05fc355109fd417350591459650366", "files": [{"path": "homeassistant/components/octoprint/sensor.py", "status": "modified", "Loc": {"('OctoPrintTemperatureSensor', 'native_value', 206)": {"add": [219], "mod": [214, 217, 218]}}}, {"path": "tests/components/octoprint/test_sensor.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [4]}, "(None, 'test_sensors', 8)": {"add": [76], "mod": [27]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["homeassistant/components/octoprint/sensor.py"], "doc": [], "test": ["tests/components/octoprint/test_sensor.py"], "config": [], "asset": []}}, {"organization": "home-assistant", "repo_name": "core", "base_commit": "dbaca51bb3b7b0cea2acd5d3cc6fd1b7a396daf9", "iss_has_pr": 1, "iss_html_url": "https://github.com/home-assistant/core/issues/45426", "iss_label": "integration: synology_dsm", "title": "Synology DSM CPU sensors report usage above 100%", "body": "## The problem\r\nCPU load for 15 and 5 minutes are reported above 100%\r\n\r\n![image](https://user-images.githubusercontent.com/4518931/105505702-9309b180-5cc9-11eb-921e-2966f9d739f1.png)\r\n\r\n## Environment\r\nRunning version 2021.1.4 as Home Assistant OS VM running on the synology nas itself\r\n\r\n## Problem-relevant `configuration.yaml`\r\nNo configuration file edited, everything done via UI\r\n\r\n## Traceback/Error logs\r\nnone\r\n\r\n## Additional information\r\n\r\n", "pr_html_url": "https://github.com/home-assistant/core/pull/45500", "file_loc": {"base_commit": "dbaca51bb3b7b0cea2acd5d3cc6fd1b7a396daf9", "files": [{"path": "homeassistant/components/synology_dsm/const.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [38], "mod": [97, 104, 111, 118, 125, 126, 132, 133, 139, 140]}}}, {"path": "homeassistant/components/synology_dsm/sensor.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [21]}, "('SynoDSMUtilSensor', 'state', 75)": {"add": [90]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["homeassistant/components/synology_dsm/const.py", "homeassistant/components/synology_dsm/sensor.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "home-assistant", "repo_name": "core", "base_commit": 
"ed3ebdfea52b222560ee6cae21c84f1e73df4d9a", "iss_has_pr": 1, "iss_html_url": "https://github.com/home-assistant/core/issues/97324", "iss_label": "integration: renault", "title": "Error setting up entry Renault for renault", "body": "### The problem\r\n\r\nRenault integration fails to start\r\n\r\n### What version of Home Assistant Core has the issue?\r\n\r\ncore-2023.7.3\r\n\r\n### What was the last working version of Home Assistant Core?\r\n\r\n_No response_\r\n\r\n### What type of installation are you running?\r\n\r\nHome Assistant OS\r\n\r\n### Integration causing the issue\r\n\r\n_No response_\r\n\r\n### Link to integration documentation on our website\r\n\r\n_No response_\r\n\r\n### Diagnostics information\r\n\r\n_No response_\r\n\r\n### Example YAML snippet\r\n\r\n_No response_\r\n\r\n### Anything in the logs that might be useful for us?\r\n\r\n```txt\r\nLogger: homeassistant.config_entries\r\nSource: components/renault/renault_hub.py:59 \r\nFirst occurred: 10:45:13 (1 occurrences) \r\nLast logged: 10:45:13\r\n\r\nError setting up entry Renault for renault\r\nTraceback (most recent call last):\r\n File \"/usr/src/homeassistant/homeassistant/config_entries.py\", line 390, in async_setup\r\n result = await component.async_setup_entry(hass, self)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/src/homeassistant/homeassistant/components/renault/__init__.py\", line 29, in async_setup_entry\r\n await renault_hub.async_initialise(config_entry)\r\n File \"/usr/src/homeassistant/homeassistant/components/renault/renault_hub.py\", line 59, in async_initialise\r\n vehicles = await self._account.get_vehicles()\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/lib/python3.11/site-packages/renault_api/renault_account.py\", line 62, in get_vehicles\r\n return await self.session.get_account_vehicles(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/lib/python3.11/site-packages/renault_api/renault_session.py\", line 188, in get_account_vehicles\r\n return await kamereon.get_account_vehicles(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/lib/python3.11/site-packages/renault_api/kamereon/__init__.py\", line 239, in get_account_vehicles\r\n await request(\r\n File \"/usr/local/lib/python3.11/site-packages/renault_api/kamereon/__init__.py\", line 152, in request\r\n http_response.raise_for_status()\r\n File \"/usr/local/lib/python3.11/site-packages/aiohttp/client_reqrep.py\", line 1005, in raise_for_status\r\n raise ClientResponseError(\r\naiohttp.client_exceptions.ClientResponseError: 504, message='Gateway Time-out', url=URL('https://api-wired-prod-1-euw1.wrd-aws.com/commerce/v1/accounts/31451f9e-34a5-45ea-83f3-e10f0e5a905e/vehicles?country=FR')\r\n\r\n```\r\n```\r\n\r\n\r\n### Additional information\r\n\r\n_No response_", "pr_html_url": "https://github.com/home-assistant/core/pull/97530", "file_loc": {"base_commit": "ed3ebdfea52b222560ee6cae21c84f1e73df4d9a", "files": [{"path": "homeassistant/components/renault/__init__.py", "status": "modified", "Loc": {"(None, 'async_setup_entry', 15)": {"mod": [29]}}}, {"path": "tests/components/renault/test_init.py", "status": "modified", "Loc": {"(None, 'test_setup_entry_exception', 63)": {"add": [78]}, "(None, None, None)": {"mod": [4]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["homeassistant/components/renault/__init__.py"], 
"doc": [], "test": ["tests/components/renault/test_init.py"], "config": [], "asset": []}}, {"organization": "home-assistant", "repo_name": "core", "base_commit": "0eae0cca2bf841f2c2cb87fc602bc8afa3557174", "iss_has_pr": 1, "iss_html_url": "https://github.com/home-assistant/core/issues/35196", "iss_label": "integration: metoffice", "title": "Met office componant does not provide future forecast data", "body": "\r\n## The problem\r\nThe met office weather componant does not provide a 5 day forecast in Home Assistant in the same way other weather integrations do (i.e. darkSky) even though the API is capable of returning 5 day forecast data.\r\n\r\n\r\n## Environment\r\n\r\n\r\n- Home Assistant Core release with the issue: 0.108.9\r\n- Last working Home Assistant Core release (if known):\r\n- Operating environment (Home Assistant/Supervised/Docker/venv): HassOS VM, Supervisor 220\r\n- Integration causing this issue: Met Office\r\n- Link to integration documentation on our website: https://www.home-assistant.io/integrations/metoffice/\r\n\r\n## Problem-relevant `configuration.yaml`\r\n\r\n\r\n```yaml\r\nweather:\r\n - platform: metoffice\r\n api_key: !secret api_metoffice\r\n latitude: !secret metoffice_lat\r\n longitude: !secret metoffice_lon\r\n```\r\n\r\n## Traceback/Error logs\r\n\r\n\r\n```txt\r\n\r\n```\r\n\r\n## Additional information\r\nIt may be worth noting that the Met Office have launched a new API called DataHub which will eventually replace the current DataPoint API\r\nhttps://metoffice.apiconnect.ibmcloud.com/metoffice/production/\r\n", "pr_html_url": "https://github.com/home-assistant/core/pull/50876", "file_loc": {"base_commit": "0eae0cca2bf841f2c2cb87fc602bc8afa3557174", "files": [{"path": "homeassistant/components/metoffice/__init__.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [2, 4, 16, 18], "mod": [14, 15]}, "(None, 'async_setup_entry', 25)": {"add": [50], "mod": [33, 34, 35, 38, 41, 42, 48, 49, 54, 55, 56]}}}, {"path": "homeassistant/components/metoffice/config_flow.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [3, 6], "mod": [11]}, "(None, 'validate_input', 16)": {"mod": [25, 26, 27, 30]}}}, {"path": "homeassistant/components/metoffice/const.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [33], "mod": [28, 29]}}}, {"path": "homeassistant/components/metoffice/data.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [3, 5, 7, 9]}, "('MetOfficeData', None, 12)": {"mod": [13, 15, 16, 17, 18, 20]}, "('MetOfficeData', '__init__', 20)": {"mod": [22, 23, 24, 26, 27, 28, 30, 31, 32, 33, 35, 36, 37, 39, 40, 41, 42, 43, 44, 45, 46, 47, 49, 50, 52, 53, 54, 55, 56, 57, 59, 61, 62, 63, 65, 66, 67, 68, 69, 71, 72, 73, 74, 75, 76, 77, 78]}}}, {"path": "homeassistant/components/metoffice/manifest.json", "status": "modified", "Loc": {"(None, None, None)": {"mod": [5]}}}, {"path": "homeassistant/components/metoffice/sensor.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [14, 22], "mod": [13, 20, 21]}, "(None, 'async_setup_entry', 80)": {"mod": [88]}, "('MetOfficeCurrentSensor', None, 95)": {"mod": [95, 98, 186, 187, 188, 189, 190, 191, 193, 194, 195, 197, 198, 199, 200, 201, 202, 203, 205, 206, 207, 208]}, "('MetOfficeCurrentSensor', '__init__', 98)": {"mod": [100, 101, 104, 105, 107, 108, 109]}, "('MetOfficeCurrentSensor', 'state', 122)": {"mod": [127, 129, 131, 132, 134, 138, 141, 142]}, "('MetOfficeCurrentSensor', 'extra_state_attributes', 174)": {"mod": [178, 180, 181, 182, 183]}, 
"('MetOfficeCurrentSensor', 'entity_registry_enabled_default', 211)": {"mod": [213, 215, 216, 217, 218]}}}, {"path": "homeassistant/components/metoffice/weather.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [5, 14], "mod": [2, 4, 12, 13]}, "(None, 'async_setup_entry', 20)": {"mod": [28, 29, 30, 31]}, "('MetOfficeWeather', None, 37)": {"mod": [37, 40, 141, 142, 143, 144, 145, 146, 148, 149, 150, 151, 152, 154, 155, 156, 157, 159, 160, 161, 162]}, "('MetOfficeWeather', '__init__', 40)": {"mod": [42, 43, 45, 46, 48]}, "('MetOfficeWeather', 'condition', 61)": {"mod": [63, 64, 65, 66, 67, 68, 69, 70, 71]}, "('MetOfficeWeather', 'temperature', 74)": {"mod": [76, 77, 78, 79, 80]}, "('MetOfficeWeather', 'visibility', 88)": {"mod": [91, 92]}, "('MetOfficeWeather', 'pressure', 101)": {"mod": [103, 104, 105, 106, 107]}, "('MetOfficeWeather', 'humidity', 110)": {"mod": [112, 113, 114, 115, 116]}, "('MetOfficeWeather', 'wind_speed', 119)": {"mod": [121, 122, 123, 124, 125]}, "('MetOfficeWeather', 'wind_bearing', 128)": {"mod": [130, 131, 132, 133, 134]}}}, {"path": "requirements_all.txt", "status": "modified", "Loc": {"(None, None, None)": {"mod": [476]}}}, {"path": "requirements_test_all.txt", "status": "modified", "Loc": {"(None, None, None)": {"mod": [270]}}}, {"path": "tests/components/metoffice/test_config_flow.py", "status": "modified", "Loc": {"(None, 'test_form_already_configured', 56)": {"add": [70]}}}, {"path": "tests/components/metoffice/test_sensor.py", "status": "modified", "Loc": {"(None, 'test_one_sensor_site_running', 26)": {"add": [31, 37]}, "(None, 'test_two_sensor_sites_running', 68)": {"add": [74, 75, 80, 83]}}}, {"path": "tests/components/metoffice/test_weather.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [11, 119]}, "(None, 'test_site_cannot_connect', 24)": {"add": [28], "mod": [38, 40]}, "(None, 'test_site_cannot_update', 49)": {"add": [55, 60, 73], "mod": [70, 79]}, "(None, 'test_one_weather_site_running', 87)": {"add": [93, 99], "mod": [109, 110]}, "(None, 'test_two_weather_sites_running', 125)": {"add": [131, 132, 137, 140, 176], "mod": [156, 157, 167, 168]}}}, {"path": "tests/fixtures/metoffice.json", "status": "modified", "Loc": {"(None, None, None)": {"add": [1497]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["homeassistant/components/metoffice/weather.py", "homeassistant/components/metoffice/sensor.py", "tests/fixtures/metoffice.json", "homeassistant/components/metoffice/const.py", "homeassistant/components/metoffice/data.py", "homeassistant/components/metoffice/config_flow.py", "homeassistant/components/metoffice/__init__.py", "homeassistant/components/metoffice/manifest.json"], "doc": [], "test": ["tests/components/metoffice/test_weather.py", "tests/components/metoffice/test_sensor.py", "tests/components/metoffice/test_config_flow.py"], "config": ["requirements_test_all.txt", "requirements_all.txt"], "asset": []}}, {"organization": "zylon-ai", "repo_name": "private-gpt", "base_commit": "5a695e9767e24778ffd725ab195bf72916e27ba5", "iss_has_pr": 1, "iss_html_url": "https://github.com/zylon-ai/private-gpt/issues/133", "iss_label": "", "title": "Need help with ingest.py", "body": "Running into this error - python ingest.py\r\n\r\n-Traceback (most recent call last):\r\n File \"C:\\Users\\krstr\\OneDrive\\Desktop\\privategpt\\privateGPT\\privateGPT\\ingest.py\", line 
11, in \r\n from constants import CHROMA_SETTINGS\r\n File \"C:\\Users\\krstr\\OneDrive\\Desktop\\privategpt\\privateGPT\\privateGPT\\constants.py\", line 11, in \r\n CHROMA_SETTINGS = Settings(\r\n File \"pydantic\\env_settings.py\", line 39, in pydantic.env_settings.BaseSettings.__init__\r\n File \"pydantic\\main.py\", line 341, in pydantic.main.BaseModel.__init__\r\npydantic.error_wrappers.ValidationError: 1 validation error for Settings\r\npersist_directory\r\n none is not an allowed value (type=type_error.none.not_allowed) -\r\n\r\nI've installed the requirements and changed the .env file and followed the readme up to this point. Seeing some people solve but not answer what fixed the above errors. Help?", "pr_html_url": "https://github.com/zylon-ai/private-gpt/pull/168", "file_loc": {"base_commit": "5a695e9767e24778ffd725ab195bf72916e27ba5", "files": [{"path": "requirements.txt", "status": "modified", "Loc": {"(None, None, None)": {"add": [6]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}}, {"organization": "zylon-ai", "repo_name": "private-gpt", "base_commit": "57a829a8e8cf5c31410c256ae59e0eda9f129a41", "iss_has_pr": 1, "iss_html_url": "https://github.com/zylon-ai/private-gpt/issues/1258", "iss_label": "", "title": "Add a list of supported file types to README and Docs", "body": "Maybe I'm blind, but I couldn't find a list of the file types supported by privateGPT.\r\n\r\nOne might add a list with the supported file types to the [README.md](https://github.com/imartinez/privateGPT/blob/main/README.md) and [PrivateGPT Docs](https://docs.privategpt.dev/).\r\n\r\nKinda related https://github.com/imartinez/privateGPT/issues/451 and apologize at this place, I haven't had the time yet to look further into a first implementation proposal.", "pr_html_url": "https://github.com/zylon-ai/private-gpt/pull/1264", "file_loc": {"base_commit": "57a829a8e8cf5c31410c256ae59e0eda9f129a41", "files": [{"path": "Makefile", "status": "modified", "Loc": {"(None, None, None)": {"mod": [49]}}}, {"path": "fern/docs.yml", "status": "modified", "Loc": {"(None, None, None)": {"add": [0, 6], "mod": [8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 30]}}}, {"path": "fern/docs/pages/sdks.mdx", "status": "renamed", "Loc": {"(None, None, None)": {"mod": [1]}}}, {"path": "fern/docs/pages/ingestion.mdx", "status": "removed", "Loc": {}}, {"path": "fern/docs/pages/installation.mdx", "status": "renamed", "Loc": {"(None, None, None)": {"add": [131, 135, 208], "mod": [12, 13, 14, 41, 46, 51, 53, 55, 56, 58, 60, 61, 62, 64, 65, 66, 67, 69, 70, 72, 74, 75, 77, 79, 80, 81, 83, 84, 85, 86, 125, 129, 130, 160, 197, 225, 227]}}}, {"path": "fern/docs/pages/welcome.mdx", "status": "renamed", "Loc": {"(None, None, None)": {"mod": [3, 41]}}}, {"path": "fern/docs/pages/quickstart.mdx", "status": "removed", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": ["fern/docs.yml", "fern/docs/pages/welcome.mdx", "fern/docs/pages/quickstart.mdx", "fern/docs/pages/sdks.mdx", "fern/docs/pages/ingestion.mdx", "fern/docs/pages/installation.mdx"], "test": [], "config": ["Makefile"], "asset": []}}, 
{"organization": "zylon-ai", "repo_name": "private-gpt", "base_commit": "60e6bd25eb7e54a6d62ab0a9642c09170c1729e3", "iss_has_pr": 1, "iss_html_url": "https://github.com/zylon-ai/private-gpt/issues/448", "iss_label": "bug\nprimordial", "title": "ingest.py extracts only the first row from the CSV files", "body": "My suggestion for fixing the bug:\r\n\r\n1. Modify the load_single_document function as follows:\r\n\r\ndef load_single_document(file_path: str) -> List[Document]:\r\n ext = \".\" + file_path.rsplit(\".\", 1)[-1]\r\n if ext in LOADER_MAPPING:\r\n loader_class, loader_args = LOADER_MAPPING[ext]\r\n loader = loader_class(file_path, **loader_args)\r\n return loader.load()\r\n\r\n raise ValueError(f\"Unsupported file extension '{ext}'\")\r\n \r\n2. Modify the load_documents function as follows: \r\n \r\n def load_documents(source_dir: str, ignored_files: List[str] = []) -> List[Document]:\r\n \"\"\"\r\n Loads all documents from the source documents directory, ignoring specified files\r\n \"\"\"\r\n all_files = []\r\n for ext in LOADER_MAPPING:\r\n all_files.extend(\r\n glob.glob(os.path.join(source_dir, f\"**/*{ext}\"), recursive=True)\r\n )\r\n filtered_files = [file_path for file_path in all_files if file_path not in ignored_files]\r\n\r\n with Pool(processes=os.cpu_count()) as pool:\r\n results = []\r\n with tqdm(total=len(filtered_files), desc='Loading new documents', ncols=80) as pbar:\r\n for i, docs in enumerate(pool.imap_unordered(load_single_document, filtered_files)):\r\n results.extend(docs)\r\n pbar.update()\r\n\r\n return results", "pr_html_url": "https://github.com/zylon-ai/private-gpt/pull/560", "file_loc": {"base_commit": "60e6bd25eb7e54a6d62ab0a9642c09170c1729e3", "files": [{"path": "ingest.py", "status": "modified", "Loc": {"(None, 'load_single_document', 84)": {"mod": [84, 89]}, "(None, 'load_documents', 94)": {"mod": [108, 109]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["ingest.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "zylon-ai", "repo_name": "private-gpt", "base_commit": "86c2dcfe1b33ac467558487a1df408abee0d2321", "iss_has_pr": 1, "iss_html_url": "https://github.com/zylon-ai/private-gpt/issues/875", "iss_label": "bug", "title": "I got a Traceback error while running privateGPT on Ubuntu 22.04", "body": "While running privateGPT.py, the error started after \"gptj_model_load: model size = 3609.38 MB / num tensors = 285\". The error reads as follows:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/dennis/privateGPT/privateGPT.py\", line 83, in \r\n main()\r\n File \"/home/dennis/privateGPT/privateGPT.py\", line 38, in main\r\n llm = GPT4All(model=model_path, n_ctx=model_n_ctx, backend='gptj', n_batch=model_n_batch, callbacks=callbacks, verbose=False)\r\n File \"/home/dennis/.local/lib/python3.10/site-packages/langchain/load/serializable.py\", line 74, in __init__\r\n super().__init__(**kwargs)\r\n File \"pydantic/main.py\", line 341, in pydantic.main.BaseModel.__init__\r\npydantic.error_wrappers.ValidationError: 1 validation error for GPT4All\r\nn_ctx\r\n extra fields not permitted (type=value_error.extra)\r\n\r\nI have no idea what's happening here. 
Could anyone be able to fix it so that I can try privateGPT on my Ubuntu 22.04 on an old iMac late 2012?\r\n", "pr_html_url": "https://github.com/zylon-ai/private-gpt/pull/881", "file_loc": {"base_commit": "86c2dcfe1b33ac467558487a1df408abee0d2321", "files": [{"path": "privateGPT.py", "status": "modified", "Loc": {"(None, 'main', 25)": {"mod": [36, 38]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["privateGPT.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "zylon-ai", "repo_name": "private-gpt", "base_commit": "fdb45741e521d606b028984dbc2f6ac57755bb88", "iss_has_pr": 1, "iss_html_url": "https://github.com/zylon-ai/private-gpt/issues/15", "iss_label": "", "title": "llama.cpp: can't use mmap because tensors are not aligned; convert to new format to avoid this", "body": "llama.cpp: loading model from ./models/ggml-model-q4_0.bin\r\nllama.cpp: can't use mmap because tensors are not aligned; convert to new format to avoid this\r\nllama_model_load_internal: format = 'ggml' (old version with low tokenizer quality and no mmap support)\r\nllama_model_load_internal: n_vocab = 32000\r\nllama_model_load_internal: n_ctx = 512\r\nllama_model_load_internal: n_embd = 4096\r\nllama_model_load_internal: n_mult = 256\r\nllama_model_load_internal: n_head = 32\r\nllama_model_load_internal: n_layer = 32\r\nllama_model_load_internal: n_rot = 128\r\nllama_model_load_internal: ftype = 2 (mostly Q4_0)\r\nllama_model_load_internal: n_ff = 11008\r\nllama_model_load_internal: n_parts = 1\r\nllama_model_load_internal: model size = 7B\r\nllama_model_load_internal: ggml ctx size = 4113739.11 KB\r\nllama_model_load_internal: mem required = 5809.32 MB (+ 2052.00 MB per state)\r\n...................................................................................................\r\nI am using a recommended model, but I get this error message. 
How do you think I could solve it?", "pr_html_url": "https://github.com/zylon-ai/private-gpt/pull/224", "file_loc": {"base_commit": "fdb45741e521d606b028984dbc2f6ac57755bb88", "files": [{"path": "README.md", "status": "modified", "Loc": {"(None, None, None)": {"mod": [4, 15, 17, 23, 25, 28, 58, 62, 86]}}}, {"path": "example.env", "status": "modified", "Loc": {"(None, None, None)": {"add": [4], "mod": [2]}}}, {"path": "ingest.py", "status": "modified", "Loc": {"(None, 'main', 71)": {"add": [79], "mod": [75, 76, 81, 84, 87, 90]}, "(None, None, None)": {"mod": [22]}}}, {"path": "privateGPT.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [3, 11]}, "(None, 'main', 20)": {"mod": [21, 22]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["ingest.py", "privateGPT.py"], "doc": ["README.md"], "test": [], "config": ["example.env"], "asset": []}}, {"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "c999bac02c5a4f755b2a82488a975e91c988ffd8", "iss_has_pr": 1, "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/9506", "iss_label": "site-bug", "title": "[TikTok] Failed to parse JSON/ No video formats found", "body": "### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE\r\n\r\n- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\\* field\r\n\r\n### Checklist\r\n\r\n- [X] I'm asking a question and **not** reporting a bug or requesting a feature\r\n- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)\r\n- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))\r\n- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates\r\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\r\n\r\n### Please make sure the question is worded well enough to be understood\r\n\r\n#### EDIT:\r\n\r\nyt-dlp's TikTok extractor is failing to parse JSON from the feed API endpoint even on nightly/master or with passing `--extractor-args \"tiktok:api_hostname=api22-normal-c-useast2a.tiktokv.com\"`\r\n\r\n
      original log for reference\r\n\r\n```shell\r\nyt-dlp -f \"bv*[vcodec^=avc]+ba[ext=m4a]/b[ext=mp4]/b\" https://www.tiktok.com/@pouveronica/video/7322479967147740459\r\n[TikTok] Extracting URL: https://www.tiktok.com/@pouveronica/video/7322479967147740459\r\n[TikTok] 7322479967147740459: Downloading video feed\r\nWARNING: [TikTok] Expecting value in '': line 1 column 1 (char 0). Retrying... (attempt 1 of 4)\r\n[TikTok] 7322479967147740459: Downloading video feed\r\nWARNING: [TikTok] Expecting value in '': line 1 column 1 (char 0). Retrying... (attempt 2 of 4)\r\n[TikTok] 7322479967147740459: Downloading video feed\r\nWARNING: [TikTok] Expecting value in '': line 1 column 1 (char 0). Retrying... (attempt 3 of 4)\r\n[TikTok] 7322479967147740459: Downloading video feed\r\nWARNING: [TikTok] 7322479967147740459: Failed to parse JSON (caused by JSONDecodeError(\"Expecting value in '': line 1 column 1 (char 0)\")); trying with webpage\r\n[TikTok] 7322479967147740459: Downloading webpage\r\n[info] 7322479967147740459: Downloading 1 format(s): download\r\nERROR: unable to open for writing: [Errno 2] No such file or directory: 'Replying to @Vy Puthny some key differences in finance and accounting \ud83d\ude03 #hr #humanresources #hrinsight #hrrole #hrtips #hrtrend #hrknowledge #learning #careergrowth #accounting #finance #manpoweroutsourcing #eor @Nica - \u1793\u17b7\u1780\u17b6 [7322479967147740459].mp4.part'\r\n```\r\n\r\n
      \r\n\r\n### Provide verbose output that clearly demonstrates the problem\r\n\r\n- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU `)\r\n- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead\r\n- [ ] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below\r\n\r\n### Complete Verbose Output\r\n\r\n```shell\r\n[debug] Command-line config: ['-v', '-U', '-o', '%(title).200B.%(ext)s', 'https://www.tiktok.com/@mix_editor_5/video/7342789941371571462']\r\n[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8\r\n[debug] yt-dlp version master@2024.03.20.232831 from yt-dlp/yt-dlp-master-builds [07f5b2f75] (win_exe)\r\n[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 1.1.1k 25 Mar 2021)\r\n[debug] exe versions: ffmpeg 6.1.1-full_build-www.gyan.dev (setts), ffprobe 6.1.1-full_build-www.gyan.dev, phantomjs 2.5.0, rtmpdump 2.4\r\n[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.02.02, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.31.0, sqlite3-3.35.5, urllib3-2.2.1, websockets-12.0\r\n[debug] Proxy map: {}\r\n[debug] Request Handlers: urllib, requests, websockets, curl_cffi\r\n[debug] Loaded 1806 extractors\r\n[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-master-builds/releases/latest\r\nLatest version: master@2024.03.20.232831 from yt-dlp/yt-dlp-master-builds\r\nyt-dlp is up to date (master@2024.03.20.232831 from yt-dlp/yt-dlp-master-builds)\r\n[TikTok] Extracting URL: https://www.tiktok.com/@mix_editor_5/video/7342789941371571462\r\n[TikTok] 7342789941371571462: Downloading video feed\r\nWARNING: [TikTok] Expecting value in '': line 1 column 1 (char 0). Retrying... (attempt 1 of 4)\r\n[TikTok] 7342789941371571462: Downloading video feed\r\nWARNING: [TikTok] Expecting value in '': line 1 column 1 (char 0). Retrying... (attempt 2 of 4)\r\n[TikTok] 7342789941371571462: Downloading video feed\r\nWARNING: [TikTok] Expecting value in '': line 1 column 1 (char 0). Retrying... (attempt 3 of 4)\r\n[TikTok] 7342789941371571462: Downloading video feed\r\nWARNING: [TikTok] 7342789941371571462: Failed to parse JSON (caused by JSONDecodeError(\"Expecting value in '': line 1 column 1 (char 0)\")); trying with webpage\r\n[TikTok] 7342789941371571462: Downloading webpage\r\n[debug] [TikTok] Found universal data for rehydration\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[debug] Default format spec: bestvideo*+bestaudio/best\r\n[info] 7342789941371571462: Downloading 1 format(s): download\r\n[debug] Invoking http downloader on \"https://v16-webapp-prime.tiktok.com/video/tos/useast2a/tos-useast2a-ve-0068c004/okMCLjLWAqjFQ5CIXaAfaAiMNgbSzfFCh48fSV/?a=1988&ch=0&cr=3&dr=0&lr=tiktok_m&cd=0%7C0%7C1%7C&cv=1&br=1800&bt=900&bti=ODszNWYuMDE6&cs=0&ds=3&ft=4fUEKMFx8Zmo0H.5Y94jV..7rpWrKsd.&mime_type=video_mp4&qs=0&rc=NGQ5OTY1NTdnaDM0Ojs1ZUBpMzs4N3Q5cnlncTMzNzczM0AzLi9gMi4vNjUxX14uLV4yYSNqZWpoMmQ0NWdgLS1kMTZzcw%3D%3D&btag=e00088000&expire=1711056619&l=20240321153003FA72D5DD8E2EFA514E6F&ply_type=2&policy=2&signature=eb207c9a24f5509f1e4668cbac840d00&tk=tt_chain_token\"\r\n[debug] File locking is not supported. 
Proceeding without locking\r\n[download] Destination: #CapCut #\ud83e\udd7a\ud83d\udc94 #new #trending #plz #plz #\ud83d\ude2d\ud83d\ude2d #viralvideo #plunfrezzmyaccount\ud83d\ude4f\ud83e\udd7a #plzvirulvideo\ud83d\ude25 #plzviral\ud83e\udd7a\ud83e\udd7a\ud83d\ude4f\ud83d\ude4fforyoupage \u29f8\u29f8 \ud835\udc6b\ud835\udc86\ud835\udc82\ud835\udc93 \ud835\udc7b\ud835\udc8a\ud835\udc8c\ud835\udc95\ud835\udc90\ud835\udc8c \ud835\udc7b.mp4\r\n[download] 100% of 1.62MiB in 00:00:00 at 14.79MiB/s\r\n```", "pr_html_url": "https://github.com/yt-dlp/yt-dlp/pull/9960", "file_loc": {"base_commit": "3e35aa32c74bc108375be8c8b6b3bfc90dfff1b4", "files": [{"path": "yt_dlp/extractor/tiktok.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [21]}, "('TikTokBaseIE', None, 33)": {"add": [241]}, "('TikTokBaseIE', '_parse_aweme_video_app', 242)": {"add": [298], "mod": [246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 315, 316]}, "('TikTokBaseIE', '_parse_aweme_video_web', 412)": {"add": [422, 425, 429, 433, 442, 453], "mod": [427, 436, 437, 438, 457, 472, 473, 474, 475, 476]}, "('TikTokBaseIE', 'extract_addr', 272)": {"mod": [273, 287]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["yt_dlp/extractor/tiktok.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "135dfa2c7ebc9284db940713c0dc6cbc19ca5fa4", "iss_has_pr": 1, "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/2350", "iss_label": "site-enhancement", "title": "[YouTube] [ChannelTab] extract subscriber count and channel views", "body": "### Checklist\n\n- [X] I'm reporting a site feature request\n- [X] I've verified that I'm running yt-dlp version **2021.12.27**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))\n- [X] I've checked that all provided URLs are alive and playable in a browser\n- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required\n\n### Region\n\n_No response_\n\n### Example URLs\n\nhttps://www.youtube.com/channel/UCR1IuLEqb6UEA_zQ81kwXfg\n\n### Description\n\nI have implemented a scraper with BeautifulSoup to extract some additional metadata from YouTube Channel for a [project](https://github.com/bbilly1/tubearchivist/blob/7028621bc576936c1b9808336b481a00252ab997/tubearchivist/home/src/index.py#L88) of mine. I was wondering if there would be interest to integrate that into yt-dlp? Both of these fields are extractable from the page without an API call.\r\n\r\n- Channel Subscribers: This information is available in the ytInitialData script in header -> c4TabbedHeaderRenderer -> subscriberCountText\r\n - The number is unfortunately truncated and as a string, e.g. 
\"2.03M subscribers\"\r\n - As far as I have observed the unit can be *M* for *millions*, *K* for *thousands* and none for below 1000.\r\n - That is language specific, but as far as I know yt-dlp defaults to english already?\r\n- Channel Views: This information is in ytInitialData at itemSectionRenderer -> contents -> viewCountText.\r\n - This is as a string and will need some logic to extract the numbers.\r\n\r\nAdditionally, for extracting banners there is already an issue open: #2237.\r\n\r\nThis would be a great addition to have upstream directly in yt-dlp.\n\n### Verbose log\n\n```shell\nDoes not apply...\n```\n", "pr_html_url": "https://github.com/yt-dlp/yt-dlp/pull/2399", "file_loc": {"base_commit": "135dfa2c7ebc9284db940713c0dc6cbc19ca5fa4", "files": [{"path": "README.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [1140]}}}, {"path": "yt_dlp/extractor/common.py", "status": "modified", "Loc": {"('InfoExtractor', None, 94)": {"add": [262]}}}, {"path": "yt_dlp/extractor/youtube.py", "status": "modified", "Loc": {"('YoutubeIE', None, 852)": {"add": [1034, 1077, 1129, 1161, 1188, 1215, 1246, 1284, 1316, 1347, 1515, 1573, 1604, 1667, 1776, 1831, 1864, 1908, 1943, 1969, 2010, 2053]}, "('YoutubeTabIE', None, 4200)": {"add": [4238, 4254, 4270, 4286, 4339, 4355, 4371, 4387, 4403, 4419, 4436, 4617, 4798, 4818], "mod": [4596, 4607, 4612, 4614]}, "('YoutubeBaseInfoExtractor', '_extract_visitor_data', 511)": {"mod": [517]}, "('YoutubeIE', '_real_extract', 3118)": {"mod": [3490]}, "('YoutubeTabBaseInfoExtractor', '_extract_from_tabs', 3894)": {"mod": [3943]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["yt_dlp/extractor/youtube.py", "yt_dlp/extractor/common.py"], "doc": ["README.md"], "test": [], "config": [], "asset": []}}, {"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "c8a61a910096c77ce08dad5e1b2fbda5eb964156", "iss_has_pr": 1, "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/9635", "iss_label": "site-bug", "title": "Vkplay Unsupported URL", "body": "### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE\n\n- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\\* field\n\n### Checklist\n\n- [X] I'm asking a question and **not** reporting a bug or requesting a feature\n- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)\n- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))\n- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. 
{"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "c8a61a910096c77ce08dad5e1b2fbda5eb964156", "iss_has_pr": 1, "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/9635", "iss_label": "site-bug", "title": "Vkplay Unsupported URL", "body": "### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE\n\n- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\\* field\n\n### Checklist\n\n- [X] I'm asking a question and **not** reporting a bug or requesting a feature\n- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)\n- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))\n- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n\n### Please make sure the question is worded well enough to be understood\n\nWARNING: [generic] Falling back on generic information extractor\r\n[generic] records: Extracting information\r\nERROR: Unsupported URL: \r\n[in#0 @ 00000274e3899b80] Error opening input: Invalid data found when processing input\r\nError opening input file -.\r\nError opening input files: Invalid data found when processing input\r\n\r\nDownload is not working anymore?\n\n### Provide verbose output that clearly demonstrates the problem\n\n- [ ] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU `)\n- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead\n- [ ] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below\n\n### Complete Verbose Output\n\n_No response_", "pr_html_url": "https://github.com/yt-dlp/yt-dlp/pull/9636", "file_loc": {"base_commit": "c8a61a910096c77ce08dad5e1b2fbda5eb964156", "files": [{"path": "yt_dlp/extractor/vk.py", "status": "modified", "Loc": {"('VKPlayBaseIE', None, 709)": {"add": [709]}, "('VKPlayIE', None, 767)": {"add": [785], "mod": [768, 779]}, "('VKPlayLiveIE', None, 804)": {"add": [824], "mod": [805, 816]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["yt_dlp/extractor/vk.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "93864403ea7c982be9a78af38835ac0747ed12d1", "iss_has_pr": 1, "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/2043", "iss_label": "bug\nexternal issue", "title": "[ceskatelevize.cz] Cannot download manifest - SSLV3_ALERT_HANDSHAKE_FAILURE", "body": "I'm sorry, but I think that the extractor is still broken. For instance:\r\n\r\n```\r\n$ yt-dlp --verbose \"https://www.ceskatelevize.cz/porady/10095426857-interview-ct24/221411058041217/\"\r\n[debug] Command-line config: ['--verbose', 'https://www.ceskatelevize.cz/porady/10095426857-interview-ct24/221411058041217/']\r\n[debug] Encodings: locale UTF-8, fs utf-8, out utf-8, err utf-8, pref UTF-8\r\n[debug] yt-dlp version 2021.12.01 [91f071af6]\r\n[debug] Python version 3.9.9 (CPython 64bit) - Linux-5.15.5-gentoo-x86_64-x86_64-AMD_Ryzen_9_3900X_12-Core_Processor-with-glibc2.33\r\n[debug] exe versions: ffmpeg 4.4.1 (setts), ffprobe 4.4.1\r\n[debug] Optional libraries: Crypto, sqlite\r\n[debug] Proxy map: {}\r\n[debug] [CeskaTelevize] Extracting URL: https://www.ceskatelevize.cz/porady/10095426857-interview-ct24/221411058041217/\r\n[CeskaTelevize] 221411058041217: Downloading webpage\r\n[CeskaTelevize] 221411058041217: Downloading webpage\r\n[CeskaTelevize] 221411058041217: Downloading webpage\r\n[CeskaTelevize] 221411058041217: Downloading JSON metadata\r\n[CeskaTelevize] 221411058041217: Downloading JSON metadata\r\n[CeskaTelevize] 221411058041217: Downloading MPD manifest\r\nWARNING: [CeskaTelevize] Failed to download MPD manifest: \r\n[CeskaTelevize] 221411058041217: Downloading JSON metadata\r\n[CeskaTelevize] 221411058041217: Downloading JSON metadata\r\n[CeskaTelevize] 221411058041217: Downloading m3u8 information\r\nWARNING: [CeskaTelevize] Failed to download m3u8 information: \r\n[download] Downloading playlist: 17. 
prosinec - Interview \u010cT24 | \u010cesk\u00e1 televize\r\n[CeskaTelevize] playlist 17. prosinec - Interview \u010cT24 | \u010cesk\u00e1 televize: Collected 1 videos; downloading 1 of them\r\n[download] Downloading video 1 of 1\r\nERROR: [CeskaTelevize] 61924494877975106: No video formats found!; please report this issue on https://github.com/yt-dlp/yt-dlp . Make sure you are using the latest version; see https://github.com/yt-dlp/yt-dlp on how to update. Be sure to call yt-dlp with the --verbose flag and include its complete output.\r\n```\r\n\r\n_Originally posted by @zippy2 in https://github.com/yt-dlp/yt-dlp/issues/1899#issuecomment-997226548_", "pr_html_url": "https://github.com/yt-dlp/yt-dlp/pull/1904", "file_loc": {"base_commit": "93864403ea7c982be9a78af38835ac0747ed12d1", "files": [{"path": "yt_dlp/extractor/ceskatelevize.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [15, 16]}, "('CeskaTelevizeIE', '_real_extract', 89)": {"mod": [102, 103, 104, 105, 106]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["yt_dlp/extractor/ceskatelevize.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "195c22840c594c8f9229cb47ffec2a8984c53a0c", "iss_has_pr": 1, "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/2239", "iss_label": "bug", "title": "--no-continue is bugged and does nothing (--force-overwrites also)", "body": "### Checklist\n\n- [X] I'm reporting a bug unrelated to a specific site\n- [X] I've verified that I'm running yt-dlp version **2021.12.27**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))\n- [X] I've checked that all provided URLs are alive and playable in a browser\n- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)\n- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n\n### Description\n\nI originally posted this issue months ago in Discord but didn't create an issue for it. 
\r\n--no-continue doesn't seem to do anything right now.\r\n--force-overwrites which is meant to include it doesn't either.\r\n\r\nThe only reason I want this to work is to workaround https://github.com/yt-dlp/yt-dlp/issues/2001 .\r\n\r\n--force-overwrites log: https://pastebin.com/raw/PZ03eWb2\n\n### Verbose log\n\n```shell\n[debug] Command-line config: ['-v', 'https://www.funimation.com/v/k-on/disband-the-club', '--config-location', 'funimation.conf', '--exec', 'start /B yt-dlp --config-location funimation.conf -q --fixup force --embed-subs --load-info-json %(__infojson_filename)q', '--write-subs', '--download-archive', 'archive.txt', '--ffmpeg-location', 'D:\\\\dummy', '--write-info-json', '-P', 'D:\\\\Temp']\r\n[debug] | Config \"funimation.conf\": ['--config-location', 'base.conf', '-f', '(bv*+ba/b)[format_note=Uncut] / (bv*+ba/b)', '-n', '--cookies', 'cookies-funimation-com.txt', '--extractor-args', 'funimation:language=english', '--no-continue']\r\n[debug] | | Config \"base.conf\": ['-o', '%(extractor)s\\\\%(title)s%(myindex)s.%(ext)s', '-P', 'D:\\\\Videos', '-P', 'temp:D:\\\\Temp', '--parse-metadata', 'original_url:#%(playlist_index)s', '--parse-metadata', ' - %(playlist_index)d:^(?P - \\\\d+)$', '--parse-metadata', '%(series)s - S%(season_number)sE%(episode_number)s - %(episode)s:^(?P.+ - S\\\\d+E\\\\d+ - \\\\S+.*)$', '--output-na', '', '--sub-langs', 'enUS,en', '--user-agent', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:95.0) Gecko/20100101 Firefox/95.0', '--fragment-retries', '500', '-N', '5', '-R', '0', '--no-mtime']\r\n[debug] Encodings: locale cp1252, fs utf-8, out utf-8, err utf-8, pref cp1252\r\n[debug] yt-dlp version 2021.12.27 [6223f67a8] (win_exe)\r\n[debug] Lazy loading extractors is disabled\r\n[debug] Python version 3.10.1 (CPython 64bit) - Windows-10-10.0.19044-SP0\r\nWARNING: ffmpeg-location D:\\dummy does not exist! Continuing without ffmpeg.\r\n[debug] exe versions: none\r\n[debug] Optional libraries: Cryptodome, mutagen, sqlite, websockets\r\n[debug] Proxy map: {}\r\n[debug] Loading archive file 'archive.txt'\r\n[funimation:page] Logging in\r\n[debug] [funimation:page] Extracting URL: https://www.funimation.com/v/k-on/disband-the-club\r\n[funimation:page] k-on_disband-the-club: Downloading JSON metadata\r\n[debug] [Funimation] Extracting URL: https://www.funimation.com/player/1135013\r\n[Funimation] 1135013: Downloading player webpage for 1135013\r\n[Funimation] disband-the-club: Downloading Uncut english (1135013) JSON\r\n[Funimation] disband-the-club: Downloading Uncut english (1135013) m3u8 information\r\n[debug] Sort order given by extractor: lang, source\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, source, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), acodec, filesize, fs_approx, tbr, vbr, abr, asr, proto, vext, aext, hasaud, id\r\n[debug] Searching for '\\\\#(?P<playlist_index>.+)' in '%(original_url)s'\r\nWARNING: Could not interpret 'original_url' as '#%(playlist_index)s'\r\n[debug] Searching for '^(?P<myindex> - \\\\d+)$' in ' - %(playlist_index)d'\r\nWARNING: Could not interpret ' - %(playlist_index)d' as '^(?P<myindex> - \\\\d+)$'\r\n[debug] Searching for '^(?P<title>.+ - S\\\\d+E\\\\d+ - \\\\S+.*)$' in '%(series)s - S%(season_number)sE%(episode_number)s - %(episode)s'\r\n[MetadataParser] Parsed title from '%(series)s - S%(season_number)sE%(episode_number)s - %(episode)s': 'K-On! 
- S1E1 - Disband the Club!'\r\n[info] 1133115: Downloading 1 format(s): 1135013-hls-6819+1135013-hls-audio-aacl-256-English\r\n[info] Writing video metadata as JSON to: D:\\Temp\\Funimation\\K-On! - S1E1 - Disband the Club!.info.json\r\nWARNING: ffmpeg-location D:\\dummy does not exist! Continuing without ffmpeg.\r\nWARNING: You have requested merging of multiple formats but ffmpeg is not installed. The formats won't be merged.\r\n[debug] Invoking downloader on \"https://vmfst-api.prd.funimationsvc.com/FunimationStoreFront/V1757083/26d6f23c-a90f-45c6-80e0-e2c01864b291strnv-hl154_streaming_video_1920_1080_7800000_index.m3u8?Key-Pair-Id=APKAIHNXECY27H4O6NIA&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9kMzNldDc3ZXZkOWJnZy5jbG91ZGZyb250Lm5ldC9GdW5pbWF0aW9uU3RvcmVGcm9udC9WMTc1NzA4My8qIiwiQ29uZGl0aW9uIjp7IkRhdGVMZXNzVGhhbiI6eyJBV1M6RXBvY2hUaW1lIjoxNjQxNDY1MzI2fX19XX0_&Signature=i5t~O2TJZ~8XNKwFb~3huANBs5rUvs2nq2OqNsOHecNz4NkJKDJdj2sGC0zCLFu9~Kmu05wsgY-5xNChkwJ3BEM42lqiNdf~F1CJm4vJikyAVXSq--SHUHNjKXq5BWaGVMwWDd~1YHtBWlyoplYO9HnInG6~mIMMhAMGcTBkOBv1el9r2JcpI4V5CMPvCOA2TaDwKr9HeVTmHnVOOfApAfKfRR60CRsVVXgFBNdT6NGP6myy9ITdZzYinqcnggNiO2mza6jtotnokX0tOnrefthhkLikAcpzUnDZg0YC4Uj2AfTAxK~A6yGPvTp2~iR6yGayibhqFIq~-XiZK48KMw__&rt=1450032\"\r\n[hlsnative] Downloading m3u8 manifest\r\n[hlsnative] Total fragments: 727\r\n[download] Destination: D:\\Temp\\Funimation\\K-On! - S1E1 - Disband the Club!.f1135013-hls-6819.mp4\r\nWARNING: The download speed shown is only of one thread. This is a known issue and patches are welcome\r\n[download] D:\\Temp\\Funimation\\K-On! - S1E1 - Disband the Club!.f1135013-hls-6819.mp4.part-Frag17 has already been downloaded\r\n[download] D:\\Temp\\Funimation\\K-On! - S1E1 - Disband the Club!.f1135013-hls-6819.mp4.part-Frag18 has already been downloaded\r\n[download] 2.2% of ~1.12GiB at 21.40MiB/s ETA Unknown (frag 16/727)[download] D:\\Temp\\Funimation\\K-On! - S1E1 - Disband the Club!.f1135013-hls-6819.mp4.part-Frag19 has already been downloaded\r\n[download] 2.3% of ~1.12GiB at 781.98MiB/s ETA Unknown (frag 17/727)[download] D:\\Temp\\Funimation\\K-On! - S1E1 - Disband the Club!.f1135013-hls-6819.mp4.part-Frag20 has already been downloaded\r\n[download] D:\\Temp\\Funimation\\K-On! 
- S1E1 - Disband the Club!.f1135013-hls-6819.mp4.part-Frag21 has already been downloaded\r\n[download] 0.3% of ~39.60GiB at 11.05MiB/s ETA 56:42 (frag 20/727)\n```\n", "pr_html_url": "https://github.com/yt-dlp/yt-dlp/pull/2901", "file_loc": {"base_commit": "195c22840c594c8f9229cb47ffec2a8984c53a0c", "files": [{"path": "yt_dlp/downloader/fragment.py", "status": "modified", "Loc": {"('FragmentFD', '_prepare_frag_download', 165)": {"mod": [181]}}}, {"path": "yt_dlp/downloader/http.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [18], "mod": [8]}, "('HttpFD', 'real_download', 28)": {"add": [61]}, "('HttpFD', 'establish_connection', 89)": {"add": [93], "mod": [102, 127, 128, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143]}}}, {"path": "yt_dlp/utils.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [5254]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["yt_dlp/utils.py", "yt_dlp/downloader/http.py", "yt_dlp/downloader/fragment.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "1f6b90ed8db7006e2f2d539c41c8f3e59058dd00", "iss_has_pr": 1, "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/4587", "iss_label": "good first issue\nsite-enhancement", "title": "9gag.com - NineGagIE - InfoExtractor - add Uploader info to the returned metadata", "body": "### Checklist\n\n- [X] I'm requesting a site-specific feature\n- [X] I've verified that I'm running yt-dlp version **2022.07.18** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)\n- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details\n- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. 
DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required\n\n### Region\n\n_No response_\n\n### Example URLs\n\nhttps://9gag.com/gag/a119eY2 (Anonymous Uploader)\r\nhttps://9gag.com/gag/ajgp66G (Non-Anonymous Uploader)\n\n### Provide a description that is worded well enough to be understood\n\n9gag recently added the uploader of a post on the website.\r\n\r\nI would like to know if you could add uploader information to the returned metadata if the uploader isn't anonymous.\r\n\r\nThat could be done with the already extracted JSON stored by the variable `post`.\r\nThen it's a matter of getting `creator = post.get('creator')`; if it is not `null`, we can get:\r\n\r\n`uploader = creator['fullName']`,\r\n`uploader_id = creator['username']`,\r\n`uploader_url = url_or_none(creator['profileUrl'])`\n\n### Provide verbose output that clearly demonstrates the problem\n\n- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)\n- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below\n\n### Complete Verbose Output\n\n```shell\nNo verbose log; it's just a feature request to add additional metadata to an already existing InfoExtractor.\n```\n", "pr_html_url": "https://github.com/yt-dlp/yt-dlp/pull/4597", "file_loc": {"base_commit": "1f6b90ed8db7006e2f2d539c41c8f3e59058dd00", "files": [{"path": "yt_dlp/extractor/ninegag.py", "status": "modified", "Loc": {"('NineGagIE', None, 12)": {"add": [13, 23, 34], "mod": [20, 25]}, "('NineGagIE', '_real_extract', 37)": {"add": [119], "mod": [49, 101, 113, 117, 122, 123, 124]}, "(None, None, None)": {"mod": [6]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["yt_dlp/extractor/ninegag.py"], "doc": [], "test": [], "config": [], "asset": []}}, 
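The extraction outlined in the 9gag request above maps almost directly onto code. Here is a hedged sketch using the keys quoted in the issue (not necessarily those of the merged PR); the helper name is invented for illustration, while `url_or_none` is an existing utility in `yt_dlp.utils`.

```python
from yt_dlp.utils import url_or_none

def extract_uploader_fields(post):
    """Build uploader metadata from the already-extracted `post` dict."""
    creator = post.get('creator')  # missing/None for anonymous uploaders
    if not creator:
        return {}
    return {
        'uploader': creator.get('fullName'),
        'uploader_id': creator.get('username'),
        'uploader_url': url_or_none(creator.get('profileUrl')),
    }

# Hypothetical non-anonymous payload, shaped as the issue describes:
print(extract_uploader_fields({'creator': {
    'fullName': 'Example User',
    'username': 'exampleuser',
    'profileUrl': 'https://9gag.com/u/exampleuser',
}}))
```

Merged into the extractor's info dict, these fields would simply be absent for anonymous posts, which matches the behaviour the reporter asks for.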
{"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "2b18a8c59018a863cfac5b959ee14e474a7a87bc", "iss_has_pr": 1, "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/417", "iss_label": "bug", "title": "[Broken] [YouTube] Can't get full Chat Replay when using cookies", "body": "<!--\r\n\r\n######################################################################\r\n WARNING!\r\n IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE\r\n######################################################################\r\n\r\n-->\r\n\r\n\r\n## Checklist\r\n\r\n<!--\r\nCarefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:\r\n- First off, make sure you are using the latest version of yt-dlp. Run `yt-dlp --version` and ensure your version is 2021.06.09. If it's not, see https://github.com/yt-dlp/yt-dlp on how to update. Issues with outdated version will be REJECTED.\r\n- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.\r\n- Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in https://github.com/yt-dlp/yt-dlp.\r\n- Search the bugtracker for similar issues: https://github.com/yt-dlp/yt-dlp. DO NOT post duplicates.\r\n- Finally, put x into all relevant boxes like this [x] (Don't forget to delete the empty space)\r\n-->\r\n\r\n- [x] I'm reporting a broken site support\r\n- [x] I've verified that I'm running yt-dlp version **2021.06.09**\r\n- [x] I've checked that all provided URLs are alive and playable in a browser\r\n- [x] I've checked that all URLs and arguments with special characters are properly quoted or escaped\r\n- [x] I've searched the bugtracker for similar issues including closed ones\r\n\r\n\r\n## Verbose log\r\n\r\n<!--\r\nProvide the complete verbose output of yt-dlp that clearly demonstrates the problem.\r\nAdd the `-v` flag to your command line you run yt-dlp with (`yt-dlp -v <your command line>`), copy the WHOLE output and insert it below. It should look similar to this:\r\n [debug] System config: []\r\n [debug] User config: []\r\n [debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKcj']\r\n [debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251\r\n [debug] yt-dlp version 2021.06.09\r\n [debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2\r\n [debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4\r\n [debug] Proxy map: {}\r\n <more lines>\r\n-->\r\n\r\n```\r\nPASTE VERBOSE LOG HERE\r\n\r\n```\r\n<!--\r\nDo not remove the above ```\r\n-->\r\n\r\n\r\n## Description\r\n\r\n<!--\r\nProvide an explanation of your issue in an arbitrary form. Provide any additional information, suggested solution and as much context and examples as possible.\r\nIf work on your issue requires account credentials please provide them or explain how one can obtain them.\r\n-->\r\n\r\nWhen I download the chat replay with my cookies, it doesn't start from the beginning; instead it starts from a point a few minutes into the video, usually really close to where I am in the YouTube player (within a few seconds to a minute).\r\n\r\nWhen I don't use cookies, it can get the chat replay from the start without any problem", "pr_html_url": "https://github.com/yt-dlp/yt-dlp/pull/437", "file_loc": {"base_commit": "2b18a8c59018a863cfac5b959ee14e474a7a87bc", "files": [{"path": "yt_dlp/downloader/youtube_live_chat.py", "status": "modified", "Loc": {"('YoutubeLiveChatFD', 'real_download', 22)": {"add": [61, 144, 146], "mod": [93, 94, 95, 96, 98, 157, 158, 159, 160, 161]}, "('YoutubeLiveChatFD', 'download_and_parse_fragment', 98)": {"mod": [105, 109]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["yt_dlp/downloader/youtube_live_chat.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "f6c73aad5f1a67544bea137ebd9d1e22e0e56567", "iss_has_pr": 1, "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/9512", "iss_label": "site-bug", "title": "[Globo] Unable to download JSON metadata: HTTP Error 404: Not Found", "body": "### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE\n\n- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\\* field\n\n### Checklist\n\n- [X] I'm reporting that yt-dlp is broken on a **supported** site\n- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))\n- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login 
details\n- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)\n- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required\n\n### Region\n\nBrazil\n\n### Provide a description that is worded well enough to be understood\n\n```shell\r\nyt-dlp --cookies-from-browser chrome -F https://globoplay.globo.com/v/12450434\r\n```\n\n### Provide verbose output that clearly demonstrates the problem\n\n- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)\n- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead\n- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below\n\n### Complete Verbose Output\n\n```shell\n[debug] Command-line config: ['-vU', '--cookies-from-browser', 'chrome', '-F', 'https://globoplay.globo.com/v/12450434']\r\n[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8\r\n[debug] yt-dlp version stable@2024.03.10 from yt-dlp/yt-dlp [615a84447] (pip)\r\n[debug] Python 3.12.2 (CPython x86_64 64bit) - macOS-14.2.1-x86_64-i386-64bit (OpenSSL 3.2.1 30 Jan 2024)\r\n[debug] exe versions: ffmpeg 6.1.1 (setts), ffprobe 6.1.1, rtmpdump 2.4\r\n[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.02.02, mutagen-1.47.0, requests-2.31.0, sqlite3-3.45.2, urllib3-2.2.1, websockets-12.0\r\n[debug] Proxy map: {}\r\nExtracting cookies from chrome\r\n[debug] Extracting cookies from: \"/Users/USER/Library/Application Support/Google/Chrome/Default/Cookies\"\r\n[debug] using find-generic-password to obtain password from OSX keychain\r\nExtracted 3210 cookies from chrome\r\n[debug] cookie version breakdown: {'v10': 3254, 'other': 0, 'unencrypted': 51}\r\n[debug] Request Handlers: urllib, requests, websockets\r\n[debug] Loaded 1803 extractors\r\n[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest\r\nLatest version: stable@2024.03.10 from yt-dlp/yt-dlp\r\nyt-dlp is up to date (stable@2024.03.10 from yt-dlp/yt-dlp)\r\n[Globo] Extracting URL: https://globoplay.globo.com/v/12450434\r\n[Globo] 12450434: Getting cookies\r\n[Globo] 12450434: Downloading JSON metadata\r\n[Globo] 12450434: Downloading security hash for 12450434\r\nERROR: [Globo] 12450434: Unable to download JSON metadata: HTTP Error 404: Not Found (caused by <HTTPError 404: Not Found>)\r\n File \"/usr/local/Cellar/yt-dlp/2024.03.10/libexec/lib/python3.12/site-packages/yt_dlp/extractor/common.py\", line 732, in extract\r\n ie_result = self._real_extract(url)\r\n ^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/Cellar/yt-dlp/2024.03.10/libexec/lib/python3.12/site-packages/yt_dlp/extractor/globo.py\", line 99, in _real_extract\r\n security = self._download_json(\r\n ^^^^^^^^^^^^^^^^^^^^\r\n File 
\"/usr/local/Cellar/yt-dlp/2024.03.10/libexec/lib/python3.12/site-packages/yt_dlp/extractor/common.py\", line 1086, in download_content\r\n res = getattr(self, download_handle.__name__)(url_or_request, video_id, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/Cellar/yt-dlp/2024.03.10/libexec/lib/python3.12/site-packages/yt_dlp/extractor/common.py\", line 1050, in download_handle\r\n res = self._download_webpage_handle(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/Cellar/yt-dlp/2024.03.10/libexec/lib/python3.12/site-packages/yt_dlp/extractor/common.py\", line 920, in _download_webpage_handle\r\n urlh = self._request_webpage(url_or_request, video_id, note, errnote, fatal, data=data, headers=headers, query=query, expected_status=expected_status)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/Cellar/yt-dlp/2024.03.10/libexec/lib/python3.12/site-packages/yt_dlp/extractor/common.py\", line 877, in _request_webpage\r\n raise ExtractorError(errmsg, cause=err)\r\n\r\n File \"/usr/local/Cellar/yt-dlp/2024.03.10/libexec/lib/python3.12/site-packages/yt_dlp/extractor/common.py\", line 864, in _request_webpage\r\n return self._downloader.urlopen(self._create_request(url_or_request, data, headers, query))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/Cellar/yt-dlp/2024.03.10/libexec/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py\", line 4101, in urlopen\r\n return self._request_director.send(req)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/Cellar/yt-dlp/2024.03.10/libexec/lib/python3.12/site-packages/yt_dlp/networking/common.py\", line 115, in send\r\n response = handler.send(request)\r\n ^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/Cellar/yt-dlp/2024.03.10/libexec/lib/python3.12/site-packages/yt_dlp/networking/_helper.py\", line 204, in wrapper\r\n return func(self, *args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/Cellar/yt-dlp/2024.03.10/libexec/lib/python3.12/site-packages/yt_dlp/networking/common.py\", line 326, in send\r\n return self._send(request)\r\n ^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/Cellar/yt-dlp/2024.03.10/libexec/lib/python3.12/site-packages/yt_dlp/networking/_requests.py\", line 351, in _send\r\n raise HTTPError(res, redirect_loop=max_redirects_exceeded)\r\nyt_dlp.networking.exceptions.HTTPError: HTTP Error 404: Not Found\n```\n", "pr_html_url": "https://github.com/yt-dlp/yt-dlp/pull/11795", "file_loc": {"base_commit": "f6c73aad5f1a67544bea137ebd9d1e22e0e56567", "files": [{"path": "yt_dlp/extractor/globo.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [5, 11, 14, 15], "mod": [1, 2, 4, 8, 10]}, "('GloboIE', None, 18)": {"add": [20], "mod": [19, 22, 28, 29, 41, 42, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 65, 66, 68, 70, 71, 72, 73]}, "('GloboIE', '_real_extract', 80)": {"mod": [83, 84, 85, 87, 88, 89, 90, 91, 93, 96, 97, 98, 99, 103, 104, 105, 107, 109, 110, 111, 112, 113, 114, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 128, 129, 130, 131, 132, 133, 134, 136, 137, 138, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 158, 159, 160, 164, 165, 166, 167]}, "('GloboArticleIE', None, 173)": {"mod": [174]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": 
{"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["yt_dlp/extractor/globo.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "8c53322cda75394a8d551dde20b2529ee5ad6e89", "iss_has_pr": 1, "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/5744", "iss_label": "site-enhancement\npatch-available", "title": "[ok.ru] Download subtitle", "body": "### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE\n\n- [X] I understand that I will be **blocked** if I remove or skip any mandatory\\* field\n\n### Checklist\n\n- [X] I'm requesting a site-specific feature\n- [X] I've verified that I'm running yt-dlp version **2022.11.11** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)\n- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details\n- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required\n\n### Region\n\n_No response_\n\n### Example URLs\n\nhttps://ok.ru/video/4249587550747\n\n### Provide a description that is worded well enough to be understood\n\nDownload subtitle from ok.ru\n\n### Provide verbose output that clearly demonstrates the problem\n\n- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)\n- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below\n\n### Complete Verbose Output\n\n```shell\n[debug] Command-line config: ['https://ok.ru/video/4249587550747', '--no-download', '--list-subs', '-vU']\r\n[debug] User config \"/home/nir/.config/yt-dlp/config\": ['--no-overwrites', '--restrict-filenames', '--merge-output-format', 'mkv', '--paths', '~/Downloads/youtube_dl', '--output', '%(title)s_%(id)s_%(autonumber)d.%(ext)s']\r\n[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8\r\n[debug] yt-dlp version 2022.11.11 [8b644025b] (source)\r\n[debug] Lazy loading extractors is disabled\r\n[debug] Plugins: ['SamplePluginIE', 'SamplePluginPP']\r\n[debug] Git HEAD: 935bac1e\r\n[debug] Python 3.8.10 (CPython x86_64 64bit) - Linux-5.15.0-56-generic-x86_64-with-glibc2.29 (OpenSSL 1.1.1f 31 Mar 2020, glibc 2.31)\r\n[debug] exe versions: ffmpeg 4.2.7, ffprobe 4.2.7\r\n[debug] Optional libraries: certifi-2019.11.28, secretstorage-2.3.1, sqlite3-2.6.0\r\n[debug] Proxy map: {}\r\n[debug] Loaded 1731 extractors\r\n[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest\r\nLatest version: 2022.11.11, Current version: 2022.11.11\r\nyt-dlp is up to date (2022.11.11)\r\n[Odnoklassniki] Extracting URL: https://ok.ru/video/4249587550747\r\n[Odnoklassniki] 4249587550747: Downloading desktop webpage\r\n[Odnoklassniki] 4249587550747: Downloading m3u8 information\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, filesize, fs_approx, tbr, vbr, abr, asr, proto, vext, aext, hasaud, source, id\r\n4249587550747 has no subtitles\n```\n", "pr_html_url": 
"https://github.com/yt-dlp/yt-dlp/pull/5920", "file_loc": {"base_commit": "8c53322cda75394a8d551dde20b2529ee5ad6e89", "files": [{"path": "yt_dlp/extractor/odnoklassniki.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [13]}, "('OdnoklassnikiIE', None, 21)": {"add": [155, 204]}, "('OdnoklassnikiIE', '_extract_desktop', 222)": {"add": [296, 307]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["yt_dlp/extractor/odnoklassniki.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "5f2da312fa66d6f001ca4d8d79ee281b9b62e9ed", "iss_has_pr": 1, "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/840", "iss_label": "enhancement", "title": "UnicodeDecodeError when configuration saved as UTF-8 and OS default encoding is GBK", "body": "<!--\r\n\r\n######################################################################\r\n WARNING!\r\n IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE\r\n######################################################################\r\n\r\n-->\r\n\r\n\r\n## Checklist\r\n\r\n<!--\r\nCarefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:\r\n- First of, make sure you are using the latest version of yt-dlp. Run `yt-dlp --version` and ensure your version is 2021.08.10. If it's not, see https://github.com/yt-dlp/yt-dlp on how to update. Issues with outdated version will be REJECTED.\r\n- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.\r\n- Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in https://github.com/yt-dlp/yt-dlp.\r\n- Search the bugtracker for similar issues: https://github.com/yt-dlp/yt-dlp. DO NOT post duplicates.\r\n- Read bugs section in FAQ: https://github.com/yt-dlp/yt-dlp\r\n- Finally, put x into all relevant boxes like this [x] (Dont forget to delete the empty space)\r\n-->\r\n\r\n- [x] I'm reporting a bug unrelated to a specific site\r\n- [x] I've verified that I'm running yt-dlp version **2021.08.10**\r\n- [x] I've checked that all provided URLs are alive and playable in a browser\r\n- [x] The provided URLs do not contain any DRM to the best of my knowledge\r\n- [x] I've checked that all URLs and arguments with special characters are properly quoted or escaped\r\n- [x] I've searched the bugtracker for similar bug reports including closed ones\r\n- [x] I've read bugs section in FAQ\r\n\r\n## Verbose log\r\n\r\n<!--\r\nProvide the complete verbose output of yt-dlp that clearly demonstrates the problem.\r\nAdd the `-v` flag to your command line you run yt-dlp with (`yt-dlp -v <your command line>`), copy the WHOLE output and insert it below. 
It should look similar to this:\r\n [debug] System config: []\r\n [debug] User config: []\r\n [debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKc']\r\n [debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251\r\n [debug] yt-dlp version 2021.08.10\r\n [debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2\r\n [debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4\r\n [debug] Proxy map: {}\r\n <more lines>\r\n-->\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"yt_dlp\\__main__.py\", line 19, in <module>\r\n File \"yt_dlp\\__init__.py\", line 750, in main\r\n File \"yt_dlp\\__init__.py\", line 73, in _real_main\r\n File \"yt_dlp\\options.py\", line 1496, in parseOpts\r\n File \"yt_dlp\\options.py\", line 1476, in get_configs\r\n File \"yt_dlp\\options.py\", line 1471, in read_options\r\n File \"yt_dlp\\options.py\", line 60, in _readOptions\r\nUnicodeDecodeError: 'gbk' codec can't decode byte 0xa8 in position 16: illegal multibyte sequence\r\n[40668] Failed to execute script '__main__' due to unhandled exception!\r\n```\r\n<!--\r\nDo not remove the above ```\r\n-->\r\n\r\n\r\n## Description\r\n\r\n<!--\r\nProvide an explanation of your issue in an arbitrary form. Please make sure the description is worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient. Provide any additional information, suggested solution and as much context and examples as possible.\r\nIf work on your issue requires account credentials please provide them or explain how one can obtain them.\r\n-->\r\n\r\nI'm a Chinese user and I tried to write comments in my native language in the configuration file `yt-dlp.conf`, putting the file along with the executable:\r\n\r\n```bash\r\n# \u4ee3\u7406\u670d\u52a1\u5668 (which means Proxy Server)\r\n--proxy 127.0.0.1:29970\r\n```\r\nThen I saved the file as UTF-8. While using yt-dlp with or without any arguments, it reports `UnicodeDecodeError`.\r\n\r\nBecause I'm using Chinese as my display language, the default encoding of my system is GBK. It seems that yt-dlp tries to decode the configuration file as GBK, regardless of the actual encoding.\r\n\r\nChanging the code page to 65001 (UTF-8) with `chcp 65001` in `cmd` doesn't work. 
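The fix this report points toward is to stop assuming the locale encoding and instead detect the file's encoding, trying UTF-8 first (a BOM, when present, is the strongest signal) and only then falling back to `locale.getpreferredencoding()`. Below is a minimal sketch of that idea; `read_config_text` is a hypothetical helper, not yt-dlp's actual code, though the linked PR does modify `Config.read_file` in `yt_dlp/utils.py`:

```python
import codecs
import locale

def read_config_text(path):
    """Hypothetical helper: decode a config file that may be UTF-8 even
    though the OS-preferred encoding is something else (e.g. GBK)."""
    with open(path, 'rb') as f:
        raw = f.read()
    # An explicit BOM is the most reliable signal of the real encoding.
    for bom, encoding in (
            (codecs.BOM_UTF8, 'utf-8'),
            (codecs.BOM_UTF16_LE, 'utf-16-le'),
            (codecs.BOM_UTF16_BE, 'utf-16-be')):
        if raw.startswith(bom):
            return raw[len(bom):].decode(encoding)
    # No BOM: prefer strict UTF-8, falling back to the locale default
    # so existing locale-encoded config files keep working.
    try:
        return raw.decode('utf-8')
    except UnicodeDecodeError:
        return raw.decode(locale.getpreferredencoding(False))
```

Trying strict UTF-8 first is reasonably safe here, since non-ASCII GBK text only rarely happens to form valid UTF-8, so legacy GBK configs would still decode through the fallback.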
Removing CJK characters or changing the file encoding into GBK solves the problem, but I think saving the file as UTF-8 might be more reasonable.\r\n\r\n", "pr_html_url": "https://github.com/yt-dlp/yt-dlp/pull/4357", "file_loc": {"base_commit": "5f2da312fa66d6f001ca4d8d79ee281b9b62e9ed", "files": [{"path": "README.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [1163]}}}, {"path": "test/test_utils.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [41, 1824]}}}, {"path": "yt_dlp/utils.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [3487, 5396]}, "('Config', 'read_file', 5446)": {"add": [5450], "mod": [5448, 5453]}, "(None, 'is_html', 3488)": {"mod": [3491, 3492, 3493, 3494, 3495, 3496, 3497]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["yt_dlp/utils.py"], "doc": ["README.md"], "test": ["test/test_utils.py"], "config": [], "asset": []}}, {"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "2fd226f6a76715e429709d7172183d48e07c7ab3", "iss_has_pr": 1, "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/544", "iss_label": "bug", "title": "Program not running without `_sqlite3` module", "body": "## Checklist\r\n\r\n- [ ] I'm reporting a broken site support issue\r\n- [x] I've verified that I'm running yt-dlp version **2021.07.21**\r\n- [ ] I've checked that all provided URLs are alive and playable in a browser\r\n- [x] I've checked that all URLs and arguments with special characters are properly quoted or escaped\r\n- [x] I've searched the bugtracker for similar bug reports including closed ones\r\n- [x] I've read bugs section in FAQ\r\n\r\n\r\n## Verbose log\r\n\r\n```\r\n$ yt-dlp --verbose --version\r\nTraceback (most recent call last):\r\n File \"/home/me/.local/lib/python3.9/runpy.py\", line 188, in _run_module_as_main\r\n mod_name, mod_spec, code = _get_module_details(mod_name, _Error)\r\n File \"/home/me/.local/lib/python3.9/runpy.py\", line 147, in _get_module_details\r\n return _get_module_details(pkg_main_name, error)\r\n File \"/home/me/.local/lib/python3.9/runpy.py\", line 111, in _get_module_details\r\n __import__(pkg_name)\r\n File \"/home/me/.local/lib/python3.9/site-packages/yt_dlp/__init__.py\", line 16, in <module>\r\n from .options import (\r\n File \"/home/me/.local/lib/python3.9/site-packages/yt_dlp/options.py\", line 22, in <module>\r\n from .cookies import SUPPORTED_BROWSERS\r\n File \"/home/me/.local/lib/python3.9/site-packages/yt_dlp/cookies.py\", line 5, in <module>\r\n import sqlite3\r\n File \"/home/me/.local/lib/python3.9/sqlite3/__init__.py\", line 23, in <module>\r\n from sqlite3.dbapi2 import *\r\n File \"/home/me/.local/lib/python3.9/sqlite3/dbapi2.py\", line 27, in <module>\r\n from _sqlite3 import *\r\nModuleNotFoundError: No module named '_sqlite3'\r\n```\r\n\r\n\r\n## Description\r\n\r\nThe `_sqlite3` Python module seems to be required since version `2021.07.21`.\r\n\r\nCan we make the program work without that module? 
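The usual answer is yes: guard the import and defer the failure to the features that actually need the module, so everything else keeps working (the linked PR's changes are in `yt_dlp/cookies.py`, where the `import sqlite3` from the traceback lives). A sketch of that pattern with illustrative names; the SQL and error message are stand-ins, not yt-dlp's actual code:

```python
try:
    import sqlite3
except ImportError:
    # Some Python builds ship without the _sqlite3 extension module.
    sqlite3 = None

def extract_browser_cookies(database_path):
    # Illustrative only: real extraction must also deal with locked and
    # encrypted cookie databases.
    if sqlite3 is None:
        raise RuntimeError(
            'extracting browser cookies requires a Python build with sqlite3')
    conn = sqlite3.connect(database_path)
    try:
        return conn.execute(
            'SELECT host_key, name, value FROM cookies').fetchall()
    finally:
        conn.close()
```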
It is of course OK that some functions are disabled in that situation.", "pr_html_url": "https://github.com/yt-dlp/yt-dlp/pull/554", "file_loc": {"base_commit": "2fd226f6a76715e429709d7172183d48e07c7ab3", "files": [{"path": "yt_dlp/cookies.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [25], "mod": [5]}, "(None, '_extract_firefox_cookies', 91)": {"add": [92]}, "(None, '_extract_chrome_cookies', 196)": {"add": [197]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["yt_dlp/cookies.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "f14c2333481c63c24017a41ded7d8f36726504b7", "iss_has_pr": 1, "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/3005", "iss_label": "site-bug", "title": "Can't extract from sportdeutschland.tv", "body": "### Checklist\n\n- [X] I'm reporting a site feature request\n- [X] I've verified that I'm running yt-dlp version **2022.03.08.1**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))\n- [X] I've checked that all provided URLs are alive and playable in a browser\n- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required\n\n### Region\n\nGermany\n\n### Example URLs\n\nhttps://sportdeutschland.tv/deutscherbadmintonverband/bwf-tour-1-runde-feld-1-yonex-gainward-german-open-2022-0\n\n### Description\n\nCan't extract from this link:\r\n\r\nhttps://sportdeutschland.tv/deutscherbadmintonverband/bwf-tour-1-runde-feld-1-yonex-gainward-german-open-2022-0\n\n### Verbose log\n\n```shell\n[debug] Command-line config: ['-vU', 'https://sportdeutschland.tv/deutscherbadmintonverband/bwf-tour-1-runde-feld-1-yonex-gainward-german-open-2022-0', '--verbose']\r\n[debug] Encodings: locale cp1252, fs utf-8, out utf-8 (No ANSI), err utf-8 (No ANSI), pref cp1252\r\n[debug] yt-dlp version 2022.03.08.1 [c0c2c57] (win_exe)\r\n[debug] Python version 3.8.10 (CPython 64bit) - Windows-7-6.1.7601-SP1\r\n[debug] exe versions: ffmpeg 2022-01-10-git-f37e66b393-full_build-www.gyan.dev (setts), ffprobe 2022-01-10-git-f37e66b393-full_build-www.gyan.dev\r\n[debug] Optional libraries: brotli, Cryptodome, mutagen, sqlite, websockets\r\n[debug] Proxy map: {}\r\nLatest version: 2022.03.08.1, Current version: 2022.03.08.1\r\nyt-dlp is up to date (2022.03.08.1)\r\n[debug] [SportDeutschland] Extracting URL: https://sportdeutschland.tv/deutscherbadmintonverband/bwf-tour-1-runde-feld-1-yonex-gainward-german-open-2022-0\r\n[SportDeutschland] deutscherbadmintonverband/bwf-tour-1-runde-feld-1-yonex-gainward-german-open-2022-0: Downloading JSON metadata\r\nTraceback (most recent call last):\r\n File \"yt_dlp\\extractor\\common.py\", line 735, in _request_webpage\r\n File \"yt_dlp\\YoutubeDL.py\", line 3591, in urlopen\r\n File \"urllib\\request.py\", line 531, in open\r\n File \"urllib\\request.py\", line 640, in http_response\r\n File \"urllib\\request.py\", line 569, in error\r\n File \"urllib\\request.py\", line 502, in 
_call_chain\r\n File \"urllib\\request.py\", line 649, in http_error_default\r\nurllib.error.HTTPError: HTTP Error 404: Not Found\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"yt_dlp\\extractor\\common.py\", line 617, in extract\r\n File \"yt_dlp\\extractor\\sportdeutschland.py\", line 47, in _real_extract\r\n File \"yt_dlp\\extractor\\common.py\", line 997, in _download_json\r\n File \"yt_dlp\\extractor\\common.py\", line 976, in _download_json_handle\r\n File \"yt_dlp\\extractor\\common.py\", line 768, in _download_webpage_handle\r\n File \"yt_dlp\\extractor\\common.py\", line 753, in _request_webpage\r\nyt_dlp.utils.ExtractorError: Unable to download JSON metadata: HTTP Error 404: Not Found (caused by <HTTPError 404: 'Not Found'>); please report this issue on https://github.com/yt-dlp/yt-dlp , filling out the \"Broken site\" issue template\r\nproperly. Confirm you are on the latest version using yt-dlp -U\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"yt_dlp\\YoutubeDL.py\", line 1389, in wrapper\r\n File \"yt_dlp\\YoutubeDL.py\", line 1459, in __extract_info\r\n File \"yt_dlp\\extractor\\common.py\", line 643, in extract\r\nyt_dlp.utils.ExtractorError: [SportDeutschland] deutscherbadmintonverband/bwf-tour-1-runde-feld-1-yonex-gainward-german-open-2022-0: Unable to download JSON metadata: HTTP Error 404: Not Found (caused by <HTTPError 404: 'Not Found'>); please report this issue on https://github.com/yt-dlp/yt-dlp , filling out the \"Broken site\" issue template properly. Confirm you are on the latest version using yt-dlp -U\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"yt_dlp\\__main__.py\", line 19, in <module>\r\n File \"yt_dlp\\__init__.py\", line 864, in main\r\n File \"yt_dlp\\__init__.py\", line 854, in _real_main\r\n File \"yt_dlp\\YoutubeDL.py\", line 3254, in download\r\n File \"yt_dlp\\YoutubeDL.py\", line 3227, in wrapper\r\n File \"yt_dlp\\YoutubeDL.py\", line 1380, in extract_info\r\n File \"yt_dlp\\YoutubeDL.py\", line 1407, in wrapper\r\n File \"yt_dlp\\utils.py\", line 1088, in format_traceback\r\nTypeError: format_exception() missing 2 required positional arguments: 'value' and 'tb'\r\n[31380] Failed to execute script '__main__' due to unhandled exception!\n```\n", "pr_html_url": "https://github.com/yt-dlp/yt-dlp/pull/6041", "file_loc": {"base_commit": "f14c2333481c63c24017a41ded7d8f36726504b7", "files": [{"path": "yt_dlp/extractor/sportdeutschland.py", "status": "modified", "Loc": {"('SportDeutschlandIE', '_real_extract', 42)": {"add": [95], "mod": [44, 45, 47, 48, 49, 52, 53, 54, 56, 58, 59, 60, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94]}, "(None, None, None)": {"mod": [3, 4, 5, 6, 7, 8, 9]}, "('SportDeutschlandIE', None, 13)": {"mod": [16, 18, 20, 21, 22, 23, 24, 25, 26, 27, 29, 31, 32, 33, 34, 35, 36, 37, 38, 39]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["yt_dlp/extractor/sportdeutschland.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "3e35aa32c74bc108375be8c8b6b3bfc90dfff1b4", "iss_has_pr": 1, 
"iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/9652", "iss_label": "DRM\nsite-bug\npatch-available", "title": "on.orf.at not complete DRM detection", "body": "### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE\r\n\r\n\r\n### Checklist\r\n\r\n- [x] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\\* field\r\n- [X] I'm reporting that yt-dlp is broken on a **supported** site\r\n- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))\r\n- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details\r\n- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)\r\n- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates\r\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\r\n- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required\r\n\r\n### Region\r\n\r\nAustria\r\n\r\n### Provide a description that is worded well enough to be understood\r\n\r\nI found a video that is DRM Protected but the `-F` parameter reports available formats to download:\r\nI added the `--allow-unplayable-formats` for better understanding, what is marked as DRM and what not.\r\nAll should be marked but some aren't\r\n_See Complete Verbose Output_\r\n\r\n\r\nall of them are DRM protected what can be found out by `--check-formats`\r\n\r\n```\r\n/tmp 3.9s [1] nix run -- nixpkgs#yt-dlp \"https://on.orf.at/video/14217002/dsf\" --check-formats -vU\r\n[debug] Command-line config: ['https://on.orf.at/video/14217002/dsf', '--check-formats', '-vU']\r\n[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8\r\n[debug] yt-dlp version stable@2024.03.10 from yt-dlp/yt-dlp [615a84447] (pip)\r\n[debug] Python 3.11.8 (CPython x86_64 64bit) - Linux-6.1.77-x86_64-with-glibc2.38 (OpenSSL 3.0.13 30 Jan 2024, glibc 2.38)\r\n[debug] exe versions: ffmpeg 6.0 (setts), ffprobe 6.0, rtmpdump 2.4\r\n[debug] Optional libraries: Cryptodome-3.18.0, brotlicffi-1.1.0.0, certifi-2023.07.22, mutagen-1.47.0, requests-2.31.0, secretstorage-3.3.3, sqlite3-3.43.2, urllib3-2.0.7, websockets-11.0.3\r\n[debug] Proxy map: {}\r\n[debug] Request Handlers: urllib, requests\r\n[debug] Loaded 1803 extractors\r\n[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest\r\nLatest version: stable@2024.03.10 from yt-dlp/yt-dlp\r\nyt-dlp is up to date (stable@2024.03.10 from yt-dlp/yt-dlp)\r\n[orf:on] Extracting URL: https://on.orf.at/video/14217002/dsf\r\n[orf:on] dsf: Downloading webpage\r\n[orf:on] dsf: Downloading JSON metadata\r\n[orf:on] dsf: Downloading m3u8 information\r\n[orf:on] dsf: Downloading m3u8 information\r\n[orf:on] dsf: Downloading MPD manifest\r\n[orf:on] dsf: Downloading MPD manifest\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, 
asr, proto, vext, aext, hasaud, source, id\r\n[debug] Default format spec: bestvideo*+bestaudio/best\r\n[info] Testing format hls-3192-1\r\n[hlsnative] Downloading m3u8 manifest\r\nERROR: This format is DRM protected; Try selecting another format with --format or add --check-formats to automatically fallback to the next best format\r\n\r\n[info] Unable to download format hls-3192-1. Skipping...\r\n[info] Testing format hls-3192-0\r\n[hlsnative] Downloading m3u8 manifest\r\nERROR: This format is DRM protected; Try selecting another format with --format or add --check-formats to automatically fallback to the next best format\r\n\r\n[info] Unable to download format hls-3192-0. Skipping...\r\n[info] Testing format hls-1992-1\r\n[hlsnative] Downloading m3u8 manifest\r\nERROR: This format is DRM protected; Try selecting another format with --format or add --check-formats to automatically fallback to the next best format\r\n\r\n[info] Unable to download format hls-1992-1. Skipping...\r\n[info] Testing format hls-1992-0\r\n[hlsnative] Downloading m3u8 manifest\r\nERROR: This format is DRM protected; Try selecting another format with --format or add --check-formats to automatically fallback to the next best format\r\n\r\n[info] Unable to download format hls-1992-0. Skipping...\r\n[info] Testing format hls-992-1\r\n[hlsnative] Downloading m3u8 manifest\r\nERROR: This format is DRM protected; Try selecting another format with --format or add --check-formats to automatically fallback to the next best format\r\n\r\n[info] Unable to download format hls-992-1. Skipping...\r\n[info] Testing format hls-992-0\r\n[hlsnative] Downloading m3u8 manifest\r\nERROR: This format is DRM protected; Try selecting another format with --format or add --check-formats to automatically fallback to the next best format\r\n\r\n[info] Unable to download format hls-992-0. Skipping...\r\n[info] Testing format dash-p0aa0br192000-1\r\n[dashsegments] Total fragments: 1\r\n[download] Destination: /tmp/tmp973tt42o.tmp\r\n[download] 100% of 651.00B in 00:00:00 at 9.21KiB/s\r\n[info] Testing format hls-3192-1\r\n[hlsnative] Downloading m3u8 manifest\r\nERROR: This format is DRM protected; Try selecting another format with --format or add --check-formats to automatically fallback to the next best format\r\n\r\n[info] Unable to download format hls-3192-1. Skipping...\r\n[info] Testing format hls-3192-0\r\n[hlsnative] Downloading m3u8 manifest\r\nERROR: This format is DRM protected; Try selecting another format with --format or add --check-formats to automatically fallback to the next best format\r\n\r\n[info] Unable to download format hls-3192-0. Skipping...\r\n[info] Testing format hls-1992-1\r\n[hlsnative] Downloading m3u8 manifest\r\nERROR: This format is DRM protected; Try selecting another format with --format or add --check-formats to automatically fallback to the next best format\r\n\r\n[info] Unable to download format hls-1992-1. Skipping...\r\n[info] Testing format hls-1992-0\r\n[hlsnative] Downloading m3u8 manifest\r\nERROR: This format is DRM protected; Try selecting another format with --format or add --check-formats to automatically fallback to the next best format\r\n\r\n[info] Unable to download format hls-1992-0. Skipping...\r\n[info] Testing format hls-992-1\r\n[hlsnative] Downloading m3u8 manifest\r\nERROR: This format is DRM protected; Try selecting another format with --format or add --check-formats to automatically fallback to the next best format\r\n\r\n[info] Unable to download format hls-992-1. 
Skipping...\r\n[info] Testing format hls-992-0\r\n[hlsnative] Downloading m3u8 manifest\r\nERROR: This format is DRM protected; Try selecting another format with --format or add --check-formats to automatically fallback to the next best format\r\n\r\n[info] Unable to download format hls-992-0. Skipping...\r\nERROR: [orf:on] 14217002: Requested format is not available. Use --list-formats for a list of available formats\r\nTraceback (most recent call last):\r\n File \"/nix/store/rmfvh66k2rr05djcqxx61v59wr569xmb-python3.11-yt-dlp-2024.3.10/lib/python3.11/site-packages/yt_dlp/YoutubeDL.py\", line 1594, in wrapper\r\n return func(self, *args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/nix/store/rmfvh66k2rr05djcqxx61v59wr569xmb-python3.11-yt-dlp-2024.3.10/lib/python3.11/site-packages/yt_dlp/YoutubeDL.py\", line 1750, in __extract_info\r\n return self.process_ie_result(ie_result, download, extra_info)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/nix/store/rmfvh66k2rr05djcqxx61v59wr569xmb-python3.11-yt-dlp-2024.3.10/lib/python3.11/site-packages/yt_dlp/YoutubeDL.py\", line 1809, in process_ie_result\r\n ie_result = self.process_video_result(ie_result, download=download)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/nix/store/rmfvh66k2rr05djcqxx61v59wr569xmb-python3.11-yt-dlp-2024.3.10/lib/python3.11/site-packages/yt_dlp/YoutubeDL.py\", line 2930, in process_video_result\r\n raise ExtractorError(\r\nyt_dlp.utils.ExtractorError: [orf:on] 14217002: Requested format is not available. Use --list-formats for a list of available formats\r\n```\r\n\r\nI created a openapi description for the new `v4.3` api that can be found here:\r\nhttps://gist.github.com/TuxCoder/6987f49e01d8ef826037cb99afdcc1b2\r\nThe interessting part is the public api\r\nhttps://gist.github.com/TuxCoder/6987f49e01d8ef826037cb99afdcc1b2#file-openapiv3-yaml-L626\r\nwith the content of an `Episode`\r\nhttps://gist.github.com/TuxCoder/6987f49e01d8ef826037cb99afdcc1b2#file-openapiv3-yaml-L813\r\n\r\nthere is a field calld `is_drm_protected` what should be reliable \r\n\r\nEdit:\r\nThere is also the same field for each `Source`\r\nhttps://gist.github.com/TuxCoder/6987f49e01d8ef826037cb99afdcc1b2#file-openapiv3-yaml-L674\r\n\r\nThe Json in question can be fetched here:\r\nhttps://api-tvthek.orf.at/api/v4.3/public/episode/encrypted/M2RTbGZlazAzbnNMS2RqNEpzZDE0MjE3MDAy\r\n\r\nEndEdit\r\n\r\nI tested this with a small patch\r\n```patch\r\ndiff --git a/yt_dlp/extractor/orf.py b/yt_dlp/extractor/orf.py\r\nindex 526e9acaf..4ff4cf90c 100644\r\n--- a/yt_dlp/extractor/orf.py\r\n+++ b/yt_dlp/extractor/orf.py\r\n@@ -590,6 +590,9 @@ def _extract_video(self, video_id, display_id):\r\n api_json = self._download_json(\r\n f'https://api-tvthek.orf.at/api/v4.3/public/episode/encrypted/{encrypted_id}', display_id)\r\n \r\n+\r\n+ has_drm = traverse_obj(api_json, 'is_drm_protected', {bool})\r\n+\r\n formats, subtitles = [], {}\r\n for manifest_type in traverse_obj(api_json, ('sources', {dict.keys}, ...)):\r\n for manifest_url in traverse_obj(api_json, ('sources', manifest_type, ..., 'src', {url_or_none})):\r\n@@ -601,6 +604,8 @@ def _extract_video(self, video_id, display_id):\r\n manifest_url, display_id, fatal=False, mpd_id='dash')\r\n else:\r\n continue\r\n+ for fmt in fmts:\r\n+ fmt['has_drm'] = has_drm\r\n formats.extend(fmts)\r\n self._merge_subtitles(subs, target=subtitles)\r\n```\r\n\r\nwhat looks like to fix the problem:\r\nNow all are formats are shown as DRM 
protected\r\n\r\n```\r\n[~/projects/yt-dlp]$ python3 yt_dlp/__main__.py \"https://on.orf.at/video/14217002/dsf\" -F --allow-unplayable-formats -vU\r\n[debug] Command-line config: ['https://on.orf.at/video/14217002/dsf', '-F', '--allow-unplayable-formats', '-vU']\r\nWARNING: You have asked for UNPLAYABLE formats to be listed/downloaded. This is a developer option intended for debugging.\r\n If you experience any issues while using this option, DO NOT open a bug report\r\n[debug] Encodings: locale utf-8, fs utf-8, pref utf-8, out utf-8, error utf-8, screen utf-8\r\n[debug] yt-dlp version stable@2024.03.10 from yt-dlp/yt-dlp [615a84447] (source)\r\n[debug] Lazy loading extractors is disabled\r\n[debug] Git HEAD: 79a451e57\r\n[debug] Python 3.11.8 (CPython x86_64 64bit) - Linux-6.1.77-x86_64-with-glibc2.38 (OpenSSL 3.0.13 30 Jan 2024, glibc 2.38)\r\n[debug] exe versions: ffmpeg 6.0 (setts), ffprobe 6.0\r\n[debug] Optional libraries: Cryptodome-3.18.0, brotlicffi-1.1.0.0, certifi-2023.07.22, mutagen-1.47.0, requests-2.31.0, secretstorage-3.3.3, sqlite3-3.43.2, urllib3-2.0.7, websockets-11.0.3\r\n[debug] Proxy map: {}\r\n[debug] Request Handlers: urllib, requests\r\n[debug] Loaded 1810 extractors\r\n[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest\r\nLatest version: stable@2024.03.10 from yt-dlp/yt-dlp\r\nyt-dlp is up to date (stable@2024.03.10 from yt-dlp/yt-dlp)\r\n[orf:on] Extracting URL: https://on.orf.at/video/14217002/dsf\r\n[orf:on] dsf: Downloading webpage\r\n[orf:on] dsf: Downloading JSON metadata\r\n[orf:on] dsf: Downloading m3u8 information\r\n[orf:on] dsf: Downloading m3u8 information\r\n[orf:on] dsf: Downloading MPD manifest\r\n[orf:on] dsf: Downloading MPD manifest\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[info] Available formats for 14217002:\r\nID EXT RESOLUTION FPS \u2502 FILESIZE TBR PROTO \u2502 VCODEC VBR ACODEC ABR ASR MORE INFO\r\n\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\r\nhls-audio-Deutsch-0 m3u8 audio only \u2502 m3u8 \u2502 audio only unknown [de] DRM, Deutsch\r\nhls-audio-Deutsch-1 m3u8 audio only \u2502 m3u8 \u2502 audio only unknown [de] DRM, Deutsch\r\ndash-p0aa0br192000-0 m4a audio only \u2502 ~ 54.70MiB 192k dash \u2502 audio only mp4a.40.2 192k 48k [de] DRM, DASH audio, m4a_dash\r\ndash-p0aa0br192000-1 m4a audio only \u2502 ~ 54.70MiB 192k dash \u2502 audio only mp4a.40.2 192k 48k [de] DRM, DASH audio, m4a_dash\r\nhls-992-0 mp4 640x360 \u2502 ~282.63MiB 992k m3u8 \u2502 unknown unknown DRM\r\nhls-992-1 mp4 640x360 \u2502 ~282.63MiB 992k m3u8 \u2502 unknown unknown DRM\r\ndash-p0va0br801596-0 mp4 640x360 25 \u2502 ~228.38MiB 
802k dash \u2502 avc1.64001e 802k video only DRM, DASH video, mp4_dash\r\ndash-p0va0br801596-1 mp4 640x360 25 \u2502 ~228.38MiB 802k dash \u2502 avc1.64001e 802k video only DRM, DASH video, mp4_dash\r\nhls-1992-0 mp4 960x540 \u2502 ~567.54MiB 1992k m3u8 \u2502 unknown unknown DRM\r\nhls-1992-1 mp4 960x540 \u2502 ~567.54MiB 1992k m3u8 \u2502 unknown unknown DRM\r\ndash-p0va0br1801680-0 mp4 960x540 25 \u2502 ~513.32MiB 1802k dash \u2502 avc1.64001f 1802k video only DRM, DASH video, mp4_dash\r\ndash-p0va0br1801680-1 mp4 960x540 25 \u2502 ~513.32MiB 1802k dash \u2502 avc1.64001f 1802k video only DRM, DASH video, mp4_dash\r\nhls-3192-0 mp4 1280x720 \u2502 ~909.43MiB 3192k m3u8 \u2502 unknown unknown DRM\r\nhls-3192-1 mp4 1280x720 \u2502 ~909.43MiB 3192k m3u8 \u2502 unknown unknown DRM\r\ndash-p0va0br3001976-0 mp4 1280x720 25 \u2502 ~855.29MiB 3002k dash \u2502 avc1.64001f 3002k video only DRM, DASH video, mp4_dash\r\ndash-p0va0br3001976-1 mp4 1280x720 25 \u2502 ~855.29MiB 3002k dash \u2502 avc1.64001f 3002k video only DRM, DASH video, mp4_dash\r\n```\r\n\r\nbut I'm not sure its the right place\r\n\r\n\r\nAlso thanks to all Maintainer / Contributor, this is a awesome tool.\r\n\r\n### Provide verbose output that clearly demonstrates the problem\r\n\r\n- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)\r\n- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead\r\n- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below\r\n\r\n### Complete Verbose Output\r\n\r\n```shell\r\n/tmp 1.6s \u2771 nix run -- nixpkgs#yt-dlp \"https://on.orf.at/video/14217002/dsf\" --allow-unplayable-formats -F -vU\r\n[debug] Command-line config: ['https://on.orf.at/video/14217002/dsf', '--allow-unplayable-formats', '-F', '-vU']\r\nWARNING: You have asked for UNPLAYABLE formats to be listed/downloaded. 
This is a developer option intended for debugging.\r\n If you experience any issues while using this option, DO NOT open a bug report\r\n[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8\r\n[debug] yt-dlp version stable@2024.03.10 from yt-dlp/yt-dlp [615a84447] (pip)\r\n[debug] Python 3.11.8 (CPython x86_64 64bit) - Linux-6.1.77-x86_64-with-glibc2.38 (OpenSSL 3.0.13 30 Jan 2024, glibc 2.38)\r\n[debug] exe versions: ffmpeg 6.0 (setts), ffprobe 6.0, rtmpdump 2.4\r\n[debug] Optional libraries: Cryptodome-3.18.0, brotlicffi-1.1.0.0, certifi-2023.07.22, mutagen-1.47.0, requests-2.31.0, secretstorage-3.3.3, sqlite3-3.43.2, urllib3-2.0.7, websockets-11.0.3\r\n[debug] Proxy map: {}\r\n[debug] Request Handlers: urllib, requests\r\n[debug] Loaded 1803 extractors\r\n[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest\r\nLatest version: stable@2024.03.10 from yt-dlp/yt-dlp\r\nyt-dlp is up to date (stable@2024.03.10 from yt-dlp/yt-dlp)\r\n[orf:on] Extracting URL: https://on.orf.at/video/14217002/dsf\r\n[orf:on] dsf: Downloading webpage\r\n[orf:on] dsf: Downloading JSON metadata\r\n[orf:on] dsf: Downloading m3u8 information\r\n[orf:on] dsf: Downloading m3u8 information\r\n[orf:on] dsf: Downloading MPD manifest\r\n[orf:on] dsf: Downloading MPD manifest\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[info] Available formats for 14217002:\r\nID EXT RESOLUTION FPS \u2502 FILESIZE TBR PROTO \u2502 VCODEC VBR ACODEC ABR ASR MORE INFO\r\n\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\r\nhls-audio-Deutsch-0 m3u8 audio only \u2502 m3u8 \u2502 audio only unknown [de] Deutsch\r\nhls-audio-Deutsch-1 m3u8 audio only \u2502 m3u8 \u2502 audio only unknown [de] Deutsch\r\ndash-p0aa0br192000-0 m4a audio only \u2502 ~ 56.02MiB 192k dash \u2502 audio only mp4a.40.2 192k 48k [de] DASH audio, m4a_dash\r\ndash-p0aa0br192000-1 m4a audio only \u2502 ~ 56.02MiB 192k dash \u2502 audio only mp4a.40.2 192k 48k [de] DASH audio, m4a_dash\r\nhls-992-0 mp4 640x360 \u2502 ~289.41MiB 992k m3u8 \u2502 unknown unknown\r\nhls-992-1 mp4 640x360 \u2502 ~289.41MiB 992k m3u8 \u2502 unknown unknown\r\ndash-p0va0br801596-0 mp4 640x360 25 \u2502 ~233.86MiB 802k dash \u2502 avc1.64001e 802k video only DRM, DASH video, mp4_dash\r\ndash-p0va0br801596-1 mp4 640x360 25 \u2502 ~233.86MiB 802k dash \u2502 avc1.64001e 802k video only DRM, DASH video, mp4_dash\r\nhls-1992-0 mp4 960x540 \u2502 ~581.16MiB 1992k m3u8 \u2502 unknown unknown\r\nhls-1992-1 mp4 960x540 \u2502 ~581.16MiB 1992k m3u8 \u2502 unknown unknown\r\ndash-p0va0br1801680-0 mp4 960x540 25 \u2502 ~525.64MiB 1802k dash \u2502 avc1.64001f 1802k video only 
DRM, DASH video, mp4_dash\r\ndash-p0va0br1801680-1 mp4 960x540 25 \u2502 ~525.64MiB 1802k dash \u2502 avc1.64001f 1802k video only DRM, DASH video, mp4_dash\r\nhls-3192-0 mp4 1280x720 \u2502 ~931.26MiB 3192k m3u8 \u2502 unknown unknown\r\nhls-3192-1 mp4 1280x720 \u2502 ~931.26MiB 3192k m3u8 \u2502 unknown unknown\r\ndash-p0va0br3001976-0 mp4 1280x720 25 \u2502 ~875.82MiB 3002k dash \u2502 avc1.64001f 3002k video only DRM, DASH video, mp4_dash\r\ndash-p0va0br3001976-1 mp4 1280x720 25 \u2502 ~875.82MiB 3002k dash \u2502 avc1.64001f 3002k video only DRM, DASH video, mp4_dash\r\n```\r\n", "pr_html_url": "https://github.com/yt-dlp/yt-dlp/pull/9677", "file_loc": {"base_commit": "3e35aa32c74bc108375be8c8b6b3bfc90dfff1b4", "files": [{"path": "yt_dlp/extractor/orf.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [16]}, "('ORFONIE', None, 570)": {"add": [585], "mod": [572, 588]}, "('ORFONIE', '_extract_video', 588)": {"add": [606, 611], "mod": [591, 598, 601]}, "('ORFONIE', '_real_extract', 619)": {"mod": [620, 621, 628, 629]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["yt_dlp/extractor/orf.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "2314b4d89fc111ddfcb25937210f1f1c2390cc4a", "iss_has_pr": 1, "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/4776", "iss_label": "bug", "title": "`InfoExtractor._get_cookies` fails if values contain quotes", "body": "### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE\n\n- [X] I understand that I will be **blocked** if I remove or skip any mandatory\\* field\n\n### Checklist\n\n- [X] I'm reporting a bug unrelated to a specific site\n- [X] I've verified that I'm running yt-dlp version **2022.08.19** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)\n- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details\n- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)\n- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n\n### Provide a description that is worded well enough to be understood\n\n`InfoExtractor._get_cookies` uses `http.cookies.SimpleCookie` to process the cookies. 
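That standard-library parser's quirks can be reproduced without yt-dlp at all; this snippet only assumes CPython's `http.cookies`:

```python
from http.cookies import SimpleCookie

jar = SimpleCookie()
# A stray quote makes the rest of the header unparseable; SimpleCookie
# stops at the malformed part instead of skipping it, so 'later' is lost.
jar.load('good=1; bad=va"lue; later=2')
print(sorted(jar))  # no 'later'

# A value containing quotes survives when set explicitly, because it is
# encoded (quoted and escaped) on output.
jar['quoted'] = 'has "quotes"'
print(jar.output())  # ... quoted="has \"quotes\""
```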
Analogue to #4692 the parsing will fail fast instead of skipping the invalid values.\r\n\r\nSimpleCookie allows values with quotes if set explicitly.\n\n### Provide verbose output that clearly demonstrates the problem\n\n- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)\n- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below\n\n### Complete Verbose Output\n\n```shell\n[debug] Command-line config: ['--cookies-from-browser', 'firefox', '-j', 'https://beta.crunchyroll.com/de/watch/GG1U2Q50J/the-former-couple-refuses-to-say', '-vU']\r\n[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8 \r\n[debug] yt-dlp version 2022.08.19 [48c88e0] (pip) \r\n[debug] Python 3.10.5 (CPython 64bit) - Windows-10-10.0.22000-SP0 \r\n[debug] Checking exe version: ffmpeg -bsfs \r\n[debug] Checking exe version: ffprobe -bsfs \r\n[debug] exe versions: ffmpeg n5.1-10-g6ee1996721-20220822 (setts), ffprobe n5.1-10-g6ee1996721-20220822 \r\n[debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3\r\n[Cookies] Extracting cookies from firefox\r\n[debug] Extracting cookies from: \"C:\\Users\\grub4k\\AppData\\Roaming\\Mozilla\\Firefox\\Profiles\\kbiex092.default-release\\cookies.sqlite\"\r\n[Cookies] Extracted 790 cookies from firefox\r\n[debug] Proxy map: {} \r\n[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest \r\nLatest version: 2022.08.19, Current version: 2022.08.19 \r\nyt-dlp is up to date (2022.08.19) \r\n[debug] [crunchyroll:beta] Extracting URL: https://beta.crunchyroll.com/de/watch/GG1U2Q50J/the-former-couple-refuses-to-say\r\n[crunchyroll:beta] the-former-couple-refuses-to-say: Downloading webpage \r\nERROR: GG1U2Q50J: An extractor error has occurred. (caused by KeyError('byId')); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. 
Confirm you are on the latest version using yt-dlp -U\r\n File \"C:\\Users\\grub4k\\AppData\\Local\\pypoetry\\Cache\\virtualenvs\\crunchyload-sk9Pq0JJ-py3.10\\lib\\site-packages\\yt_dlp\\extractor\\common.py\", line 666, in extract\r\n ie_result = self._real_extract(url)\r\n File \"C:\\Users\\grub4k\\AppData\\Local\\pypoetry\\Cache\\virtualenvs\\crunchyload-sk9Pq0JJ-py3.10\\lib\\site-packages\\yt_dlp\\extractor\\crunchyroll.py\", line 805, in _real_extract\r\n return self._redirect_from_beta(url, lang, internal_id, display_id, True, CrunchyrollIE.ie_key())\r\n File \"C:\\Users\\grub4k\\AppData\\Local\\pypoetry\\Cache\\virtualenvs\\crunchyload-sk9Pq0JJ-py3.10\\lib\\site-packages\\yt_dlp\\extractor\\crunchyroll.py\", line 752, in _redirect_from_beta\r\n content_data = initial_state['content']['byId'][internal_id]\r\nKeyError: 'byId'\n```\n", "pr_html_url": "https://github.com/yt-dlp/yt-dlp/pull/4780", "file_loc": {"base_commit": "2314b4d89fc111ddfcb25937210f1f1c2390cc4a", "files": [{"path": "test/test_cookies.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [5]}, "('TestCookies', 'test_pbkdf2_sha1', 137)": {"add": [139]}}}, {"path": "yt_dlp/cookies.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [3]}, "(None, '_parse_browser_specification', 985)": {"add": [991]}}}, {"path": "yt_dlp/extractor/common.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [23]}, "('InfoExtractor', '_get_cookies', 3633)": {"mod": [3634]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2 (a bit hesitant: an error occurred, but the reported error was being used to verify a particular problem.)", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["yt_dlp/cookies.py", "yt_dlp/extractor/common.py"], "doc": [], "test": ["test/test_cookies.py"], "config": [], "asset": []}}, {"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "a79cba0c95b8b74d2ca4f7fbf6ffe76e34ed7221", "iss_has_pr": 1, "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/2840", "iss_label": "site-request", "title": "Site support request for: ixigua.com", "body": "### Checklist\r\n\r\n- [X] I'm reporting a new site support request\r\n- [X] I've verified that I'm running yt-dlp version **2022.02.04**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))\r\n- [X] I've checked that all provided URLs are alive and playable in a browser\r\n- [X] I've checked that none of provided URLs [violate any copyrights](https://github.com/ytdl-org/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge\r\n- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones.
DO NOT post duplicates\r\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\r\n- [x] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and am willing to share it if required\r\n\r\n### Region\r\n\r\nThailand, Mainland China, probably worldwide.\r\n\r\n### Example URLs\r\n\r\nhttps://www.ixigua.com/6996881461559165471\r\nhttps://www.ixigua.com/6901922393657180679?id=6963688388327113255&logTag=c159ae59d579c199c066\r\n\r\n### Description\r\n\r\nXigua Video (https://www.ixigua.com/) is an online video-sharing platform owned by ByteDance. As of June 2020, the platform has 131 million monthly active users.\r\n\r\n### Verbose log\r\n\r\n```shell\r\n[debug] Command-line config: ['-vU', 'https://www.ixigua.com/6996881461559165471']\r\n[debug] Encodings: locale cp874, fs utf-8, out utf-8, err utf-8, pref cp874\r\n[debug] yt-dlp version 2022.02.04 [c1653e9ef]\r\n[debug] Python version 3.7.2 (CPython 64bit) - Windows-10-10.0.18362-SP0\r\n[debug] exe versions: ffmpeg 2021-10-28-git-e84c83ef98-full_build-www.gyan.dev (setts), ffprobe 2021-10-28-git-e84c83ef98-full_build-www.gyan.dev\r\n[debug] Optional libraries: Cryptodome, mutagen, sqlite, websockets\r\n[debug] Proxy map: {}\r\nLatest version: 2022.02.04, Current version: 2022.02.04\r\nyt-dlp is up to date (2022.02.04)\r\n[debug] [generic] Extracting URL: https://www.ixigua.com/6996881461559165471\r\n[generic] 6996881461559165471: Requesting header\r\nWARNING: [generic] Falling back on generic information extractor.\r\n[generic] 6996881461559165471: Downloading webpage\r\n[generic] 6996881461559165471: Extracting information\r\n[debug] Looking for video embeds\r\nERROR: Unsupported URL: https://www.ixigua.com/6996881461559165471\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\User\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\yt_dlp\\YoutubeDL.py\", line 1381, in wrapper\r\n return func(self, *args, **kwargs)\r\n File \"C:\\Users\\User\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\yt_dlp\\YoutubeDL.py\", line 1451, in __extract_info\r\n ie_result = ie.extract(url)\r\n File \"C:\\Users\\User\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\yt_dlp\\extractor\\common.py\", line 612, in extract\r\n ie_result = self._real_extract(url)\r\n File \"C:\\Users\\User\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\yt_dlp\\extractor\\generic.py\", line 3986, in _real_extract\r\n raise UnsupportedError(url)\r\nyt_dlp.utils.UnsupportedError: Unsupported URL: https://www.ixigua.com/6996881461559165471\r\n\r\n\r\nC:\\Users\\User>\r\n```\r\n", "pr_html_url": "https://github.com/yt-dlp/yt-dlp/pull/3953", "file_loc": {"base_commit": "a79cba0c95b8b74d2ca4f7fbf6ffe76e34ed7221", "files": [{"path": "yt_dlp/extractor/_extractors.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [722]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["yt_dlp/extractor/_extractors.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "700444c23ddb65f618c2abd942acdc0c58c650b1", "iss_has_pr": 1, "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/3355", "iss_label": 
"bug\npatch-available\nregression", "title": "problem with double-dot segments (`/../`) after the hostname", "body": "### Checklist\n\n- [X] I'm reporting a bug unrelated to a specific site\n- [X] I've verified that I'm running yt-dlp version **2022.04.08** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)\n- [X] I've checked that all provided URLs are alive and playable in a browser\n- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)\n- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n\n### Description\n\nSome URLs have a double-dot section after the hostname, which causes problems in yt-dlp.\r\n\r\nExample: https://streamwo.com/v/gp445h2f \r\nif we resolve this URL we get this:\r\n```\r\n$ yt-dlp --get-url https://streamwo.com/v/gp445h2f \r\nhttps://reoa92d.com/../uploaded/1649416469.mp4#t=0.1\r\n```\r\nWhich has a `../` segment right after the hostname.\r\nOpening this result in browsers, or downloading it using curl is no problem:\r\n```\r\n$ curl -O https://reoa92d.com/../uploaded/1649416469.mp4\r\n...\r\nSucceeds\r\n```\r\nBut yt-dlp fails:\r\n\r\n```\r\n$ yt-dlp https://streamwo.com/v/gp445h2f \r\n[generic] gp445h2f: Requesting header\r\nWARNING: [generic] Falling back on generic information extractor.\r\n[generic] gp445h2f: Downloading webpage\r\n[generic] gp445h2f: Extracting information\r\n[download] Downloading playlist: Streamwo\r\n[generic] playlist Streamwo: Collected 1 videos; downloading 1 of them\r\n[download] Downloading video 1 of 1\r\n[info] gp445h2f: Downloading 1 format(s): 0\r\nERROR: unable to download video data: HTTP Error 400: Bad Request\r\n[download] Finished downloading playlist: Streamwo\r\n```\r\n\r\nmpv (which uses yt-dlp in it's ytdl_hook) fails as well:\r\n\r\n```\r\n$ mpv https://streamwo.com/v/gp445h2f \r\n[ffmpeg] https: HTTP error 400 Bad Request\r\nFailed to open https://reoa92d.com/../uploaded/1649416469.mp4#t=0.1.\r\n\r\nExiting... 
(Errors when loading file)\r\n```\n\n### Verbose log\n\n```shell\n$ yt-dlp -vU https://streamwo.com/v/gp445h2f \r\n[debug] Command-line config: ['-vU', 'https://streamwo.com/v/gp445h2f']\r\n[debug] Encodings: locale UTF-8, fs utf-8, out utf-8, err utf-8, pref UTF-8\r\n[debug] yt-dlp version 2022.04.08 [7884ade65] (zip)\r\n[debug] Python version 3.10.4 (CPython 64bit) - Linux-5.15.32-1-lts-x86_64-with-glibc2.35\r\n[debug] Checking exe version: ffmpeg -bsfs\r\n[debug] Checking exe version: ffprobe -bsfs\r\n[debug] exe versions: ffmpeg 5.0 (setts), ffprobe 5.0, phantomjs 2.1.1, rtmpdump 2.4\r\n[debug] Optional libraries: mutagen, sqlite, websockets\r\n[debug] Proxy map: {}\r\nLatest version: 2022.04.08, Current version: 2022.04.08\r\nyt-dlp is up to date (2022.04.08)\r\n[debug] [generic] Extracting URL: https://streamwo.com/v/gp445h2f\r\n[generic] gp445h2f: Requesting header\r\nWARNING: [generic] Falling back on generic information extractor.\r\n[generic] gp445h2f: Downloading webpage\r\n[generic] gp445h2f: Extracting information\r\n[debug] Looking for video embeds\r\n[debug] Identified a HTML5 media\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), acodec, filesize, fs_approx, tbr, vbr, abr, asr, proto, vext, aext, hasaud, source, id\r\n[download] Downloading playlist: Streamwo\r\n[generic] playlist Streamwo: Collected 1 videos; downloading 1 of them\r\n[download] Downloading video 1 of 1\r\n[debug] Default format spec: bestvideo*+bestaudio/best\r\n[info] gp445h2f: Downloading 1 format(s): 0\r\n[debug] Invoking downloader on \"https://reoa92d.com/../uploaded/1649416469.mp4#t=0.1\"\r\nERROR: unable to download video data: HTTP Error 400: Bad Request\r\nTraceback (most recent call last):\r\n File \"/home/koonix/./yt-dlp/yt_dlp/YoutubeDL.py\", line 3138, in process_info\r\n success, real_download = self.dl(temp_filename, info_dict)\r\n File \"/home/koonix/./yt-dlp/yt_dlp/YoutubeDL.py\", line 2846, in dl\r\n return fd.download(name, new_info, subtitle)\r\n File \"/home/koonix/./yt-dlp/yt_dlp/downloader/common.py\", line 457, in download\r\n ret = self.real_download(filename, info_dict)\r\n File \"/home/koonix/./yt-dlp/yt_dlp/downloader/http.py\", line 369, in real_download\r\n establish_connection()\r\n File \"/home/koonix/./yt-dlp/yt_dlp/downloader/http.py\", line 128, in establish_connection\r\n ctx.data = self.ydl.urlopen(request)\r\n File \"/home/koonix/./yt-dlp/yt_dlp/YoutubeDL.py\", line 3601, in urlopen\r\n return self._opener.open(req, timeout=self._socket_timeout)\r\n File \"/usr/lib/python3.10/urllib/request.py\", line 525, in open\r\n response = meth(req, response)\r\n File \"/usr/lib/python3.10/urllib/request.py\", line 634, in http_response\r\n response = self.parent.error(\r\n File \"/usr/lib/python3.10/urllib/request.py\", line 563, in error\r\n return self._call_chain(*args)\r\n File \"/usr/lib/python3.10/urllib/request.py\", line 496, in _call_chain\r\n result = func(*args)\r\n File \"/usr/lib/python3.10/urllib/request.py\", line 643, in http_error_default\r\n raise HTTPError(req.full_url, code, msg, hdrs, fp)\r\nurllib.error.HTTPError: HTTP Error 400: Bad Request\r\n\r\n[download] Finished downloading playlist: Streamwo\n```\n", "pr_html_url": "https://github.com/yt-dlp/yt-dlp/pull/7662", "file_loc": {"base_commit": "25b6e8f94679b4458550702b46e61249b875a4fd", "files": [{"path": "test/test_networking.py", "status": "modified", "Loc": {"('HTTPTestRequestHandler', 'do_GET', 142)": {"add": [175]}, "('TestHTTPRequestHandler', None, 
316)": {"add": [357]}}}, {"path": "test/test_utils.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [50, 51, 135]}, "('TestUtil', None, 138)": {"mod": [936]}, "('TestUtil', 'test_escape_url', 936)": {"mod": [938, 942, 946, 950, 953]}}}, {"path": "yt_dlp/cookies.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [44], "mod": [36]}, "('YoutubeDLCookieJar', 'get_cookie_header', 1309)": {"mod": [1311]}, "('YoutubeDLCookieJar', 'get_cookies_for_url', 1315)": {"mod": [1320]}}}, {"path": "yt_dlp/networking/_urllib.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [44]}, "('HTTPHandler', 'http_request', 172)": {"mod": [182]}, "('HTTPHandler', 'http_response', 190)": {"mod": [215]}}}, {"path": "yt_dlp/networking/common.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [29, 32]}, "('Request', 'url', 366)": {"mod": [371]}}}, {"path": "yt_dlp/utils/_legacy.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [10]}, "(None, 'sanitized_Request', 199)": {"mod": [200]}}}, {"path": "yt_dlp/utils/_utils.py", "status": "modified", "Loc": {"(None, 'escape_rfc3986', 2467)": {"mod": [2467, 2468, 2469, 2472, 2473, 2474, 2475, 2476, 2477, 2478, 2479, 2480, 2481]}}}, {"path": "yt_dlp/utils/networking.py", "status": "modified", "Loc": {"(None, 'clean_headers', 114)": {"add": [117]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["yt_dlp/utils/_legacy.py", "yt_dlp/networking/_urllib.py", "yt_dlp/utils/_utils.py", "yt_dlp/utils/networking.py", "yt_dlp/cookies.py", "yt_dlp/networking/common.py"], "doc": [], "test": ["test/test_utils.py", "test/test_networking.py"], "config": [], "asset": []}}, {"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "4b5eec0aaa7c02627f27a386591b735b90e681a8", "iss_has_pr": 1, "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/11641", "iss_label": "site-bug\npatch-available", "title": "[TikTok] ERROR: Postprocessing: Conversion failed! when embedding thumbnail", "body": "### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE\n\n- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\\* field\n\n### Checklist\n\n- [X] I'm reporting that yt-dlp is broken on a **supported** site\n- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))\n- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details\n- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)\n- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. 
DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required\n\n### Region\n\n_No response_\n\n### Provide a description that is worded well enough to be understood\n\nMainly the error \"conversion failed\", post processor errors. Lots of videos don't download. Also errors that have to do with \"skipping unsupported chunk: ANMF\" and \"Nothing was written into output file, because at least one of its streams received no packets. Conversion failed!\"\n\n### Provide verbose output that clearly demonstrates the problem\n\n- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)\n- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead\n- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below\n\n### Complete Verbose Output\n\n```shell\n[debug] Command-line config: ['https://www.tiktok.com/@cooperspamsasf/video/7432045283686632710', '--download-archive', 'F:\\\\yt-dlp tiktok likes\\\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']\r\n[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8\r\n[debug] yt-dlp version stable@2024.11.18 from yt-dlp/yt-dlp [7ea278792] (pip)\r\n[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)\r\n[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev\r\n[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1\r\n[debug] Proxy map: {}\r\n[debug] Extracting cookies from: \"C:\\Users\\J\\AppData\\Roaming\\Mozilla\\Firefox\\Profiles\\c2ty66d6.default-release\\cookies.sqlite\"\r\n[debug] Request Handlers: urllib, requests, websockets, curl_cffi\r\n[debug] Loaded 1837 extractors\r\n[debug] Loading archive file 'F:\\\\yt-dlp tiktok likes\\\\archive.txt'\r\n[debug] [TikTok] Found universal data for rehydration\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec, channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[debug] Default format spec: bestvideo*+bestaudio/best\r\n[debug] Invoking http downloader on \"https://v19-webapp-prime.tiktok.com/video/tos/useast2a/tos-useast2a-pve-0068/oAXOvcjeEAZzgjgfgQLKR5SGzeNrxA9ICICxHI/?a=1988&bti=ODszNWYuMDE6&ch=0&cr=3&dr=0&lr=all&cd=0%7C0%7C0%7C&cv=1&br=2404&bt=1202&cs=2&ds=4&ft=4fUEKMk88Zmo0WRLZb4jVaThrpWrKsd.&mime_type=video_mp4&qs=15&rc=NzNpZWU8OzRmNzs0Nzk1aUBpam93dnY5cnh4djMzNzczM0AtMS0uNS41NTIxMTBhXzEyYSNmZW9uMmRjbGVgLS1kMTZzcw%3D%3D&btag=e00088000&expire=1732609535&l=2024112602251903ACD4E62348E641B01E&ply_type=2&policy=2&signature=1e746658933c8ee3a81756c4afee15d3&tk=tt_chain_token\"\r\n[debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -f image2 -pattern_type none -i \"file:HALLOWEEN TYPEE #halloweencostume #swat #duo #blonde #brunette #thatassperfectbaby #soccergirls [7432045283686632710].webp\" -update 1 -movflags +faststart \"file:HALLOWEEN TYPEE #halloweencostume #swat #duo #blonde #brunette #thatassperfectbaby #soccergirls 
[7432045283686632710].png\"\r\n[debug] ffmpeg version 7.0.2-full_build-www.gyan.dev Copyright (c) 2000-2024 the FFmpeg developers\r\n built with gcc 13.2.0 (Rev5, Built by MSYS2 project)\r\n configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libaribb24 --enable-libaribcaption --enable-libdav1d --enable-libdavs2 --enable-libuavs3d --enable-libxevd --enable-libzvbi --enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxeve --enable-libxvid --enable-libaom --enable-libjxl --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-libharfbuzz --enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-dxva2 --enable-d3d11va --enable-d3d12va --enable-ffnvcodec --enable-libvpl --enable-nvdec --enable-nvenc --enable-vaapi --enable-libshaderc --enable-vulkan --enable-libplacebo --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libcodec2 --enable-libilbc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint\r\n libavutil 59. 8.100 / 59. 8.100\r\n libavcodec 61. 3.100 / 61. 3.100\r\n libavformat 61. 1.100 / 61. 1.100\r\n libavdevice 61. 1.100 / 61. 1.100\r\n libavfilter 10. 1.100 / 10. 1.100\r\n libswscale 8. 1.100 / 8. 1.100\r\n libswresample 5. 1.100 / 5. 1.100\r\n libpostproc 58. 1.100 / 58. 1.100\r\n[webp @ 000001545c4432c0] skipping unsupported chunk: ANIM\r\n[webp @ 000001545c4432c0] skipping unsupported chunk: ANMF\r\n[webp @ 000001545c4432c0] skipping unsupported chunk: ANMF\r\n[webp @ 000001545c4432c0] skipping unsupported chunk: ANMF\r\n[webp @ 000001545c4432c0] skipping unsupported chunk: ANMF\r\n[webp @ 000001545c4432c0] skipping unsupported chunk: ANMF\r\n[webp @ 000001545c4432c0] skipping unsupported chunk: ANMF\r\n[webp @ 000001545c4432c0] skipping unsupported chunk: ANMF\r\n[webp @ 000001545c4432c0] skipping unsupported chunk: ANMF\r\n[webp @ 000001545c4432c0] skipping unsupported chunk: ANMF\r\n[webp @ 000001545c4432c0] image data not found\r\n[image2 @ 000001545c441940] Could not find codec parameters for stream 0 (Video: webp, none): unspecified size\r\nConsider increasing the value for the 'analyzeduration' (0) and 'probesize' (5000000) options\r\nInput #0, image2, from 'file:HALLOWEEN TYPEE #halloweencostume #swat #duo #blonde #brunette #thatassperfectbaby #soccergirls [7432045283686632710].webp':\r\n Duration: 00:00:00.04, start: 0.000000, bitrate: N/A\r\n Stream #0:0: Video: webp, none, 25 fps, 25 tbr, 25 tbn\r\nStream mapping:\r\n Stream #0:0 -> #0:0 (webp (native) -> png (native))\r\nPress [q] to stop, [?] 
for help\r\n[webp @ 000001545c469fc0] skipping unsupported chunk: ANIM\r\n[webp @ 000001545c469fc0] skipping unsupported chunk: ANMF\r\n[webp @ 000001545c469fc0] skipping unsupported chunk: ANMF\r\n[webp @ 000001545c469fc0] skipping unsupported chunk: ANMF\r\n[webp @ 000001545c469fc0] skipping unsupported chunk: ANMF\r\n[webp @ 000001545c469fc0] skipping unsupported chunk: ANMF\r\n[webp @ 000001545c469fc0] skipping unsupported chunk: ANMF\r\n[webp @ 000001545c469fc0] skipping unsupported chunk: ANMF\r\n[webp @ 000001545c469fc0] skipping unsupported chunk: ANMF\r\n[webp @ 000001545c469fc0] skipping unsupported chunk: ANMF\r\n[webp @ 000001545c469fc0] image data not found\r\n[vist#0:0/webp @ 000001545c4432c0] [dec:webp @ 000001545c44c440] Decoding error: Invalid data found when processing input\r\n[vist#0:0/webp @ 000001545c4432c0] [dec:webp @ 000001545c44c440] Decode error rate 1 exceeds maximum 0.666667\r\n[vist#0:0/webp @ 000001545c4432c0] [dec:webp @ 000001545c44c440] Task finished with error code: -1145393733 (Error number -1145393733 occurred)\r\n[vist#0:0/webp @ 000001545c4432c0] [dec:webp @ 000001545c44c440] Terminating thread with return code -1145393733 (Error number -1145393733 occurred)\r\nCannot determine format of input 0:0 after EOF\r\n[vf#0:0 @ 000001545c44ac80] Task finished with error code: -1094995529 (Invalid data found when processing input)\r\n[vf#0:0 @ 000001545c44ac80] Terminating thread with return code -1094995529 (Invalid data found when processing input)\r\n[vost#0:0/png @ 000001545c448c00] Could not open encoder before EOF\r\n[vost#0:0/png @ 000001545c448c00] Task finished with error code: -22 (Invalid argument)\r\n[vost#0:0/png @ 000001545c448c00] Terminating thread with return code -22 (Invalid argument)\r\n[out#0/image2 @ 000001545c467e40] Nothing was written into output file, because at least one of its streams received no packets.\r\nframe= 0 fps=0.0 q=0.0 Lsize= 0KiB time=N/A bitrate=N/A speed=N/A \r\nConversion failed!\r\n\r\nERROR: Postprocessing: Conversion failed!\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\J\\miniconda3\\Lib\\site-packages\\yt_dlp\\YoutubeDL.py\", line 3556, in process_info\r\n replace_info_dict(self.post_process(dl_filename, info_dict, files_to_move))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\J\\miniconda3\\Lib\\site-packages\\yt_dlp\\YoutubeDL.py\", line 3740, in post_process\r\n info = self.run_all_pps('post_process', info, additional_pps=info.get('__postprocessors'))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\J\\miniconda3\\Lib\\site-packages\\yt_dlp\\YoutubeDL.py\", line 3722, in run_all_pps\r\n info = self.run_pp(pp, info)\r\n ^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\J\\miniconda3\\Lib\\site-packages\\yt_dlp\\YoutubeDL.py\", line 3700, in run_pp\r\n files_to_delete, infodict = pp.run(infodict)\r\n ^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\J\\miniconda3\\Lib\\site-packages\\yt_dlp\\postprocessor\\common.py\", line 22, in run\r\n ret = func(self, info, *args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\J\\miniconda3\\Lib\\site-packages\\yt_dlp\\postprocessor\\common.py\", line 127, in wrapper\r\n return func(self, info)\r\n ^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\J\\miniconda3\\Lib\\site-packages\\yt_dlp\\postprocessor\\embedthumbnail.py\", line 84, in run\r\n thumbnail_filename = convertor.convert_thumbnail(thumbnail_filename, 'png')\r\n 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\J\\miniconda3\\Lib\\site-packages\\yt_dlp\\postprocessor\\ffmpeg.py\", line 1107, in convert_thumbnail\r\n self.real_run_ffmpeg(\r\n File \"C:\\Users\\J\\miniconda3\\Lib\\site-packages\\yt_dlp\\postprocessor\\ffmpeg.py\", line 367, in real_run_ffmpeg\r\n raise FFmpegPostProcessorError(stderr.strip().splitlines()[-1])\r\nyt_dlp.postprocessor.ffmpeg.FFmpegPostProcessorError: Conversion failed!\r\n\r\n[ERROR] Failed to process URL: https://www.tiktok.com/@cooperspamsasf/video/7432045283686632710 \r\n[debug] Command-line config: ['https://www.tiktok.com/@bris.main/video/7439516415444536606', '--download-archive', 'F:\\\\yt-dlp tiktok likes\\\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']\r\n[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8\r\n[debug] yt-dlp version stable@2024.11.18 from yt-dlp/yt-dlp [7ea278792] (pip)\r\n[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)\r\n[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev\r\n[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1\r\n[debug] Proxy map: {}\r\n[debug] Extracting cookies from: \"C:\\Users\\J\\AppData\\Roaming\\Mozilla\\Firefox\\Profiles\\c2ty66d6.default-release\\cookies.sqlite\"\r\n[debug] Request Handlers: urllib, requests, websockets, curl_cffi\r\n[debug] Loaded 1837 extractors\r\n[debug] Loading archive file 'F:\\\\yt-dlp tiktok likes\\\\archive.txt'\r\n[debug] Command-line config: ['https://www.tiktok.com/@leilaaaaaaaaa34/video/7430073853495299350', '--download-archive', 'F:\\\\yt-dlp tiktok likes\\\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']\r\n[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8\r\n[debug] yt-dlp version stable@2024.11.18 from yt-dlp/yt-dlp [7ea278792] (pip)\r\n[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)\r\n[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev\r\n[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1\r\n[debug] Proxy map: {}\r\n[debug] Extracting cookies from: \"C:\\Users\\J\\AppData\\Roaming\\Mozilla\\Firefox\\Profiles\\c2ty66d6.default-release\\cookies.sqlite\"\r\n[debug] Request Handlers: urllib, requests, websockets, curl_cffi\r\n[debug] Loaded 1837 extractors\r\n[debug] Loading archive file 'F:\\\\yt-dlp tiktok likes\\\\archive.txt'\r\n[debug] Command-line config: ['https://www.tiktok.com/@erindottie/video/7428505324375559457', '--download-archive', 'F:\\\\yt-dlp tiktok likes\\\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']\r\n[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8\r\n[debug] yt-dlp version stable@2024.11.18 from yt-dlp/yt-dlp [7ea278792] (pip)\r\n[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)\r\n[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), 
ffprobe 7.0.2-full_build-www.gyan.dev\r\n[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1\r\n[debug] Proxy map: {}\r\n[debug] Extracting cookies from: \"C:\\Users\\J\\AppData\\Roaming\\Mozilla\\Firefox\\Profiles\\c2ty66d6.default-release\\cookies.sqlite\"\r\n[debug] Request Handlers: urllib, requests, websockets, curl_cffi\r\n[debug] Loaded 1837 extractors\r\n[debug] Loading archive file 'F:\\\\yt-dlp tiktok likes\\\\archive.txt'\r\n[debug] Command-line config: ['https://www.tiktok.com/@user415387491623/video/7434688554627910968', '--download-archive', 'F:\\\\yt-dlp tiktok likes\\\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']\r\n[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8\r\n[debug] yt-dlp version stable@2024.11.18 from yt-dlp/yt-dlp [7ea278792] (pip)\r\n[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)\r\n[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev\r\n[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1\r\n[debug] Proxy map: {}\r\n[debug] Extracting cookies from: \"C:\\Users\\J\\AppData\\Roaming\\Mozilla\\Firefox\\Profiles\\c2ty66d6.default-release\\cookies.sqlite\"\r\n[debug] Request Handlers: urllib, requests, websockets, curl_cffi\r\n[debug] Loaded 1837 extractors\r\n[debug] Loading archive file 'F:\\\\yt-dlp tiktok likes\\\\archive.txt'\r\n[debug] Command-line config: ['https://www.tiktok.com/@elsa.vikstrom/video/7431528033044942102', '--download-archive', 'F:\\\\yt-dlp tiktok likes\\\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']\r\n[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8\r\n[debug] yt-dlp version stable@2024.11.18 from yt-dlp/yt-dlp [7ea278792] (pip)\r\n[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)\r\n[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev\r\n[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1\r\n[debug] Proxy map: {}\r\n[debug] Extracting cookies from: \"C:\\Users\\J\\AppData\\Roaming\\Mozilla\\Firefox\\Profiles\\c2ty66d6.default-release\\cookies.sqlite\"\r\n[debug] Request Handlers: urllib, requests, websockets, curl_cffi\r\n[debug] Loaded 1837 extractors\r\n[debug] Loading archive file 'F:\\\\yt-dlp tiktok likes\\\\archive.txt'\r\n[debug] Command-line config: ['https://www.tiktok.com/@ellatomine2/video/7440197178603228449', '--download-archive', 'F:\\\\yt-dlp tiktok likes\\\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']\r\n[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8\r\n[debug] yt-dlp version stable@2024.11.18 from yt-dlp/yt-dlp [7ea278792] (pip)\r\n[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)\r\n[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 
7.0.2-full_build-www.gyan.dev\r\n[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1\r\n[debug] Proxy map: {}\r\n[debug] Extracting cookies from: \"C:\\Users\\J\\AppData\\Roaming\\Mozilla\\Firefox\\Profiles\\c2ty66d6.default-release\\cookies.sqlite\"\r\n[debug] Request Handlers: urllib, requests, websockets, curl_cffi\r\n[debug] Loaded 1837 extractors\r\n[debug] Loading archive file 'F:\\\\yt-dlp tiktok likes\\\\archive.txt'\r\n[debug] Command-line config: ['https://www.tiktok.com/@elena__blondie/video/7440396119076506912', '--download-archive', 'F:\\\\yt-dlp tiktok likes\\\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']\r\n[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8\r\n[debug] yt-dlp version stable@2024.11.18 from yt-dlp/yt-dlp [7ea278792] (pip)\r\n[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)\r\n[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev\r\n[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1\r\n[debug] Proxy map: {}\r\n[debug] Extracting cookies from: \"C:\\Users\\J\\AppData\\Roaming\\Mozilla\\Firefox\\Profiles\\c2ty66d6.default-release\\cookies.sqlite\"\r\n[debug] Request Handlers: urllib, requests, websockets, curl_cffi\r\n[debug] Loaded 1837 extractors\r\n[debug] Loading archive file 'F:\\\\yt-dlp tiktok likes\\\\archive.txt'\r\n[debug] Command-line config: ['https://www.tiktok.com/@johaanssson/video/7440864222747086112', '--download-archive', 'F:\\\\yt-dlp tiktok likes\\\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']\r\n[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8\r\n[debug] yt-dlp version stable@2024.11.18 from yt-dlp/yt-dlp [7ea278792] (pip)\r\n[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)\r\n[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev\r\n[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1\r\n[debug] Proxy map: {}\r\n[debug] Extracting cookies from: \"C:\\Users\\J\\AppData\\Roaming\\Mozilla\\Firefox\\Profiles\\c2ty66d6.default-release\\cookies.sqlite\"\r\n[debug] Request Handlers: urllib, requests, websockets, curl_cffi\r\n[debug] Loaded 1837 extractors\r\n[debug] Loading archive file 'F:\\\\yt-dlp tiktok likes\\\\archive.txt'\r\n[debug] [TikTok] Found universal data for rehydration\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec, channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[debug] Default format spec: bestvideo*+bestaudio/best\r\n[debug] Invoking http downloader on 
\"https://v19-webapp-prime.tiktok.com/video/tos/useast2a/tos-useast2a-ve-0068-euttp/ok6GJnAQE2q0AFfyAaPQIQDhK0KQBwD1EIcfR4/?a=1988&bti=ODszNWYuMDE6&ch=0&cr=3&dr=0&lr=all&cd=0%7C0%7C0%7C&cv=1&br=1346&bt=673&cs=2&ds=4&eid=256&ft=4fUEKMk88Zmo0bRLZb4jVHCurpWrKsd.&mime_type=video_mp4&qs=15&rc=ZDVnOzplaWlpZzdmNmdpOUBpM3ZuM3Q5cndudzMzZjczM0AxY2A0LzZjNTMxLTAwY2JfYSNgLTBoMmQ0MS5gLS1kMWNzcw%3D%3D&btag=e00088000&expire=1732609546&l=202411260225363935CAF2808D524710A5&ply_type=2&policy=2&signature=c15a759aebb22c7a55843e0c19030be4&tk=tt_chain_token\"\r\n[debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -f image2 -pattern_type none -i \"file:Ghettooo #fyp #viral #trend [7440864222747086112].webp\" -update 1 -movflags +faststart \"file:Ghettooo #fyp #viral #trend [7440864222747086112].png\"\r\n[debug] ffmpeg version 7.0.2-full_build-www.gyan.dev Copyright (c) 2000-2024 the FFmpeg developers\r\n built with gcc 13.2.0 (Rev5, Built by MSYS2 project)\r\n configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libaribb24 --enable-libaribcaption --enable-libdav1d --enable-libdavs2 --enable-libuavs3d --enable-libxevd --enable-libzvbi --enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxeve --enable-libxvid --enable-libaom --enable-libjxl --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-libharfbuzz --enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-dxva2 --enable-d3d11va --enable-d3d12va --enable-ffnvcodec --enable-libvpl --enable-nvdec --enable-nvenc --enable-vaapi --enable-libshaderc --enable-vulkan --enable-libplacebo --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libcodec2 --enable-libilbc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint\r\n libavutil 59. 8.100 / 59. 8.100\r\n libavcodec 61. 3.100 / 61. 3.100\r\n libavformat 61. 1.100 / 61. 1.100\r\n libavdevice 61. 1.100 / 61. 1.100\r\n libavfilter 10. 1.100 / 10. 1.100\r\n libswscale 8. 1.100 / 8. 1.100\r\n libswresample 5. 1.100 / 5. 1.100\r\n libpostproc 58. 1.100 / 58. 
1.100\r\n[webp @ 000001c6cbb512c0] skipping unsupported chunk: ANIM\r\n[webp @ 000001c6cbb512c0] skipping unsupported chunk: ANMF\r\n[webp @ 000001c6cbb512c0] skipping unsupported chunk: ANMF\r\n[webp @ 000001c6cbb512c0] skipping unsupported chunk: ANMF\r\n[webp @ 000001c6cbb512c0] skipping unsupported chunk: ANMF\r\n[webp @ 000001c6cbb512c0] skipping unsupported chunk: ANMF\r\n[webp @ 000001c6cbb512c0] skipping unsupported chunk: ANMF\r\n[webp @ 000001c6cbb512c0] skipping unsupported chunk: ANMF\r\n[webp @ 000001c6cbb512c0] skipping unsupported chunk: ANMF\r\n[webp @ 000001c6cbb512c0] skipping unsupported chunk: ANMF\r\n[webp @ 000001c6cbb512c0] image data not found\r\n[image2 @ 000001c6cbb569c0] Could not find codec parameters for stream 0 (Video: webp, none): unspecified size\r\nConsider increasing the value for the 'analyzeduration' (0) and 'probesize' (5000000) options\r\nInput #0, image2, from 'file:Ghettooo #fyp #viral #trend [7440864222747086112].webp':\r\n Duration: 00:00:00.04, start: 0.000000, bitrate: N/A\r\n Stream #0:0: Video: webp, none, 25 fps, 25 tbr, 25 tbn\r\nStream mapping:\r\n Stream #0:0 -> #0:0 (webp (native) -> png (native))\r\nPress [q] to stop, [?] for help\r\n[webp @ 000001c6cbb61cc0] skipping unsupported chunk: ANIM\r\n[webp @ 000001c6cbb61cc0] skipping unsupported chunk: ANMF\r\n[webp @ 000001c6cbb61cc0] skipping unsupported chunk: ANMF\r\n[webp @ 000001c6cbb61cc0] skipping unsupported chunk: ANMF\r\n[webp @ 000001c6cbb61cc0] skipping unsupported chunk: ANMF\r\n[webp @ 000001c6cbb61cc0] skipping unsupported chunk: ANMF\r\n[webp @ 000001c6cbb61cc0] skipping unsupported chunk: ANMF\r\n[webp @ 000001c6cbb61cc0] skipping unsupported chunk: ANMF\r\n[webp @ 000001c6cbb61cc0] skipping unsupported chunk: ANMF\r\n[webp @ 000001c6cbb61cc0] skipping unsupported chunk: ANMF\r\n[webp @ 000001c6cbb61cc0] image data not found\r\n[vist#0:0/webp @ 000001c6cbb512c0] [dec:webp @ 000001c6cbb59e00] Decoding error: Invalid data found when processing input\r\n[vist#0:0/webp @ 000001c6cbb512c0] [dec:webp @ 000001c6cbb59e00] Decode error rate 1 exceeds maximum 0.666667\r\n[vist#0:0/webp @ 000001c6cbb512c0] [dec:webp @ 000001c6cbb59e00] Task finished with error code: -1145393733 (Error number -1145393733 occurred)\r\n[vist#0:0/webp @ 000001c6cbb512c0] [dec:webp @ 000001c6cbb59e00] Terminating thread with return code -1145393733 (Error number -1145393733 occurred)\r\nCannot determine format of input 0:0 after EOF\r\n[vf#0:0 @ 000001c6cbb53140] Task finished with error code: -1094995529 (Invalid data found when processing input)\r\n[vf#0:0 @ 000001c6cbb53140] Terminating thread with return code -1094995529 (Invalid data found when processing input)\r\n[vost#0:0/png @ 000001c6cbb6f7c0] Could not open encoder before EOF\r\n[vost#0:0/png @ 000001c6cbb6f7c0] Task finished with error code: -22 (Invalid argument)\r\n[vost#0:0/png @ 000001c6cbb6f7c0] Terminating thread with return code -22 (Invalid argument)\r\n[out#0/image2 @ 000001c6cbb6ef40] Nothing was written into output file, because at least one of its streams received no packets.\r\nframe= 0 fps=0.0 q=0.0 Lsize= 0KiB time=N/A bitrate=N/A speed=N/A \r\nConversion failed!\r\n\r\nERROR: Postprocessing: Conversion failed!\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\J\\miniconda3\\Lib\\site-packages\\yt_dlp\\YoutubeDL.py\", line 3556, in process_info\r\n replace_info_dict(self.post_process(dl_filename, info_dict, files_to_move))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File 
\"C:\\Users\\J\\miniconda3\\Lib\\site-packages\\yt_dlp\\YoutubeDL.py\", line 3740, in post_process\r\n info = self.run_all_pps('post_process', info, additional_pps=info.get('__postprocessors'))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\J\\miniconda3\\Lib\\site-packages\\yt_dlp\\YoutubeDL.py\", line 3722, in run_all_pps\r\n info = self.run_pp(pp, info)\r\n ^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\J\\miniconda3\\Lib\\site-packages\\yt_dlp\\YoutubeDL.py\", line 3700, in run_pp\r\n files_to_delete, infodict = pp.run(infodict)\r\n ^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\J\\miniconda3\\Lib\\site-packages\\yt_dlp\\postprocessor\\common.py\", line 22, in run\r\n ret = func(self, info, *args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\J\\miniconda3\\Lib\\site-packages\\yt_dlp\\postprocessor\\common.py\", line 127, in wrapper\r\n return func(self, info)\r\n ^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\J\\miniconda3\\Lib\\site-packages\\yt_dlp\\postprocessor\\embedthumbnail.py\", line 84, in run\r\n thumbnail_filename = convertor.convert_thumbnail(thumbnail_filename, 'png')\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\J\\miniconda3\\Lib\\site-packages\\yt_dlp\\postprocessor\\ffmpeg.py\", line 1107, in convert_thumbnail\r\n self.real_run_ffmpeg(\r\n File \"C:\\Users\\J\\miniconda3\\Lib\\site-packages\\yt_dlp\\postprocessor\\ffmpeg.py\", line 367, in real_run_ffmpeg\r\n raise FFmpegPostProcessorError(stderr.strip().splitlines()[-1])\r\nyt_dlp.postprocessor.ffmpeg.FFmpegPostProcessorError: Conversion failed!\r\n\r\n[ERROR] Failed to process URL: https://www.tiktok.com/@johaanssson/video/7440864222747086112 \r\n[debug] Command-line config: ['https://www.tiktok.com/@filippasekesan0/video/7440543183844560150', '--download-archive', 'F:\\\\yt-dlp tiktok likes\\\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']\r\n[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8\r\n[debug] yt-dlp version stable@2024.11.18 from yt-dlp/yt-dlp [7ea278792] (pip)\r\n[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)\r\n[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev\r\n[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1\r\n[debug] Proxy map: {}\r\n[debug] Extracting cookies from: \"C:\\Users\\J\\AppData\\Roaming\\Mozilla\\Firefox\\Profiles\\c2ty66d6.default-release\\cookies.sqlite\"\r\n[debug] Request Handlers: urllib, requests, websockets, curl_cffi\r\n[debug] Loaded 1837 extractors\r\n[debug] Loading archive file 'F:\\\\yt-dlp tiktok likes\\\\archive.txt'\r\n[debug] Command-line config: ['https://www.tiktok.com/@elana.maguire15/video/7439872632234708257', '--download-archive', 'F:\\\\yt-dlp tiktok likes\\\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']\r\n[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8\r\n[debug] yt-dlp version stable@2024.11.18 from yt-dlp/yt-dlp [7ea278792] (pip)\r\n[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)\r\n[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), 
ffprobe 7.0.2-full_build-www.gyan.dev\r\n[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1\r\n[debug] Proxy map: {}\r\n[debug] Extracting cookies from: \"C:\\Users\\J\\AppData\\Roaming\\Mozilla\\Firefox\\Profiles\\c2ty66d6.default-release\\cookies.sqlite\"\r\n[debug] Request Handlers: urllib, requests, websockets, curl_cffi\r\n[debug] Loaded 1837 extractors\r\n[debug] Loading archive file 'F:\\\\yt-dlp tiktok likes\\\\archive.txt'\r\n[debug] Command-line config: ['https://www.tiktok.com/@smostervik/video/7434809831665503520', '--download-archive', 'F:\\\\yt-dlp tiktok likes\\\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']\r\n[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8\r\n[debug] yt-dlp version stable@2024.11.18 from yt-dlp/yt-dlp [7ea278792] (pip)\r\n[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)\r\n[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev\r\n[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1\r\n[debug] Proxy map: {}\r\n[debug] Extracting cookies from: \"C:\\Users\\J\\AppData\\Roaming\\Mozilla\\Firefox\\Profiles\\c2ty66d6.default-release\\cookies.sqlite\"\r\n[debug] Request Handlers: urllib, requests, websockets, curl_cffi\r\n[debug] Loaded 1837 extractors\r\n[debug] Loading archive file 'F:\\\\yt-dlp tiktok likes\\\\archive.txt'\r\n[debug] Command-line config: ['https://www.tiktok.com/@bille.135/video/7439449253501603104', '--download-archive', 'F:\\\\yt-dlp tiktok likes\\\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']\r\n[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8\r\n[debug] yt-dlp version stable@2024.11.18 from yt-dlp/yt-dlp [7ea278792] (pip)\r\n[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)\r\n[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev\r\n[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1\r\n[debug] Proxy map: {}\r\n[debug] Extracting cookies from: \"C:\\Users\\J\\AppData\\Roaming\\Mozilla\\Firefox\\Profiles\\c2ty66d6.default-release\\cookies.sqlite\"\r\n[debug] Request Handlers: urllib, requests, websockets, curl_cffi\r\n[debug] Loaded 1837 extractors\r\n[debug] Loading archive file 'F:\\\\yt-dlp tiktok likes\\\\archive.txt'\r\n[debug] Command-line config: ['https://www.tiktok.com/@kristal.329/video/7435311238092950815', '--download-archive', 'F:\\\\yt-dlp tiktok likes\\\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']\r\n[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8\r\n[debug] yt-dlp version stable@2024.11.18 from yt-dlp/yt-dlp [7ea278792] (pip)\r\n[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)\r\n[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 
7.0.2-full_build-www.gyan.dev\r\n[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1\r\n[debug] Proxy map: {}\r\n[debug] Extracting cookies from: \"C:\\Users\\J\\AppData\\Roaming\\Mozilla\\Firefox\\Profiles\\c2ty66d6.default-release\\cookies.sqlite\"\r\n[debug] Request Handlers: urllib, requests, websockets, curl_cffi\r\n[debug] Loaded 1837 extractors\r\n[debug] Loading archive file 'F:\\\\yt-dlp tiktok likes\\\\archive.txt'\r\n[debug] Command-line config: ['https://www.tiktok.com/@johanna_nordstrand/video/7440174704758983969', '--download-archive', 'F:\\\\yt-dlp tiktok likes\\\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']\r\n[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8\r\n[debug] yt-dlp version stable@2024.11.18 from yt-dlp/yt-dlp [7ea278792] (pip)\r\n[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)\r\n[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev\r\n[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1\r\n[debug] Proxy map: {}\r\n[debug] Extracting cookies from: \"C:\\Users\\J\\AppData\\Roaming\\Mozilla\\Firefox\\Profiles\\c2ty66d6.default-release\\cookies.sqlite\"\r\n[debug] Request Handlers: urllib, requests, websockets, curl_cffi\r\n[debug] Loaded 1837 extractors\r\n[debug] Loading archive file 'F:\\\\yt-dlp tiktok likes\\\\archive.txt'\r\n[debug] Command-line config: ['https://www.tiktok.com/@cassidyannpayne/video/7440590041866456362', '--download-archive', 'F:\\\\yt-dlp tiktok likes\\\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']\r\n[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8\r\n[debug] yt-dlp version stable@2024.11.18 from yt-dlp/yt-dlp [7ea278792] (pip)\r\n[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)\r\n[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev\r\n[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1\r\n[debug] Proxy map: {}\r\n[debug] Extracting cookies from: \"C:\\Users\\J\\AppData\\Roaming\\Mozilla\\Firefox\\Profiles\\c2ty66d6.default-release\\cookies.sqlite\"\r\n[debug] Request Handlers: urllib, requests, websockets, curl_cffi\r\n[debug] Loaded 1837 extractors\r\n[debug] Loading archive file 'F:\\\\yt-dlp tiktok likes\\\\archive.txt'\r\n[debug] Command-line config: ['https://www.tiktok.com/@backup_josefinelykk/video/7440092940057267488', '--download-archive', 'F:\\\\yt-dlp tiktok likes\\\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']\r\n[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8\r\n[debug] yt-dlp version stable@2024.11.18 from yt-dlp/yt-dlp [7ea278792] (pip)\r\n[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)\r\n[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), 
ffprobe 7.0.2-full_build-www.gyan.dev\r\n[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1\r\n[debug] Proxy map: {}\r\n[debug] Extracting cookies from: \"C:\\Users\\J\\AppData\\Roaming\\Mozilla\\Firefox\\Profiles\\c2ty66d6.default-release\\cookies.sqlite\"\r\n[debug] Request Handlers: urllib, requests, websockets, curl_cffi\r\n[debug] Loaded 1837 extractors\r\n[debug] Loading archive file 'F:\\\\yt-dlp tiktok likes\\\\archive.txt'\r\n[debug] Command-line config: ['https://www.tiktok.com/@elina.pp3/video/7439466484176391456', '--download-archive', 'F:\\\\yt-dlp tiktok likes\\\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']\n```\n", "pr_html_url": "https://github.com/yt-dlp/yt-dlp/pull/11645", "file_loc": {"base_commit": "4b5eec0aaa7c02627f27a386591b735b90e681a8", "files": [{"path": "yt_dlp/extractor/tiktok.py", "status": "modified", "Loc": {"('TikTokBaseIE', '_parse_aweme_video_app', 322)": {"mod": [416, 417, 418, 419, 420, 421, 422, 423, 470]}, "('TikTokBaseIE', '_parse_aweme_video_web', 567)": {"mod": [603, 604, 605, 606, 607]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["yt_dlp/extractor/tiktok.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "a40b0070c2a00d3ed839897462171a82323aa875", "iss_has_pr": 1, "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/9003", "iss_label": "site-enhancement", "title": "[linkedin] yt-dlp see no subtitles but they exist (webvtt)", "body": "### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE\n\n- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\\* field\n\n### Checklist\n\n- [X] I'm reporting that yt-dlp is broken on a **supported** site\n- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))\n- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details\n- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)\n- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required\n\n### Region\n\n_No response_\n\n### Provide a description that is worded well enough to be understood\n\nyt-dlp downloads video fine but tells \"no subtitles\" for the video which really has them (webvtt, could be downloaded manually).\r\nRelated to Linkedin only. 
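Returning to the TikTok record closed out above: the repeated `skipping unsupported chunk: ANIM`/`ANMF` lines followed by `image data not found` mean the downloaded `.webp` thumbnails are animated WebP files, which ffmpeg's still-image webp decoder cannot convert to PNG, hence `Conversion failed!`. A heuristic sketch for spotting such files before attempting a conversion (illustrative only; the linked PR instead adjusts thumbnail handling in `yt_dlp/extractor/tiktok.py`, per its file list):

```python
def is_animated_webp(path):
    """Heuristically detect an animated WebP from its container structure."""
    with open(path, 'rb') as f:
        head = f.read(64)
    # WebP is a RIFF container: 'RIFF' at offset 0, 'WEBP' at offset 8.
    if head[:4] != b'RIFF' or head[8:12] != b'WEBP':
        return False
    # Animated files carry an extended 'VP8X' chunk whose flags byte
    # (offset 20) has the animation bit (0x02) set, then ANIM/ANMF chunks.
    if head[12:16] == b'VP8X' and len(head) > 20 and head[20] & 0x02:
        return True
    return b'ANIM' in head
```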
Don't know if it's typical at this site / did not check with other LI videos.\r\nOS: Fedora Linux 39, x86_64\n\n### Provide verbose output that clearly demonstrates the problem\n\n- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)\n- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead\n- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below\n\n### Complete Verbose Output\n\n```shell\n$ yt-dlp-night -vU --list-subs https://www.linkedin.com/posts/the-mathworks_2_why-use-kalman-filters-activity-7150516916539805696-HSe3\r\n[debug] Command-line config: ['-vU', '--list-subs', 'https://www.linkedin.com/posts/the-mathworks_2_why-use-kalman-filters-activity-7150516916539805696-HSe3']\r\n[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8\r\n[debug] yt-dlp version nightly@2024.01.09.232723 from yt-dlp/yt-dlp-nightly-builds [95e82347b] (zip)\r\n[debug] Python 3.12.1 (CPython x86_64 64bit) - Linux-6.6.9-200.fc39.x86_64-x86_64-with-glibc2.38 (OpenSSL 3.1.1 30 May 2023, glibc 2.38)\r\n[debug] exe versions: ffmpeg 6.0.1 (setts), ffprobe 6.0.1\r\n[debug] Optional libraries: Cryptodome-3.19.0, brotli-1.1.0, certifi-2023.05.07, mutagen-1.46.0, requests-2.28.2, sqlite3-3.42.0, urllib3-1.26.18, websockets-11.0.3\r\n[debug] Proxy map: {}\r\n[debug] Request Handlers: urllib\r\n[debug] Loaded 1798 extractors\r\n[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest\r\nLatest version: nightly@2024.01.09.232723 from yt-dlp/yt-dlp-nightly-builds\r\nyt-dlp is up to date (nightly@2024.01.09.232723 from yt-dlp/yt-dlp-nightly-builds)\r\n[LinkedIn] Extracting URL: https://www.linkedin.com/posts/the-mathworks_2_why-use-kalman-filters-activity-7150516916539805696-HSe3\r\n[LinkedIn] 2: Downloading webpage\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n2 has no subtitles\n```\n", "pr_html_url": "https://github.com/yt-dlp/yt-dlp/pull/9056", "file_loc": {"base_commit": "a40b0070c2a00d3ed839897462171a82323aa875", "files": [{"path": "yt_dlp/extractor/linkedin.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [8, 14, 15], "mod": [6, 7, 10, 13]}, "('LinkedInIE', '_real_extract', 98)": {"add": [112], "mod": [102, 103, 104, 105, 107, 117, 118, 119, 121]}, "('LinkedInIE', None, 85)": {"mod": [86, 92, 93, 94]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["yt_dlp/extractor/linkedin.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "b965087396ddb2d40dfe5bc12391ee000945129d", "iss_has_pr": 1, "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/110", "iss_label": "PR-needed", "title": "zsh completions are not installed", "body": "## Checklist\r\n- [ ] I'm reporting a broken site support issue\r\n- [x] I've verified that I'm running yt-dlp version **2021.02.24**\r\n- [ ] I've checked that all provided URLs are alive and playable in a browser\r\n- [ ] I've checked that all URLs and arguments with special characters are properly quoted or escaped\r\n- [x] I've searched the bugtracker for similar bug reports including closed ones\r\n- [x] I've read bugs section in 
FAQ\r\n\r\n## Description\r\n\r\n`python setup.py build` skips the zsh completion file because it expects to find `_yt-dlp` but finds `yt-dlp.zsh` instead.\r\nOne possibility is to abandon installing completions via `setup.py` and have them installed via `make` instead, in which case it'd probably be a good idea to have the `yt-dlp` target either call `setup.py` or build the self-executing zip based on a flag.\r\nAnother possibility (which I haven't researched in depth yet) is to try prodding `setup.py` into accepting `yt-dlp.zsh` and renaming it.\r\nIn any case, it might be a good idea to have `setup.py` be as declarative as possible, following PEP-517/518.", "pr_html_url": "https://github.com/yt-dlp/yt-dlp/pull/114", "file_loc": {"base_commit": "b965087396ddb2d40dfe5bc12391ee000945129d", "files": [{"path": "Makefile", "status": "modified", "Loc": {"(None, None, None)": {"mod": [3, 7, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 64, 105, 108, 110, 113, 115, 118, 126, 139, 140, 141]}}}, {"path": "devscripts/bash-completion.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [11]}}}, {"path": "devscripts/fish-completion.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [13]}}}, {"path": "devscripts/zsh-completion.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [11]}}}, {"path": "setup.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [30, 31]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["devscripts/bash-completion.py", "devscripts/fish-completion.py", "setup.py", "devscripts/zsh-completion.py"], "doc": [], "test": [], "config": ["Makefile"], "asset": []}}, {"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "4f08e586553755ab61f64a5ef9b14780d91559a7", "iss_has_pr": 1, "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/4409", "iss_label": "site-bug", "title": "ERROR: 03354: An extractor error has occurred.", "body": "### Checklist\n\n- [X] I'm reporting a bug unrelated to a specific site\n- [X] I've verified that I'm running yt-dlp version **2022.07.18** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)\n- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details\n- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)\n- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n\n### Provide a description that is worded well enough to be understood\n\nThe command downloads a 26-episode series from Tubi. The entire download proceeds without issue, getting me all 26 episodes. But then it thinks there is a 27th episode, which is when I get the error:\r\n\r\n[download] Downloading video 27 of 27\r\n[debug] [TubiTv] Extracting URL: tubitv:03354\r\n[TubiTv] 03354: Downloading JSON metadata\r\nERROR: 03354: An extractor error has occurred. 
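For the completions problem described in this record: zsh only autoloads completion files named `_<command>`, so the generated `yt-dlp.zsh` has to be shipped as `_yt-dlp`. A minimal sketch of the renaming idea floated above, with illustrative paths (the merged PR instead reworked the Makefile, the `devscripts/*-completion.py` scripts, and `setup.py`, per its file list):

```python
import shutil

# Hypothetical build step: copy the generated completion to the file name
# that zsh's autoloader expects before handing it to setuptools.
shutil.copyfile('completions/yt-dlp.zsh', 'completions/_yt-dlp')

# setuptools data_files entry pointing at the renamed file
data_files = [
    ('share/zsh/site-functions', ['completions/_yt-dlp']),
]
```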
(caused by KeyError('url')); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U\r\n File \"/home/melissa/.local/lib/python3.8/site-packages/yt_dlp/extractor/common.py\", line 644, in extract\r\n ie_result = self._real_extract(url)\r\n File \"/home/melissa/.local/lib/python3.8/site-packages/yt_dlp/extractor/tubitv.py\", line 76, in _real_extract\r\n url = video_data['url']\r\nKeyError: 'url'\r\n\r\n[download] Finished downloading playlist: stargate-infinity\r\n\r\nI don't know if this is just some weird issue with the playlist data from Tubi, but I'm reporting it like the error text asked.\r\n\r\n[yt-dlp-vU.txt](https://github.com/yt-dlp/yt-dlp/files/9163183/yt-dlp-vU.txt)\r\n\n\n### Provide verbose output that clearly demonstrates the problem\n\n- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)\n- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below\n\n### Complete Verbose Output\n\n```shell\nThe output is over 100k lines, so instead of pasting it I attached it as a file.\n```\n", "pr_html_url": "https://github.com/yt-dlp/yt-dlp/pull/4416", "file_loc": {"base_commit": "4f08e586553755ab61f64a5ef9b14780d91559a7", "files": [{"path": "yt_dlp/extractor/tubitv.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [9]}, "('TubiTvShowIE', '_entries', 130)": {"add": [137]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["yt_dlp/extractor/tubitv.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "c459d45dd4d417fb80a52e1a04e607776a44baa4", "iss_has_pr": 1, "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/6029", "iss_label": "site-bug\npatch-available", "title": "Chilloutzone: Unable to extract video data", "body": "### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE\n\n- [X] I understand that I will be **blocked** if I remove or skip any mandatory\\* field\n\n### Checklist\n\n- [X] I'm reporting a broken site\n- [X] I've verified that I'm running yt-dlp version **2023.01.06** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)\n- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details\n- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)\n- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required\n\n### Region\n\nGermany\n\n### Provide a description that is worded well enough to be understood\n\nWhen trying to download a video from chilloutzone.net - e.g. 
https://www.chilloutzone.net/video/ordentlich-abgeschuettelt.html - the correct extractor is chosen, but then the error \"Unable to extract video data\" is thrown.\n\n### Provide verbose output that clearly demonstrates the problem\n\n- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)\n- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below\n\n### Complete Verbose Output\n\n```shell\n[debug] Command-line config: ['-vU', 'https://www.chilloutzone.net/video/ordentlich-abgeschuettelt.html']\r\n[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8\r\n[debug] yt-dlp version 2023.01.06 [6becd25] (win_exe)\r\n[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.19044-SP0 (OpenSSL 1.1.1k 25 Mar 2021)\r\n[debug] exe versions: ffmpeg 2022-10-13-git-9e8a327e68-full_build-www.gyan.dev (setts), ffprobe 2022-10-13-git-9e8a327e68-full_build-www.gyan.dev\r\n[debug] Optional libraries: Cryptodome-3.16.0, brotli-1.0.9, certifi-2022.12.07, mutagen-1.46.0, sqlite3-2.6.0, websockets-10.4\r\n[debug] Proxy map: {}\r\n[debug] Loaded 1760 extractors\r\n[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest\r\nLatest version: 2023.01.06, Current version: 2023.01.06\r\nyt-dlp is up to date (2023.01.06)\r\n[Chilloutzone] Extracting URL: https://www.chilloutzone.net/video/ordentlich-abgeschuettelt.html\r\n[Chilloutzone] ordentlich-abgeschuettelt: Downloading webpage\r\nERROR: [Chilloutzone] ordentlich-abgeschuettelt: Unable to extract video data; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U\r\n File \"yt_dlp\\extractor\\common.py\", line 680, in extract\r\n File \"yt_dlp\\extractor\\chilloutzone.py\", line 56, in _real_extract\r\n File \"yt_dlp\\extractor\\common.py\", line 1264, in _html_search_regex\r\n File \"yt_dlp\\extractor\\common.py\", line 1228, in _search_regex\n```\n", "pr_html_url": "https://github.com/yt-dlp/yt-dlp/pull/6445", "file_loc": {"base_commit": "c459d45dd4d417fb80a52e1a04e607776a44baa4", "files": [{"path": "yt_dlp/extractor/chilloutzone.py", "status": "modified", "Loc": {"('ChilloutzoneIE', None, 12)": {"add": [21, 33], "mod": [13, 15, 25, 32, 36, 37, 38, 40, 42, 43, 44, 45, 46]}, "('ChilloutzoneIE', '_real_extract', 50)": {"add": [54], "mod": [51, 52, 56, 57, 58, 59, 61, 62, 63, 64, 65, 66, 67, 69, 70, 71, 72, 73, 75, 76, 77, 79, 80, 81, 82, 84, 85, 91, 92]}, "(None, None, None)": {"mod": [1, 4, 5, 8]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["yt_dlp/extractor/chilloutzone.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "68be95bd0ca3f76aa63c9812935bd826b3a42e53", "iss_has_pr": 1, "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/6551", "iss_label": "good first issue\nsite-bug\npatch-available", "title": "[youku] HTML in error message", "body": "### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE\n\n- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\\* field\n\n### Checklist\n\n- [X] I'm reporting a bug unrelated to a specific site\n- [X] I've verified that I'm running yt-dlp version **2023.03.04** ([update 
instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)\n- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details\n- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)\n- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n\n### Provide a description that is worded well enough to be understood\n\nTest link (seems to be some paywall or \"member's area\"): \r\nhttps://v.youku.com/v_show/id_XNTg4NTg3MjI4MA==.html?spm=a1z3jc.11711052.0.0&isextonly=1\r\n\r\nNot a big deal, but thought this should be reported. \r\n\r\nOther regular links seem to be fine:\r\nhttps://v.youku.com/v_show/id_XNTA3MzUyMTUyMA==.html?spm=a2hja.14919748_WEBHOME_NEW.drawer15.d_zj1_3&s=efbfbd4a46efbfbd5975&scm=20140719.manual.19594.show_efbfbd4a46efbfbd5975\r\n\r\n\n\n### Provide verbose output that clearly demonstrates the problem\n\n- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)\n- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead\n- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below\n\n### Complete Verbose Output\n\n```shell\n./yt-dlp -vU \"https://v.youku.com/v_show/id_XNTg4NTg3MjI4MA==.html?spm=a1z3jc.11711052.0.0&isextonly=1\"\r\n[debug] Command-line config: ['-vU', 'https://v.youku.com/v_show/id_XNTg4NTg3MjI4MA==.html?spm=a1z3jc.11711052.0.0&isextonly=1']\r\n[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8\r\n[debug] yt-dlp version stable@2023.03.03 [934496428] (zip)\r\n[debug] Python 3.11.2 (CPython arm64 64bit) - macOS-13.2.1-arm64-arm-64bit (OpenSSL 1.1.1t 7 Feb 2023)\r\n[debug] exe versions: phantomjs 2.1.1\r\n[debug] Optional libraries: sqlite3-2.6.0\r\n[debug] Proxy map: {}\r\n[debug] Extractor Plugins: SamplePluginIE\r\n[debug] Post-Processor Plugins: SamplePluginPP\r\n[debug] Loaded 1845 extractors\r\n[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest\r\nAvailable version: stable@2023.03.04, Current version: stable@2023.03.03\r\nCurrent Build Hash: 5a6829509847cbe86cd5200e0e285f154c50416cf28a1b49b341ae1a030a98d6\r\n[debug] Downloading _update_spec from https://github.com/yt-dlp/yt-dlp/releases/latest/download/_update_spec\r\nUpdating to stable@2023.03.04 ...\r\n[debug] Downloading yt-dlp from https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp\r\n[debug] Downloading SHA2-256SUMS from https://github.com/yt-dlp/yt-dlp/releases/latest/download/SHA2-256SUMS\r\nUpdated yt-dlp to stable@2023.03.04\r\n[debug] Restarting: /opt/homebrew/Cellar/python@3.11/3.11.2_1/Frameworks/Python.framework/Versions/3.11/Resources/Python.app/Contents/MacOS/Python ./yt-dlp -vU 'https://v.youku.com/v_show/id_XNTg4NTg3MjI4MA==.html?spm=a1z3jc.11711052.0.0&isextonly=1'\r\n[debug] Command-line config: ['-vU', 'https://v.youku.com/v_show/id_XNTg4NTg3MjI4MA==.html?spm=a1z3jc.11711052.0.0&isextonly=1']\r\n[debug] Encodings: locale UTF-8, fs utf-8, pref 
UTF-8, out utf-8, error utf-8, screen utf-8\r\n[debug] yt-dlp version stable@2023.03.04 [392389b7d] (zip)\r\n[debug] Python 3.11.2 (CPython arm64 64bit) - macOS-13.2.1-arm64-arm-64bit (OpenSSL 1.1.1t 7 Feb 2023)\r\n[debug] exe versions: phantomjs 2.1.1\r\n[debug] Optional libraries: no_Cryptodome-None, sqlite3-2.6.0\r\n[debug] Proxy map: {}\r\n[debug] Extractor Plugins: SamplePluginIE\r\n[debug] Post-Processor Plugins: SamplePluginPP\r\n[debug] Loaded 1787 extractors\r\n[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest\r\nAvailable version: stable@2023.03.04, Current version: stable@2023.03.04\r\nCurrent Build Hash: 91cad9f121c1f6f0a81b747415c46ecba0ff331ed38cc6433040b4ac7b6e15ca\r\nyt-dlp is up to date (stable@2023.03.04)\r\n[youku] Extracting URL: https://v.youku.com/v_show/id_XNTg4NTg3MjI4MA==.html?spm=a1z3jc.11711052.0.0&isextonly=1\r\n[youku] XNTg4NTg3MjI4MA: Retrieving cna info\r\n[youku] XNTg4NTg3MjI4MA: Downloading JSON metadata\r\nERROR: [youku] XNTg4NTg3MjI4MA: Youku server reported error -2002: \u8be5\u89c6\u9891\u5df2\u7ecf\u52a0\u5bc6\uff0c\u8bf7<font color=\"#FF0000\">\u8f93\u5165\u5bc6\u7801</font>; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U\r\n File \"./yt-dlp/yt_dlp/extractor/common.py\", line 694, in extract\r\n ie_result = self._real_extract(url)\r\n ^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"./yt-dlp/yt_dlp/extractor/youku.py\", line 196, in _real_extract\r\n raise ExtractorError(msg)\n```\n", "pr_html_url": "https://github.com/yt-dlp/yt-dlp/pull/6690", "file_loc": {"base_commit": "68be95bd0ca3f76aa63c9812935bd826b3a42e53", "files": [{"path": "yt_dlp/extractor/youku.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [8]}, "('YoukuIE', None, 16)": {"add": [83], "mod": [29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70]}, "('YoukuIE', '_real_extract', 150)": {"mod": [195]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["yt_dlp/extractor/youku.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "8e6e3651727b0b85764857fc6329fe5e0a3f00de", "iss_has_pr": 1, "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/7520", "iss_label": "enhancement", "title": "ValueError: could not find firefox container \"XYZ\" in containers.json", "body": "### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE\n\n- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\\* field\n\n### Checklist\n\n- [X] I'm reporting a bug unrelated to a specific site\n- [X] I've verified that I'm running yt-dlp version **2023.06.22** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)\n- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details\n- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)\n- [X] I've searched [known 
issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n\n### Provide a description that is worded well enough to be understood\n\nFirefox container support seems to be broken in Linux\n\n### Provide verbose output that clearly demonstrates the problem\n\n- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)\n- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead\n- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below\n\n### Complete Verbose Output\n\n```shell\n[debug] Command-line config: ['--cookies-from-browser', 'firefox::Gmail at Home', '--verbose']\r\n[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8\r\n[debug] yt-dlp version nightly@2023.07.06.133255 [90db9a3c0] (linux_exe)\r\n[debug] Python 3.10.12 (CPython x86_64 64bit) - Linux-6.2.12-surface-x86_64-with-glibc2.36 (OpenSSL 3.1.1 30 May 2023, glibc 2.36)\r\n[debug] exe versions: ffmpeg 5.1.1 (setts), ffprobe 5.1.1, rtmpdump 2.4\r\n[debug] Optional libraries: Cryptodome-3.18.0, brotli-1.0.9, certifi-2023.05.07, mutagen-1.46.0, sqlite3-2.6.0, websockets-11.0.3\r\n[Cookies] Extracting cookies from firefox\r\n[debug] Extracting cookies from: \"/home/jay/.mozilla/firefox/xpzt0btw.default-release/cookies.sqlite\"\r\nTraceback (most recent call last):\r\n File \"yt_dlp/__main__.py\", line 17, in <module>\r\n File \"yt_dlp/__init__.py\", line 1008, in main\r\n File \"yt_dlp/__init__.py\", line 962, in _real_main\r\n File \"yt_dlp/YoutubeDL.py\", line 674, in __init__\r\n File \"yt_dlp/YoutubeDL.py\", line 3876, in print_debug_header\r\n File \"yt_dlp/YoutubeDL.py\", line 3920, in _setup_opener\r\n File \"yt_dlp/cookies.py\", line 106, in load_cookies\r\n File \"yt_dlp/cookies.py\", line 123, in extract_cookies_from_browser\r\n File \"yt_dlp/cookies.py\", line 163, in _extract_firefox_cookies\r\nValueError: could not find firefox container \"Gmail at Home\" in containers.json\r\n[11419] Failed to execute script '__main__' due to unhandled exception!\n```\n", "pr_html_url": "https://github.com/yt-dlp/yt-dlp/pull/9016", "file_loc": {"base_commit": "8e6e3651727b0b85764857fc6329fe5e0a3f00de", "files": [{"path": "yt_dlp/cookies.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [3]}, "(None, '_extract_firefox_cookies', 117)": {"mod": [125, 127, 129, 131]}, "(None, '_firefox_browser_dir', 185)": {"mod": [185, 187, 189, 190]}, "(None, '_extract_chrome_cookies', 249)": {"mod": [271]}, "(None, '_get_windows_v10_key', 945)": {"mod": [950]}, "(None, '_find_most_recently_used_file', 1052)": {"mod": [1052, 1054, 1056, 1061, 1062]}, "(None, '_is_path', 1075)": {"mod": [1076]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["yt_dlp/cookies.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "aebb4f4ba78ec7542416832e9dd5e47788cb12aa", "iss_has_pr": 1, "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/4649", "iss_label": "site-request", "title": "https://nos.nl/", "body": "### DO NOT REMOVE OR SKIP THE 
ISSUE TEMPLATE\n\n- [X] I understand that I will be **blocked** if I remove or skip any mandatory\\* field\n\n### Checklist\n\n- [X] I'm reporting a broken site\n- [X] I've verified that I'm running yt-dlp version **2022.08.08** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)\n- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details\n- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)\n- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required\n\n### Region\n\nNetherlands\n\n### Provide a description that is worded well enough to be understood\n\nhttps://nos.nl/\r\nVideos from this website (the Dutch BBC) don't work: Unsupported URL.\n\n### Provide verbose output that clearly demonstrates the problem\n\n- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)\n- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below\n\n### Complete Verbose Output\n\n```shell\n[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8\r\n[debug] yt-dlp version 2022.08.08 [3157158] (win32_exe)\r\n[debug] Python 3.8.10 (CPython 64bit) - Windows-10-10.0.19044-SP0\r\n[debug] Checking exe version: ffmpeg -bsfs\r\n[debug] Checking exe version: avconv -bsfs\r\n[debug] Checking exe version: ffprobe -bsfs\r\n[debug] Checking exe version: avprobe -bsfs\r\n[debug] exe versions: none\r\n[debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3\r\n[debug] Proxy map: {}\r\n[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest\r\nLatest version: 2022.08.08, Current version: 2022.08.08\r\nyt-dlp is up to date (2022.08.08)\r\n[debug] [generic] Extracting URL: https://nos.nl/nieuwsuur/artikel/2440353-verzakking-door-droogte-dreigt-tot-een-miljoen-kwetsbare-huizen\r\n[generic] 2440353-verzakking-door-droogte-dreigt-tot-een-miljoen-kwetsbare-huizen: Downloading webpage\r\nWARNING: [generic] Falling back on generic information extractor\r\n[generic] 2440353-verzakking-door-droogte-dreigt-tot-een-miljoen-kwetsbare-huizen: Extracting information\r\n[debug] Looking for Brightcove embeds\r\n[debug] Looking for embeds\r\nERROR: Unsupported URL: https://nos.nl/nieuwsuur/artikel/2440353-verzakking-door-droogte-dreigt-tot-een-miljoen-kwetsbare-huizen\r\nTraceback (most recent call last):\r\n File \"yt_dlp\\YoutubeDL.py\", line 1441, in wrapper\r\n File \"yt_dlp\\YoutubeDL.py\", line 1517, in __extract_info\r\n File \"yt_dlp\\extractor\\common.py\", line 666, in extract\r\n File \"yt_dlp\\extractor\\generic.py\", line 3077, in _real_extract\r\nyt_dlp.utils.UnsupportedError: Unsupported URL: 
https://nos.nl/nieuwsuur/artikel/2440353-verzakking-door-droogte-dreigt-tot-een-miljoen-kwetsbare-huizen\n```\n", "pr_html_url": "https://github.com/yt-dlp/yt-dlp/pull/4822", "file_loc": {"base_commit": "aebb4f4ba78ec7542416832e9dd5e47788cb12aa", "files": [{"path": "yt_dlp/extractor/_extractors.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [1182]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["yt_dlp/extractor/_extractors.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "2530b68d4476fe6cb4b25897b906cbb1774ca7c9", "iss_has_pr": 1, "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/5209", "iss_label": "site-request", "title": "Genius.com support request", "body": "### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE\n\n- [X] I understand that I will be **blocked** if I remove or skip any mandatory\\* field\n\n### Checklist\n\n- [X] I'm reporting a new site support request\n- [X] I've verified that I'm running yt-dlp version **2022.10.04** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)\n- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details\n- [X] I've checked that none of provided URLs [violate any copyrights](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge\n- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. 
DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and am willing to share it if required\n\n### Region\n\nUnited States\n\n### Example URLs\n\nhttps://genius.com/videos/Vince-staples-breaks-down-the-meaning-of-when-sparks-fly\r\nhttps://genius.com/videos/Breaking-down-drakes-certified-lover-boy-kanye-beef-way-2-sexy-cudi\n\n### Provide a description that is worded well enough to be understood\n\nyt-dlp can't extract audio or video from the site.\n\n### Provide verbose output that clearly demonstrates the problem\n\n- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)\n- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below\n\n### Complete Verbose Output\n\n```shell\n[debug] Command-line config: ['-vU', 'https://genius.com/videos/Vince-staples-breaks-down-the-meaning-of-when-sparks-fly']\r\n[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8\r\n[debug] yt-dlp version 2022.10.04 [4e0511f] (win32_exe)\r\n[debug] Python 3.8.10 (CPython 64bit) - Windows-10-10.0.19044-SP0\r\n[debug] Checking exe version: ffmpeg -bsfs\r\n[debug] Checking exe version: ffprobe -bsfs\r\n[debug] exe versions: ffmpeg 2022-08-13-git-c469c3c3b1-full_build-www.gyan.dev (setts), ffprobe 2022-08-13-git-c469c3c3b1-full_build-www.gyan.dev\r\n[debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.09.24, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3\r\n[debug] Proxy map: {}\r\n[debug] Loaded 1690 extractors\r\n[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest\r\nLatest version: 2022.10.04, Current version: 2022.10.04\r\nyt-dlp is up to date (2022.10.04)\r\n[debug] [generic] Extracting URL: https://genius.com/videos/Vince-staples-breaks-down-the-meaning-of-when-sparks-fly\r\n[generic] Vince-staples-breaks-down-the-meaning-of-when-sparks-fly: Downloading webpage\r\nWARNING: [generic] Falling back on generic information extractor\r\n[generic] Vince-staples-breaks-down-the-meaning-of-when-sparks-fly: Extracting information\r\n[debug] Looking for Brightcove embeds\r\n[debug] Looking for embeds\r\nERROR: Unsupported URL: https://genius.com/videos/Vince-staples-breaks-down-the-meaning-of-when-sparks-fly\r\nTraceback (most recent call last):\r\n File \"yt_dlp\\YoutubeDL.py\", line 1477, in wrapper\r\n File \"yt_dlp\\YoutubeDL.py\", line 1553, in __extract_info\r\n File \"yt_dlp\\extractor\\common.py\", line 672, in extract\r\n File \"yt_dlp\\extractor\\generic.py\", line 3062, in _real_extract\r\nyt_dlp.utils.UnsupportedError: Unsupported URL: https://genius.com/videos/Vince-staples-breaks-down-the-meaning-of-when-sparks-fly\n```\n", "pr_html_url": "https://github.com/yt-dlp/yt-dlp/pull/5221", "file_loc": {"base_commit": "2530b68d4476fe6cb4b25897b906cbb1774ca7c9", "files": [{"path": "yt_dlp/extractor/_extractors.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [631]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["yt_dlp/extractor/_extractors.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": 
"d1c4f6d4da75ac55cf573afe53b1e4a0f776a8f7", "iss_has_pr": 1, "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/982", "iss_label": "geo-blocked", "title": "[Broken] TF1.fr multi-language videos: no detection of other languages than French", "body": "<!--\r\n\r\n######################################################################\r\n WARNING!\r\n IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE\r\n######################################################################\r\n\r\n-->\r\n\r\n\r\n## Checklist\r\n\r\n<!--\r\nCarefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:\r\n- First of, make sure you are using the latest version of yt-dlp. Run `yt-dlp --version` and ensure your version is 2021.09.02. If it's not, see https://github.com/yt-dlp/yt-dlp on how to update. Issues with outdated version will be REJECTED.\r\n- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.\r\n- Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in https://github.com/yt-dlp/yt-dlp.\r\n- Search the bugtracker for similar issues: https://github.com/yt-dlp/yt-dlp. DO NOT post duplicates.\r\n- Finally, put x into all relevant boxes like this [x] (Dont forget to delete the empty space)\r\n-->\r\n\r\n- [x] I'm reporting a broken site support\r\n- [x] I've verified that I'm running yt-dlp version **2021.09.02**\r\n- [x] I've checked that all provided URLs are alive and playable in a browser\r\n- [x] I've checked that all URLs and arguments with special characters are properly quoted or escaped\r\n- [x] I've searched the bugtracker for similar issues including closed ones\r\n\r\n\r\n## Verbose log\r\n\r\n<!--\r\nProvide the complete verbose output of yt-dlp that clearly demonstrates the problem.\r\nAdd the `-v` flag to your command line you run yt-dlp with (`yt-dlp -v <your command line>`), copy the WHOLE output and insert it below. 
It should look similar to this:\r\n [debug] System config: []\r\n [debug] User config: []\r\n [debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKc']\r\n [debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251\r\n [debug] yt-dlp version 2021.09.02\r\n [debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2\r\n [debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4\r\n [debug] Proxy map: {}\r\n <more lines>\r\n-->\r\n\r\n```\r\nyt-dlp -v -F https://www.tf1.fr/tf1/get-the-gringo/videos/kill-the-gringo-86959372.html\r\n[debug] Command-line config: ['-v', '-F', 'https://www.tf1.fr/tf1/get-the-gringo/videos/kill-the-gringo-86959372.html']\r\n[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8\r\n[debug] yt-dlp version 2021.09.02 (zip)\r\n[debug] Python version 3.7.3 (CPython 64bit) - Linux-5.10.0-0.bpo.5-amd64-x86_64-with-debian-10.10\r\n[debug] exe versions: ffmpeg 4.1.6-1, ffprobe 4.1.6-1, phantomjs 2.1.1, rtmpdump 2.4\r\n[debug] Optional libraries: mutagen, sqlite\r\n[debug] Proxy map: {}\r\n[debug] [TF1] Extracting URL: https://www.tf1.fr/tf1/get-the-gringo/videos/kill-the-gringo-86959372.html\r\n[TF1] kill-the-gringo-86959372: Downloading JSON metadata\r\n[debug] [wat.tv] Extracting URL: wat:13802773\r\n[wat.tv] 13802773: Downloading JSON metadata\r\n[wat.tv] 13802773: Downloading MPD manifest\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, vcodec:vp9.2(10), acodec, filesize, fs_approx, tbr, vbr, abr, asr, proto, vext, aext, hasaud, source, id\r\n[info] Available formats for 13802773:\r\nID EXT RESOLUTION FPS | TBR PROTO | VCODEC VBR ACODEC ABR ASR MORE INFO\r\n--------------------- --- ---------- --- - ----- ----- - ----------- ----- --------- ---- ------- --------------------------\r\ndash-audio_fra=64000 m4a audio only | 64k dash | mp4a.40.2 64k 48000Hz [fr], DASH audio, m4a_dash\r\ndash-audio_fra=128000 m4a audio only | 128k dash | mp4a.40.2 128k 48000Hz [fr], DASH audio, m4a_dash\r\ndash-video=200033 mp4 416x234 25 | 200k dash | avc1.42C01E 200k DASH video, mp4_dash\r\ndash-video=400072 mp4 480x270 25 | 400k dash | avc1.42C01E 400k DASH video, mp4_dash\r\ndash-video=600100 mp4 640x360 25 | 600k dash | avc1.42C01E 600k DASH video, mp4_dash\r\ndash-video=1200222 mp4 1024x576 25 | 1200k dash | avc1.4D401F 1200k DASH video, mp4_dash\r\ndash-video=1700265 mp4 1024x576 25 | 1700k dash | avc1.4D401F 1700k DASH video, mp4_dash\r\ndash-video=2500406 mp4 1280x720 25 | 2500k dash | avc1.4D401F 2500k DASH video, mp4_dash\r\n\r\nyt-dlp -v -F \"https://das-q1-ssl.tf1.fr/2/USP-0x0/27/73/13802773/ssm/6a2603d6515db50912cfb89b775467977b553a21ca27ae131180d424ecab3a73.ism/13802773.mpd?e=1631733432&max_bitrate=2700000&st=uIejdE925vcQ9GqvWS9w_Q\"\r\n[debug] Command-line config: ['-v', '-F', 'https://das-q1-ssl.tf1.fr/2/USP-0x0/27/73/13802773/ssm/6a2603d6515db50912cfb89b775467977b553a21ca27ae131180d424ecab3a73.ism/13802773.mpd?e=1631733432&max_bitrate=2700000&st=uIejdE925vcQ9GqvWS9w_Q']\r\n[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8\r\n[debug] yt-dlp version 2021.09.02 (zip)\r\n[debug] Python version 3.7.3 (CPython 64bit) - Linux-5.10.0-0.bpo.5-amd64-x86_64-with-debian-10.10\r\n[debug] exe versions: ffmpeg 4.1.6-1, ffprobe 4.1.6-1, phantomjs 2.1.1, rtmpdump 2.4\r\n[debug] Optional libraries: mutagen, sqlite\r\n[debug] Proxy map: {}\r\n[debug] [generic] Extracting URL: 
https://das-q1-ssl.tf1.fr/2/USP-0x0/27/73/13802773/ssm/6a2603d6515db50912cfb89b775467977b553a21ca27ae131180d424ecab3a73.ism/13802773.mpd?e=1631733432&max_bitrate=2700000&st=uIejdE925vcQ9GqvWS9w_Q\r\n[generic] 13802773: Requesting header\r\nWARNING: [generic] Falling back on generic information extractor.\r\n[generic] 13802773: Downloading webpage\r\n[generic] 13802773: Extracting information\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, vcodec:vp9.2(10), acodec, filesize, fs_approx, tbr, vbr, abr, asr, proto, vext, aext, hasaud, source, id\r\n[info] Available formats for 13802773:\r\nID EXT RESOLUTION FPS | TBR PROTO | VCODEC VBR ACODEC ABR ASR MORE INFO\r\n---------------- --- ---------- --- - ----- ----- - ----------- ----- --------- ---- ------- --------------------------\r\naudio_eng=64000 m4a audio only | 64k dash | mp4a.40.2 64k 48000Hz [en], DASH audio, m4a_dash\r\naudio_fra=64000 m4a audio only | 64k dash | mp4a.40.2 64k 48000Hz [fr], DASH audio, m4a_dash\r\naudio_eng=128000 m4a audio only | 128k dash | mp4a.40.2 128k 48000Hz [en], DASH audio, m4a_dash\r\naudio_fra=128000 m4a audio only | 128k dash | mp4a.40.2 128k 48000Hz [fr], DASH audio, m4a_dash\r\nvideo=200033 mp4 416x234 25 | 200k dash | avc1.42C01E 200k DASH video, mp4_dash\r\nvideo=400072 mp4 480x270 25 | 400k dash | avc1.42C01E 400k DASH video, mp4_dash\r\nvideo=600100 mp4 640x360 25 | 600k dash | avc1.42C01E 600k DASH video, mp4_dash\r\nvideo=1200222 mp4 1024x576 25 | 1200k dash | avc1.4D401F 1200k DASH video, mp4_dash\r\nvideo=1700265 mp4 1024x576 25 | 1700k dash | avc1.4D401F 1700k DASH video, mp4_dash\r\nvideo=2500406 mp4 1280x720 25 | 2500k dash | avc1.4D401F 2500k DASH video, mp4_dash\r\n\r\n```\r\n<!--\r\nDo not remove the above ```\r\n-->\r\n\r\n\r\n## Description\r\n\r\n<!--\r\nProvide an explanation of your issue in an arbitrary form. Provide any additional information, suggested solution and as much context and examples as possible.\r\nIf work on your issue requires account credentials please provide them or explain how one can obtain them.\r\n-->\r\n\r\nOn TF1.fr some videos are available with original audio in addition to the French dub, the latter being the default audio when playing the video in a browser.\r\nHowever, yt-dlp does not currently detect any other audio streams than the default French, as shown in the first part of the log for the example video located at \"https://www.tf1.fr/tf1/get-the-gringo/videos/kill-the-gringo-86959372.html\" since all the currently detected audio streams are identified with \"audio_fra=XXX\" IDs whereas this video also has English audio.\r\n\r\nI suspect that the problem is related to the detection or analysis of the mpd manifest of the video since English streams can be detected (and downloaded) when using yt-dlp with the url of the mpd manifest. Indeed, using the browser inspector and especially the network analysis, I have identified that the mpd manifest of the example video is located at \"https://das-q1-ssl.tf1.fr/2/USP-0x0/27/73/13802773/ssm/6a2603d6515db50912cfb89b775467977b553a21ca27ae131180d424ecab3a73.ism/13802773.mpd?e=1631733432&max_bitrate=2700000&st=uIejdE925vcQ9GqvWS9w_Q\". As shown in the second part of the log, when using yt-dlp with the mpd manifest url, original audio streams (in English) are detected and identified with \"audio_eng=XXX\" IDs. 
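To make that diagnosis concrete, here is a minimal sketch that lists the audio languages a DASH MPD manifest advertises. It assumes the standard MPEG-DASH namespace and that `mimeType` is set on each AdaptationSet; the manifest URL is a placeholder for the tokenized .mpd link captured above:

```python
# Illustrative sketch: list the audio languages a DASH MPD manifest advertises.
# MPD_URL is a placeholder -- substitute the tokenized .mpd link from the log.
import urllib.request
import xml.etree.ElementTree as ET

MPD_URL = 'https://das-q1-ssl.tf1.fr/.../13802773.mpd'  # placeholder
DASH_NS = '{urn:mpeg:dash:schema:mpd:2011}'

root = ET.fromstring(urllib.request.urlopen(MPD_URL).read())
for adaptation_set in root.iter(DASH_NS + 'AdaptationSet'):
    if adaptation_set.get('mimeType', '').startswith('audio'):
        # For the example video this should print both 'fra' and 'eng',
        # matching the audio_fra=/audio_eng= format IDs in the second log.
        print(adaptation_set.get('lang'))
```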
Those English streams can then be downloaded without any problem when using the mpd url.\r\n\r\nAdditional information on TF1.fr:\r\n- an account may be needed in order to watch the videos with a browser, but creating this account only requires providing an email address and password. The account is not needed when using yt-dlp.\r\n- videos are usually geo-restricted to France; as a consequence, a proxy may be needed to work on the issue outside of France\r\n- many videos are time-limited (usually 7 days) when they are provided as part of the TV-catchup service, but the example video above (and others I could provide if needed) should not have a time limit (or a very long one).", "pr_html_url": "https://github.com/yt-dlp/yt-dlp/pull/3739", "file_loc": {"base_commit": "d1c4f6d4da75ac55cf573afe53b1e4a0f776a8f7", "files": [{"path": "yt_dlp/extractor/wat.py", "status": "modified", "Loc": {"('WatIE', '_real_extract', 47)": {"mod": [57]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["yt_dlp/extractor/wat.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "aa4b0545120becc11a5992384ce52c943da8ead5", "iss_has_pr": 1, "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/1945", "iss_label": "site-bug", "title": "SonyLIV Premium Content giving 406 ERROR", "body": "### Checklist\r\n\r\n- [X] I'm reporting a broken site\r\n- [X] I've verified that I'm running yt-dlp version **2021.12.01**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))\r\n- [X] I've checked that all provided URLs are alive and playable in a browser\r\n- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)\r\n- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. 
DO NOT post duplicates\r\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\r\n- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required\r\n\r\n### Region\r\n\r\n_India_\r\n\r\n### Description\r\n\r\nRequesting support for SonyLIV to download the latest episodes.\r\n- The content is a subscriber-only episode.\r\n- The content is non-DRM, I have verified.\r\n- I have passed cookies from my premium account using --cookies.\r\n\r\nRunning URL:\r\nhttps://www.sonyliv.com/shows/kaun-banega-crorepati-1700000195/fighting-all-odds-on-the-hot-seat-1000148334?watch=true\r\n\r\nThis gives a 406 error.\r\nI am running the latest version.\r\n\r\n\r\n### Verbose log\r\n\r\n```shell\r\n[debug] Command-line config: ['https://www.sonyliv.com/shows/kaun-banega-crorepati-1700000195/fighting-all-odds-on-the-hot-seat-1000148334?watch=true', '--cookies', 'sony-cookie.txt', '--verbose']\r\n[debug] Encodings: locale UTF-8, fs utf-8, out utf-8, err utf-8, pref UTF-8\r\n[debug] yt-dlp version 2021.12.01 [91f071af6] (zip)\r\n[debug] Python version 3.9.7 (CPython 64bit) - macOS-11.5.2-x86_64-i386-64bit\r\n[debug] exe versions: ffmpeg 4.4 (setts), ffprobe 4.4, rtmpdump 2.4\r\n[debug] Optional libraries: Cryptodome, mutagen, sqlite, websockets\r\n[debug] Proxy map: {}\r\n[debug] Using fake IP 117.195.44.37 (IN) as X-Forwarded-For\r\n[SonyLIV] Downloading JSON metadata\r\n[debug] [SonyLIV] Extracting URL: https://www.sonyliv.com/shows/kaun-banega-crorepati-1700000195/fighting-all-odds-on-the-hot-seat-1000148334?watch=true\r\n[SonyLIV] 1000148334: Downloading JSON metadata\r\nERROR: [SonyLIV] 1000148334: Unable to download JSON metadata: HTTP Error 406: Not Acceptable (caused by <HTTPError 406: 'Not Acceptable'>); please report this issue on https://github.com/yt-dlp/yt-dlp . Make sure you are using the latest version; type yt-dlp -U to update. Be sure to call yt-dlp with the --verbose flag and include its complete output. (caused by <HTTPError 406: 'Not Acceptable'>); please report this issue on https://github.com/yt-dlp/yt-dlp . Make sure you are using the latest version; type yt-dlp -U to update. 
Be sure to call yt-dlp with the --verbose flag and include its complete output.\r\n File \"/usr/local/bin/yt-dlp/yt_dlp/extractor/common.py\", line 715, in _request_webpage\r\n return self._downloader.urlopen(url_or_request)\r\n File \"/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py\", line 3385, in urlopen\r\n return self._opener.open(req, timeout=self._socket_timeout)\r\n File \"/usr/local/Cellar/python@3.9/3.9.7/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py\", line 523, in open\r\n response = meth(req, response)\r\n File \"/usr/local/Cellar/python@3.9/3.9.7/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py\", line 632, in http_response\r\n response = self.parent.error(\r\n File \"/usr/local/Cellar/python@3.9/3.9.7/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py\", line 561, in error\r\n return self._call_chain(*args)\r\n File \"/usr/local/Cellar/python@3.9/3.9.7/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py\", line 494, in _call_chain\r\n result = func(*args)\r\n File \"/usr/local/Cellar/python@3.9/3.9.7/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py\", line 641, in http_error_default\r\n raise HTTPError(req.full_url, code, msg, hdrs, fp)\r\n```\r\n", "pr_html_url": "https://github.com/yt-dlp/yt-dlp/pull/1959", "file_loc": {"base_commit": "aa4b0545120becc11a5992384ce52c943da8ead5", "files": [{"path": "yt_dlp/extractor/sonyliv.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [3]}, "('SonyLIVIE', '_call_api', 61)": {"add": [69], "mod": [62, 63, 64, 68]}, "('SonyLIVIE', None, 16)": {"mod": [59]}, "('SonyLIVIE', '_real_initialize', 78)": {"mod": [79]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2\u7531\u4e8e\u9879\u76ee\u4e0d\u5b8c\u5584\u5bfc\u81f4\u7684\u62a5\u9519", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["yt_dlp/extractor/sonyliv.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "3e35aa32c74bc108375be8c8b6b3bfc90dfff1b4", "iss_has_pr": 1, "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/9640", "iss_label": "site-request", "title": "Support NTS.live", "body": "### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE\n\n- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\\* field\n\n### Checklist\n\n- [X] I'm reporting a new site support request\n- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))\n- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details\n- [X] I've checked that none of provided URLs [violate any copyrights](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge\n- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. 
DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and am willing to share it if required\n\n### Region\n\n_No response_\n\n### Example URLs\n\n- Single embedded Soundcloud link: https://www.nts.live/shows/yu-su/episodes/yu-su-2nd-april-2024\r\n- Single embedded Mixcloud link: https://www.nts.live/shows/absolute-fiction/episodes/absolute-fiction-23rd-july-2022\n\n### Provide a description that is worded well enough to be understood\n\nnts.live is an internet radio site with curated music mixes. As far as I know, the mixes are all hosted on Soundcloud or Mixcloud, and the site simply embeds an instance of one of the latter players.\n\n### Provide verbose output that clearly demonstrates the problem\n\n- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)\n- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead\n- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below\n\n### Complete Verbose Output\n\n```shell\n[debug] Command-line config: ['-vU', 'https://www.nts.live/shows/yu-su/episodes/yu-su-2nd-april-2024']\r\n[debug] User config \"/home/<user>/.yt-dlp/config\": ['--no-mtime', '--merge-output-format', 'mp4/mkv']\r\n[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8\r\n[debug] yt-dlp version stable@2024.03.10 from yt-dlp/yt-dlp [615a84447] (pip)\r\n[debug] Python 3.10.12 (CPython x86_64 64bit) - Linux-<redacted>\r\n[debug] exe versions: ffmpeg 4.4.2 (setts), ffprobe 4.4.2, rtmpdump 2.4\r\n[debug] Optional libraries: Cryptodome-3.17, brotli-1.0.9, certifi-2022.12.07, mutagen-1.46.0, requests-2.31.0, sqlite3-3.37.2, urllib3-2.1.0, websockets-12.0\r\n[debug] Proxy map: {}\r\n[debug] Request Handlers: urllib, requests, websockets\r\n[debug] Loaded 1803 extractors\r\n[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest\r\nLatest version: stable@2024.03.10 from yt-dlp/yt-dlp\r\nyt-dlp is up to date (stable@2024.03.10 from yt-dlp/yt-dlp)\r\n[generic] Extracting URL: https://www.nts.live/shows/yu-su/episodes/yu-su-2nd-april-2024\r\n[generic] yu-su-2nd-april-2024: Downloading webpage\r\nWARNING: [generic] Falling back on generic information extractor\r\n[generic] yu-su-2nd-april-2024: Extracting information\r\n[debug] Looking for embeds\r\nERROR: Unsupported URL: https://www.nts.live/shows/yu-su/episodes/yu-su-2nd-april-2024\r\nTraceback (most recent call last):\r\n File \"/home/<user>/.local/pipx/venvs/yt-dlp/lib/python3.10/site-packages/yt_dlp/YoutubeDL.py\", line 1594, in wrapper\r\n return func(self, *args, **kwargs)\r\n File \"/home/<user>/.local/pipx/venvs/yt-dlp/lib/python3.10/site-packages/yt_dlp/YoutubeDL.py\", line 1729, in __extract_info\r\n ie_result = ie.extract(url)\r\n File \"/home/<user>/.local/pipx/venvs/yt-dlp/lib/python3.10/site-packages/yt_dlp/extractor/common.py\", line 732, in extract\r\n ie_result = self._real_extract(url)\r\n File \"/home/<user>/.local/pipx/venvs/yt-dlp/lib/python3.10/site-packages/yt_dlp/extractor/generic.py\", line 2530, in _real_extract\r\n raise UnsupportedError(url)\r\nyt_dlp.utils.UnsupportedError: Unsupported URL: https://www.nts.live/shows/yu-su/episodes/yu-su-2nd-april-2024\n```\n", "pr_html_url": 
"https://github.com/yt-dlp/yt-dlp/pull/9641", "file_loc": {"base_commit": "3e35aa32c74bc108375be8c8b6b3bfc90dfff1b4", "files": [{"path": "yt_dlp/extractor/_extractors.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [1334]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["yt_dlp/extractor/_extractors.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "comfyanonymous", "repo_name": "ComfyUI", "base_commit": "3cd7d84b53724a97c1436f70b6da6975e3d93484", "iss_has_pr": 1, "iss_html_url": "https://github.com/comfyanonymous/ComfyUI/issues/5627", "iss_label": "Potential Bug", "title": "Boolean value of Tensor with more than one value is ambiguous", "body": "### Expected Behavior\n\nGenerate image using Pulid with flux model\n\n### Actual Behavior\n\nStops generation. Few hours earlier everything was fine\n\n### Steps to Reproduce\n\n[Pulid_workglow_v1.json](https://github.com/user-attachments/files/17777510/Pulid_workglow_v1.json)\r\n\n\n### Debug Logs\n\n```powershell\n# ComfyUI Error Report\r\n## Error Details\r\n- **Node Type:** SamplerCustomAdvanced\r\n- **Exception Type:** RuntimeError\r\n- **Exception Message:** Boolean value of Tensor with more than one value is ambiguous\r\n## Stack Trace\r\n\r\n File \"/workspace/ComfyUI/execution.py\", line 323, in execute\r\n output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n\r\n File \"/workspace/ComfyUI/execution.py\", line 198, in get_output_data\r\n return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n\r\n File \"/workspace/ComfyUI/execution.py\", line 169, in _map_node_over_list\r\n process_inputs(input_dict, i)\r\n\r\n File \"/workspace/ComfyUI/execution.py\", line 158, in process_inputs\r\n results.append(getattr(obj, func)(**inputs))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n\r\n File \"/workspace/ComfyUI/comfy_extras/nodes_custom_sampler.py\", line 633, in sample\r\n samples = guider.sample(noise.generate_noise(latent), latent_image, sampler, sigmas, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise.seed)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n\r\n File \"/workspace/ComfyUI/comfy/samplers.py\", line 740, in sample\r\n output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n\r\n File \"/workspace/ComfyUI/comfy/samplers.py\", line 719, in inner_sample\r\n samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n\r\n File \"/workspace/ComfyUI/comfy/samplers.py\", line 624, in sample\r\n samples 
= self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n\r\n File \"/usr/local/lib/python3.11/dist-packages/torch/utils/_contextlib.py\", line 116, in decorate_context\r\n return func(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^\r\n\r\n File \"/workspace/ComfyUI/comfy/k_diffusion/sampling.py\", line 1058, in sample_deis\r\n denoised = model(x_cur, t_cur * s_in, **extra_args)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n\r\n File \"/workspace/ComfyUI/comfy/samplers.py\", line 299, in __call__\r\n out = self.inner_model(x, sigma, model_options=model_options, seed=seed)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n\r\n File \"/workspace/ComfyUI/comfy/samplers.py\", line 706, in __call__\r\n return self.predict_noise(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n\r\n File \"/workspace/ComfyUI/comfy/samplers.py\", line 709, in predict_noise\r\n return sampling_function(self.inner_model, x, timestep, self.conds.get(\"negative\", None), self.conds.get(\"positive\", None), self.cfg, model_options=model_options, seed=seed)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n\r\n File \"/workspace/ComfyUI/comfy/samplers.py\", line 279, in sampling_function\r\n out = calc_cond_batch(model, conds, x, timestep, model_options)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n\r\n File \"/workspace/ComfyUI/comfy/samplers.py\", line 228, in calc_cond_batch\r\n output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n\r\n File \"/workspace/ComfyUI/comfy/model_base.py\", line 144, in apply_model\r\n model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n\r\n File \"/usr/local/lib/python3.11/dist-packages/torch/nn/modules/module.py\", line 1553, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n\r\n File \"/usr/local/lib/python3.11/dist-packages/torch/nn/modules/module.py\", line 1562, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n\r\n File \"/workspace/ComfyUI/comfy/ldm/flux/model.py\", line 181, in forward\r\n out = self.forward_orig(img, img_ids, context, txt_ids, timestep, y, guidance, control)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n\r\n File \"/workspace/ComfyUI/custom_nodes/ComfyUI-PuLID-Flux-Enhanced/pulidflux.py\", line 113, in forward_orig\r\n if node_data['sigma_start'] >= timesteps >= node_data['sigma_end']:\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n\r\n```\r\n## System Information\r\n- **ComfyUI Version:** v0.2.7-21-g3b9a6cf\r\n- **Arguments:** main.py --listen 0.0.0.0 --port 3001\r\n- **OS:** posix\r\n- **Python Version:** 3.11.9 (main, Apr 6 2024, 17:59:24) [GCC 11.4.0]\r\n- **Embedded Python:** false\r\n- **PyTorch Version:** 2.4.0+cu121\r\n## Devices\r\n\r\n- **Name:** cuda:0 NVIDIA RTX A6000 : cudaMallocAsync\r\n - **Type:** cuda\r\n - **VRAM 
Total:** 51033931776\r\n - **VRAM Free:** 8504171346\r\n - **Torch VRAM Total:** 42177921024\r\n - **Torch VRAM Free:** 44522322\r\n\r\n## Logs\r\n```\r\n2024-11-15T15:10:07.506380 - [START] Security scan2024-11-15T15:10:07.506419 - \r\n2024-11-15T15:10:14.304713 - [DONE] Security scan2024-11-15T15:10:14.304749 - \r\n2024-11-15T15:10:14.666461 - ## ComfyUI-Manager: installing dependencies done.2024-11-15T15:10:14.666711 - \r\n2024-11-15T15:10:14.666906 - ** ComfyUI startup time:2024-11-15T15:10:14.667072 - 2024-11-15T15:10:14.667286 - 2024-11-15 15:10:14.6668072024-11-15T15:10:14.667467 - \r\n2024-11-15T15:10:14.667646 - ** Platform:2024-11-15T15:10:14.667827 - 2024-11-15T15:10:14.668015 - Linux2024-11-15T15:10:14.668182 - \r\n2024-11-15T15:10:14.668352 - ** Python version:2024-11-15T15:10:14.668498 - 2024-11-15T15:10:14.668676 - 3.11.9 (main, Apr 6 2024, 17:59:24) [GCC 11.4.0]2024-11-15T15:10:14.668833 - \r\n2024-11-15T15:10:14.668989 - ** Python executable:2024-11-15T15:10:14.669143 - 2024-11-15T15:10:14.669299 - /workspace/ComfyUI/venv/bin/python32024-11-15T15:10:14.669440 - \r\n2024-11-15T15:10:14.669604 - ** ComfyUI Path:2024-11-15T15:10:14.669753 - 2024-11-15T15:10:14.669908 - /workspace/ComfyUI2024-11-15T15:10:14.670052 - \r\n2024-11-15T15:10:14.670240 - ** Log path:2024-11-15T15:10:14.670409 - 2024-11-15T15:10:14.670553 - /workspace/ComfyUI/comfyui.log2024-11-15T15:10:14.670711 - \r\n2024-11-15T15:10:14.695455 - \r\nPrestartup times for custom nodes:2024-11-15T15:10:14.695654 - \r\n2024-11-15T15:10:14.695875 - 0.0 seconds:2024-11-15T15:10:14.696062 - 2024-11-15T15:10:14.696247 - /workspace/ComfyUI/custom_nodes/rgthree-comfy2024-11-15T15:10:14.696411 - \r\n2024-11-15T15:10:14.696598 - 7.2 seconds:2024-11-15T15:10:14.696753 - 2024-11-15T15:10:14.696918 - /workspace/ComfyUI/custom_nodes/ComfyUI-Manager2024-11-15T15:10:14.697090 - \r\n2024-11-15T15:10:14.697257 - \r\n2024-11-15T15:10:17.991611 - Total VRAM 48670 MB, total RAM 1031687 MB\r\n2024-11-15T15:10:17.991921 - pytorch version: 2.4.0+cu121\r\n2024-11-15T15:10:22.112364 - /usr/local/lib/python3.11/dist-packages/xformers/ops/fmha/flash.py:211: FutureWarning: `torch.library.impl_abstract` was renamed to `torch.library.register_fake`. Please use that instead; we will remove `torch.library.impl_abstract` in a future version of PyTorch.\r\n @torch.library.impl_abstract(\"xformers_flash::flash_fwd\")\r\n2024-11-15T15:10:22.789455 - /usr/local/lib/python3.11/dist-packages/xformers/ops/fmha/flash.py:344: FutureWarning: `torch.library.impl_abstract` was renamed to `torch.library.register_fake`. Please use that instead; we will remove `torch.library.impl_abstract` in a future version of PyTorch.\r\n @torch.library.impl_abstract(\"xformers_flash::flash_bwd\")\r\n2024-11-15T15:10:23.157733 - xformers version: 0.0.27.post2\r\n2024-11-15T15:10:23.158121 - Set vram state to: NORMAL_VRAM\r\n2024-11-15T15:10:23.158371 - Device: cuda:0 NVIDIA RTX A6000 : cudaMallocAsync\r\n2024-11-15T15:10:23.467250 - Using xformers cross attention\r\n2024-11-15T15:10:28.313956 - [Prompt Server] web root: /workspace/ComfyUI/web\r\n2024-11-15T15:10:30.360426 - /workspace/ComfyUI/venv/lib/python3.11/site-packages/kornia/feature/lightglue.py:44: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. 
Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.\r\n @torch.cuda.amp.custom_fwd(cast_inputs=torch.float32)\r\n2024-11-15T15:10:31.565415 - Total VRAM 48670 MB, total RAM 1031687 MB\r\n2024-11-15T15:10:31.565871 - pytorch version: 2.4.0+cu121\r\n2024-11-15T15:10:31.566108 - xformers version: 0.0.27.post2\r\n2024-11-15T15:10:31.566488 - Set vram state to: NORMAL_VRAM\r\n2024-11-15T15:10:31.566752 - Device: cuda:0 NVIDIA RTX A6000 : cudaMallocAsync\r\n2024-11-15T15:10:34.269461 - /workspace/ComfyUI/venv/lib/python3.11/site-packages/albumentations/__init__.py:13: UserWarning: A new version of Albumentations is available: 1.4.21 (you have 1.4.15). Upgrade using: pip install -U albumentations. To disable automatic update checks, set the environment variable NO_ALBUMENTATIONS_UPDATE to 1.\r\n check_for_updates()\r\n2024-11-15T15:10:36.030672 - generated new fontManager\r\n2024-11-15T15:10:38.280816 - /workspace/ComfyUI/venv/lib/python3.11/site-packages/timm/models/layers/__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers\r\n warnings.warn(f\"Importing from {__name__} is deprecated, please import via timm.layers\", FutureWarning)\r\n2024-11-15T15:10:38.304404 - Nvidia APEX normalization not installed, using PyTorch LayerNorm2024-11-15T15:10:38.304652 - \r\n2024-11-15T15:10:38.637033 - \u001b[36;20m[comfyui_controlnet_aux] | INFO -> Using ckpts path: /workspace/ComfyUI/custom_nodes/comfyui_controlnet_aux/ckpts\u001b[0m\r\n2024-11-15T15:10:38.637271 - \u001b[36;20m[comfyui_controlnet_aux] | INFO -> Using symlinks: False\u001b[0m\r\n2024-11-15T15:10:38.637476 - \u001b[36;20m[comfyui_controlnet_aux] | INFO -> Using ort providers: ['CUDAExecutionProvider', 'DirectMLExecutionProvider', 'OpenVINOExecutionProvider', 'ROCMExecutionProvider', 'CPUExecutionProvider', 'CoreMLExecutionProvider']\u001b[0m\r\n2024-11-15T15:10:38.928121 - DWPose: Onnxruntime with acceleration providers detected2024-11-15T15:10:38.928314 - \r\n2024-11-15T15:10:39.359309 - \r\n2024-11-15T15:10:39.359585 - \u001b[92m[rgthree-comfy] Loaded 42 exciting nodes. \ud83c\udf89\u001b[00m2024-11-15T15:10:39.359786 - \r\n2024-11-15T15:10:39.359987 - \r\n2024-11-15T15:10:45.309073 - \u001b[34mWAS Node Suite: \u001b[0mOpenCV Python FFMPEG support is enabled\u001b[0m2024-11-15T15:10:45.309329 - \r\n2024-11-15T15:10:45.309623 - \u001b[34mWAS Node Suite \u001b[93mWarning: \u001b[0m`ffmpeg_bin_path` is not set in `/workspace/ComfyUI/custom_nodes/was-node-suite-comfyui/was_suite_config.json` config file. 
Will attempt to use system ffmpeg binaries if available.\u001b[0m2024-11-15T15:10:45.309812 - \r\n2024-11-15T15:10:48.681237 - \u001b[34mWAS Node Suite: \u001b[0mFinished.\u001b[0m \u001b[32mLoaded\u001b[0m \u001b[0m218\u001b[0m \u001b[32mnodes successfully.\u001b[0m2024-11-15T15:10:48.681597 - \r\n2024-11-15T15:10:48.681918 - \r\n\t\u001b[3m\u001b[93m\"Every artist was first an amateur.\"\u001b[0m\u001b[3m - Ralph Waldo Emerson\u001b[0m\r\n2024-11-15T15:10:48.682130 - \r\n2024-11-15T15:10:50.489733 - [Crystools \u001b[0;32mINFO\u001b[0m] Crystools version: 1.21.0\r\n2024-11-15T15:10:50.667784 - [Crystools \u001b[0;32mINFO\u001b[0m] CPU: Intel(R) Xeon(R) Gold 6238R CPU @ 2.20GHz - Arch: x86_64 - OS: Linux 6.5.0-41-generic\r\n2024-11-15T15:10:50.668198 - [Crystools \u001b[0;32mINFO\u001b[0m] Pynvml (Nvidia) initialized.\r\n2024-11-15T15:10:50.668654 - [Crystools \u001b[0;32mINFO\u001b[0m] GPU/s:\r\n2024-11-15T15:10:50.668945 - [Crystools \u001b[0;32mINFO\u001b[0m] 0) NVIDIA RTX A6000\r\n2024-11-15T15:10:50.669186 - [Crystools \u001b[0;32mINFO\u001b[0m] NVIDIA Driver: 550.54.14\r\n2024-11-15T15:10:50.833539 - Creating new Ultralytics Settings v0.0.6 file \u2705 \r\nView Ultralytics Settings with 'yolo settings' or at '/root/.config/Ultralytics/settings.json'\r\nUpdate Settings with 'yolo settings key=value', i.e. 'yolo settings runs_dir=path/to/dir'. For help see https://docs.ultralytics.com/quickstart/#ultralytics-settings.\r\n2024-11-15T15:10:51.536848 - ### Loading: ComfyUI-Impact-Pack (V7.11.3)2024-11-15T15:10:51.537021 - \r\n2024-11-15T15:10:51.635434 - ### Loading: ComfyUI-Impact-Pack (Subpack: V0.8)2024-11-15T15:10:51.635602 - \r\n2024-11-15T15:10:51.818086 - [Impact Pack] Wildcards loading done.2024-11-15T15:10:51.818259 - \r\n2024-11-15T15:10:51.833380 - ### Loading: ComfyUI-Manager (V2.51.9)2024-11-15T15:10:51.833510 - \r\n2024-11-15T15:10:51.998868 - ### ComfyUI Revision: 2829 [3b9a6cf2] | Released on '2024-11-13'2024-11-15T15:10:51.999022 - \r\n2024-11-15T15:10:52.010212 - \r\nImport times for custom nodes:\r\n2024-11-15T15:10:52.010417 - 0.0 seconds: /workspace/ComfyUI/custom_nodes/websocket_image_save.py\r\n2024-11-15T15:10:52.010570 - 0.0 seconds: /workspace/ComfyUI/custom_nodes/cg-use-everywhere\r\n2024-11-15T15:10:52.010699 - 0.1 seconds: /workspace/ComfyUI/custom_nodes/comfy-image-saver\r\n2024-11-15T15:10:52.010839 - 0.1 seconds: /workspace/ComfyUI/custom_nodes/ComfyUI_UltimateSDUpscale\r\n2024-11-15T15:10:52.010980 - 0.1 seconds: /workspace/ComfyUI/custom_nodes/ComfyUI_essentials\r\n2024-11-15T15:10:52.011120 - 0.1 seconds: /workspace/ComfyUI/custom_nodes/ComfyUI-GGUF\r\n2024-11-15T15:10:52.011264 - 0.1 seconds: /workspace/ComfyUI/custom_nodes/ComfyUI-Custom-Scripts\r\n2024-11-15T15:10:52.011382 - 0.2 seconds: /workspace/ComfyUI/custom_nodes/rgthree-comfy\r\n2024-11-15T15:10:52.011516 - 0.2 seconds: /workspace/ComfyUI/custom_nodes/ComfyUI-Manager\r\n2024-11-15T15:10:52.011641 - 0.3 seconds: /workspace/ComfyUI/custom_nodes/ComfyUI-Impact-Pack\r\n2024-11-15T15:10:52.011771 - 0.3 seconds: /workspace/ComfyUI/custom_nodes/comfyui_controlnet_aux\r\n2024-11-15T15:10:52.011923 - 0.7 seconds: /workspace/ComfyUI/custom_nodes/ComfyUI-KJNodes\r\n2024-11-15T15:10:52.012058 - 0.9 seconds: /workspace/ComfyUI/custom_nodes/ComfyUI-AdvancedLivePortrait\r\n2024-11-15T15:10:52.012186 - 2.0 seconds: /workspace/ComfyUI/custom_nodes/ComfyUI-Crystools\r\n2024-11-15T15:10:52.012301 - 6.1 seconds: /workspace/ComfyUI/custom_nodes/ComfyUI-PuLID-Flux-Enhanced\r\n2024-11-15T15:10:52.012421 - 9.3 
seconds: /workspace/ComfyUI/custom_nodes/was-node-suite-comfyui\r\n2024-11-15T15:10:52.012541 - \r\n2024-11-15T15:10:52.030777 - Starting server\r\n\r\n2024-11-15T15:10:52.031183 - To see the GUI go to: http://0.0.0.0:3001\r\n2024-11-15T15:10:52.067762 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json2024-11-15T15:10:52.067937 - \r\n2024-11-15T15:10:52.076078 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json2024-11-15T15:10:52.076222 - \r\n2024-11-15T15:10:52.094295 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json2024-11-15T15:10:52.094425 - \r\n2024-11-15T15:10:52.133768 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json2024-11-15T15:10:52.133914 - \r\n2024-11-15T15:10:52.179770 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json2024-11-15T15:10:52.179921 - \r\n2024-11-15T15:12:56.132631 - FETCH DATA from: /workspace/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json2024-11-15T15:12:56.132991 - 2024-11-15T15:12:56.144376 - [DONE]2024-11-15T15:12:56.144542 - \r\n2024-11-15T15:13:14.903062 - got prompt\r\n2024-11-15T15:13:23.615984 - Using xformers attention in VAE\r\n2024-11-15T15:13:23.619955 - Using xformers attention in VAE\r\n2024-11-15T15:13:27.463924 - Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}2024-11-15T15:13:27.464130 - \r\n2024-11-15T15:13:27.658365 - find model:2024-11-15T15:13:27.658531 - 2024-11-15T15:13:27.658645 - /workspace/ComfyUI/models/insightface/models/antelopev2/1k3d68.onnx2024-11-15T15:13:27.658802 - 2024-11-15T15:13:27.658921 - landmark_3d_682024-11-15T15:13:27.659051 - 2024-11-15T15:13:27.659182 - ['None', 3, 192, 192]2024-11-15T15:13:27.659319 - 2024-11-15T15:13:27.659442 - 0.02024-11-15T15:13:27.659558 - 2024-11-15T15:13:27.659678 - 1.02024-11-15T15:13:27.659787 - \r\n2024-11-15T15:13:27.794506 - Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}2024-11-15T15:13:27.794680 - \r\n2024-11-15T15:13:27.806739 - find model:2024-11-15T15:13:27.806880 - 2024-11-15T15:13:27.807020 - /workspace/ComfyUI/models/insightface/models/antelopev2/2d106det.onnx2024-11-15T15:13:27.807171 - 2024-11-15T15:13:27.807288 - landmark_2d_1062024-11-15T15:13:27.807407 - 2024-11-15T15:13:27.807522 - ['None', 3, 192, 192]2024-11-15T15:13:27.807629 - 2024-11-15T15:13:27.807732 - 0.02024-11-15T15:13:27.807836 - 2024-11-15T15:13:27.807968 - 1.02024-11-15T15:13:27.808093 - \r\n2024-11-15T15:13:27.878881 - Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}2024-11-15T15:13:27.879064 - \r\n2024-11-15T15:13:27.883868 - find model:2024-11-15T15:13:27.884070 - 2024-11-15T15:13:27.884235 - /workspace/ComfyUI/models/insightface/models/antelopev2/genderage.onnx2024-11-15T15:13:27.884406 - 2024-11-15T15:13:27.884569 - genderage2024-11-15T15:13:27.884718 - 2024-11-15T15:13:27.884867 - ['None', 3, 96, 96]2024-11-15T15:13:27.885012 - 2024-11-15T15:13:27.885142 - 0.02024-11-15T15:13:27.885272 - 2024-11-15T15:13:27.885418 - 1.02024-11-15T15:13:27.885546 - \r\n2024-11-15T15:13:30.280279 - Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}2024-11-15T15:13:30.280424 - 
\r\n2024-11-15T15:13:30.604582 - find model:2024-11-15T15:13:30.604745 - 2024-11-15T15:13:30.604894 - /workspace/ComfyUI/models/insightface/models/antelopev2/glintr100.onnx2024-11-15T15:13:30.605035 - 2024-11-15T15:13:30.605176 - recognition2024-11-15T15:13:30.605303 - 2024-11-15T15:13:30.605458 - ['None', 3, 112, 112]2024-11-15T15:13:30.605586 - 2024-11-15T15:13:30.605717 - 127.52024-11-15T15:13:30.605842 - 2024-11-15T15:13:30.605938 - 127.52024-11-15T15:13:30.606042 - \r\n2024-11-15T15:13:30.805291 - Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}2024-11-15T15:13:30.805496 - \r\n2024-11-15T15:13:30.805866 - find model:2024-11-15T15:13:30.806346 - 2024-11-15T15:13:30.806497 - /workspace/ComfyUI/models/insightface/models/antelopev2/scrfd_10g_bnkps.onnx2024-11-15T15:13:30.806641 - 2024-11-15T15:13:30.806772 - detection2024-11-15T15:13:30.806890 - 2024-11-15T15:13:30.807067 - [1, 3, '?', '?']2024-11-15T15:13:30.807207 - 2024-11-15T15:13:30.807728 - 127.52024-11-15T15:13:30.808817 - 2024-11-15T15:13:30.808966 - 128.02024-11-15T15:13:30.809095 - \r\n2024-11-15T15:13:30.809248 - set det-size:2024-11-15T15:13:30.809382 - 2024-11-15T15:13:30.809514 - (640, 640)2024-11-15T15:13:30.809634 - \r\n2024-11-15T15:13:30.810521 - Loaded EVA02-CLIP-L-14-336 model config.\r\n2024-11-15T15:13:30.929776 - Shape of rope freq: torch.Size([576, 64])\r\n2024-11-15T15:13:46.076137 - Loading pretrained EVA02-CLIP-L-14-336 weights (eva_clip).\r\n2024-11-15T15:13:48.288170 - incompatible_keys.missing_keys: ['visual.rope.freqs_cos', 'visual.rope.freqs_sin', 'visual.blocks.0.attn.rope.freqs_cos', 'visual.blocks.0.attn.rope.freqs_sin', 'visual.blocks.1.attn.rope.freqs_cos', 'visual.blocks.1.attn.rope.freqs_sin', 'visual.blocks.2.attn.rope.freqs_cos', 'visual.blocks.2.attn.rope.freqs_sin', 'visual.blocks.3.attn.rope.freqs_cos', 'visual.blocks.3.attn.rope.freqs_sin', 'visual.blocks.4.attn.rope.freqs_cos', 'visual.blocks.4.attn.rope.freqs_sin', 'visual.blocks.5.attn.rope.freqs_cos', 'visual.blocks.5.attn.rope.freqs_sin', 'visual.blocks.6.attn.rope.freqs_cos', 'visual.blocks.6.attn.rope.freqs_sin', 'visual.blocks.7.attn.rope.freqs_cos', 'visual.blocks.7.attn.rope.freqs_sin', 'visual.blocks.8.attn.rope.freqs_cos', 'visual.blocks.8.attn.rope.freqs_sin', 'visual.blocks.9.attn.rope.freqs_cos', 'visual.blocks.9.attn.rope.freqs_sin', 'visual.blocks.10.attn.rope.freqs_cos', 'visual.blocks.10.attn.rope.freqs_sin', 'visual.blocks.11.attn.rope.freqs_cos', 'visual.blocks.11.attn.rope.freqs_sin', 'visual.blocks.12.attn.rope.freqs_cos', 'visual.blocks.12.attn.rope.freqs_sin', 'visual.blocks.13.attn.rope.freqs_cos', 'visual.blocks.13.attn.rope.freqs_sin', 'visual.blocks.14.attn.rope.freqs_cos', 'visual.blocks.14.attn.rope.freqs_sin', 'visual.blocks.15.attn.rope.freqs_cos', 'visual.blocks.15.attn.rope.freqs_sin', 'visual.blocks.16.attn.rope.freqs_cos', 'visual.blocks.16.attn.rope.freqs_sin', 'visual.blocks.17.attn.rope.freqs_cos', 'visual.blocks.17.attn.rope.freqs_sin', 'visual.blocks.18.attn.rope.freqs_cos', 'visual.blocks.18.attn.rope.freqs_sin', 'visual.blocks.19.attn.rope.freqs_cos', 'visual.blocks.19.attn.rope.freqs_sin', 'visual.blocks.20.attn.rope.freqs_cos', 'visual.blocks.20.attn.rope.freqs_sin', 'visual.blocks.21.attn.rope.freqs_cos', 'visual.blocks.21.attn.rope.freqs_sin', 'visual.blocks.22.attn.rope.freqs_cos', 'visual.blocks.22.attn.rope.freqs_sin', 'visual.blocks.23.attn.rope.freqs_cos', 'visual.blocks.23.attn.rope.freqs_sin']\r\n2024-11-15T15:13:51.867986 - Loading PuLID-Flux 
model.\r\n2024-11-15T15:14:01.781958 - model weight dtype torch.bfloat16, manual cast: None\r\n2024-11-15T15:14:01.783274 - model_type FLUX\r\n2024-11-15T15:14:57.687329 - /workspace/ComfyUI/venv/lib/python3.11/site-packages/insightface/utils/transform.py:68: FutureWarning: `rcond` parameter will change to the default of machine precision times ``max(M, N)`` where M and N are the input matrix dimensions.\r\nTo use the future default and silence this warning we advise to pass `rcond=None`, to keep using the old, explicitly pass `rcond=-1`.\r\n P = np.linalg.lstsq(X_homo, Y)[0].T # Affine matrix. 3 x 4\r\n2024-11-15T15:15:17.860517 - Requested to load FluxClipModel_\r\n2024-11-15T15:15:17.860863 - Loading 1 new model\r\n2024-11-15T15:16:01.924433 - loaded completely 0.0 9320.35888671875 True\r\n2024-11-15T15:16:02.546263 - Requested to load ControlNetFlux\r\n2024-11-15T15:16:02.546512 - Requested to load Flux\r\n2024-11-15T15:16:02.546655 - Loading 2 new models\r\n2024-11-15T15:16:04.190233 - loaded completely 0.0 6297.97265625 True\r\n2024-11-15T15:16:11.191997 - loaded completely 0.0 23500.488403320312 True\r\n2024-11-15T15:16:11.285167 - \r\n 0%| | 0/25 [00:00<?, ?it/s]2024-11-15T15:16:11.375186 - Requested to load AutoencodingEngine\r\n2024-11-15T15:16:11.375480 - Loading 1 new model\r\n2024-11-15T15:16:11.542628 - loaded completely 0.0 159.87335777282715 True\r\n2024-11-15T15:16:12.035048 - \r\n 0%| | 0/25 [00:00<?, ?it/s]2024-11-15T15:16:12.035194 - \r\n2024-11-15T15:16:12.038298 - !!! Exception during processing !!! Boolean value of Tensor with more than one value is ambiguous\r\n2024-11-15T15:16:12.058483 - Traceback (most recent call last):\r\n File \"/workspace/ComfyUI/execution.py\", line 323, in execute\r\n output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/workspace/ComfyUI/execution.py\", line 198, in get_output_data\r\n return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/workspace/ComfyUI/execution.py\", line 169, in _map_node_over_list\r\n process_inputs(input_dict, i)\r\n File \"/workspace/ComfyUI/execution.py\", line 158, in process_inputs\r\n results.append(getattr(obj, func)(**inputs))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/workspace/ComfyUI/comfy_extras/nodes_custom_sampler.py\", line 633, in sample\r\n samples = guider.sample(noise.generate_noise(latent), latent_image, sampler, sigmas, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise.seed)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/workspace/ComfyUI/comfy/samplers.py\", line 740, in sample\r\n output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/workspace/ComfyUI/comfy/samplers.py\", line 719, in inner_sample\r\n samples = sampler.sample(self, sigmas, extra_args, callback, noise, 
latent_image, denoise_mask, disable_pbar)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/workspace/ComfyUI/comfy/samplers.py\", line 624, in sample\r\n samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/lib/python3.11/dist-packages/torch/utils/_contextlib.py\", line 116, in decorate_context\r\n return func(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^\r\n File \"/workspace/ComfyUI/comfy/k_diffusion/sampling.py\", line 1058, in sample_deis\r\n denoised = model(x_cur, t_cur * s_in, **extra_args)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/workspace/ComfyUI/comfy/samplers.py\", line 299, in __call__\r\n out = self.inner_model(x, sigma, model_options=model_options, seed=seed)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/workspace/ComfyUI/comfy/samplers.py\", line 706, in __call__\r\n return self.predict_noise(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/workspace/ComfyUI/comfy/samplers.py\", line 709, in predict_noise\r\n return sampling_function(self.inner_model, x, timestep, self.conds.get(\"negative\", None), self.conds.get(\"positive\", None), self.cfg, model_options=model_options, seed=seed)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/workspace/ComfyUI/comfy/samplers.py\", line 279, in sampling_function\r\n out = calc_cond_batch(model, conds, x, timestep, model_options)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/workspace/ComfyUI/comfy/samplers.py\", line 228, in calc_cond_batch\r\n output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/workspace/ComfyUI/comfy/model_base.py\", line 144, in apply_model\r\n model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/lib/python3.11/dist-packages/torch/nn/modules/module.py\", line 1553, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/lib/python3.11/dist-packages/torch/nn/modules/module.py\", line 1562, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/workspace/ComfyUI/comfy/ldm/flux/model.py\", line 181, in forward\r\n out = self.forward_orig(img, img_ids, context, txt_ids, timestep, y, guidance, control)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/workspace/ComfyUI/custom_nodes/ComfyUI-PuLID-Flux-Enhanced/pulidflux.py\", line 113, in forward_orig\r\n if node_data['sigma_start'] >= timesteps >= node_data['sigma_end']:\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nRuntimeError: Boolean value of Tensor with more than one value is ambiguous\r\n\r\n2024-11-15T15:16:12.062181 - Prompt executed in 170.11 seconds\r\n\r\n```\r\n## Attached Workflow\r\nPlease make sure that workflow does not contain 
any sensitive information such as API keys or passwords.\r\n```\r\nWorkflow too large. Please manually upload the workflow from local file system.\r\n```\r\n\r\n## Additional Context\r\n(Please add any additional context or steps to reproduce the error here)\n```\n\n\n### Other\n\n_No response_", "pr_html_url": "https://github.com/comfyanonymous/ComfyUI/pull/27", "file_loc": {"base_commit": "3cd7d84b53724a97c1436f70b6da6975e3d93484", "files": [{"path": "webshit/index.html", "status": "modified", "Loc": {"(None, None, None)": {"mod": [274, 275, 276, 277, 278, 279, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 299, 301, 302, 303, 304, 305, 307, 308, 311, 312, 313, 315, 316, 318, 319, 321, 322, 325]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["webshit/index.html"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "comfyanonymous", "repo_name": "ComfyUI", "base_commit": "f7695b5f9e007136da72bd3e79d601e2814a3890", "iss_has_pr": 1, "iss_html_url": "https://github.com/comfyanonymous/ComfyUI/issues/5890", "iss_label": "Feature", "title": "Support wildcard type \"*\" in ComfyUI core", "body": "### Feature Idea\n\nThere are many custom nodes that currently hack string comparison to achieve a wildcard type (\"*\"). This implementation is very hacky and hard to debug. We should properly support wildcard types in ComfyUI core.\n\n### Existing Solutions\n\n- https://github.com/pythongosssss/ComfyUI-Custom-Scripts/blob/d6657cc1f04539dbeea38d7bf6d73bc025004fa4/py/repeater.py\r\n- https://github.com/FredBill1/comfyui-fb-utils/blob/main/core/types.py\n\n### Other\n\n_No response_", "pr_html_url": "https://github.com/comfyanonymous/ComfyUI/pull/5900", "file_loc": {"base_commit": "f7695b5f9e007136da72bd3e79d601e2814a3890", "files": [{"path": "execution.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [18]}, "(None, 'validate_inputs', 531)": {"mod": [592, 593]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["execution.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "comfyanonymous", "repo_name": "ComfyUI", "base_commit": "f81dbe26e2e363c28ad043db67b59c11bb33f446", "iss_has_pr": 1, "iss_html_url": "https://github.com/comfyanonymous/ComfyUI/issues/2671", "iss_label": "", "title": "Feature Request: Support Differential Diffusion for inpainting.", "body": "This is a nice alternative to standard inpainting: it allows the mask to be a gradient, for control of strength on top of denoising.\r\n\r\n\r\nhttps://github.com/exx8/differential-diffusion", "pr_html_url": "https://github.com/comfyanonymous/ComfyUI/pull/2876", "file_loc": {"base_commit": "f81dbe26e2e363c28ad043db67b59c11bb33f446", "files": [{"path": "comfy/samplers.py", "status": "modified", "Loc": {"('KSamplerX0Inpaint', 'forward', 277)": {"add": [278]}}}, {"path": "nodes.py", "status": "modified", "Loc": {"(None, 'init_custom_nodes', 1936)": {"add": [1963]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["nodes.py", "comfy/samplers.py"], "doc": [], "test": [], "config": [], 
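For context on the `RuntimeError: Boolean value of Tensor with more than one value is ambiguous` in the PuLID-Flux traceback above: the chained comparison `node_data['sigma_start'] >= timesteps >= node_data['sigma_end']` in `pulidflux.py` implicitly calls `Tensor.__bool__` on a multi-element tensor, which PyTorch refuses. A minimal defensive rewrite is sketched below; the helper name and the choice of reducing to the first batch element are illustrative assumptions, not necessarily the fix adopted upstream.

```python
import torch

def pulid_is_active(node_data: dict, timesteps: torch.Tensor) -> bool:
    # A chained comparison such as `a >= t >= b` calls Tensor.__bool__ on the
    # intermediate result, which raises for tensors with more than one element.
    # Reduce `timesteps` to a single scalar first; taking the first batch
    # element is one policy, .min()/.max()/.all() are alternatives with
    # slightly different semantics.
    t = timesteps.flatten()[0].item() if torch.is_tensor(timesteps) else float(timesteps)
    return node_data["sigma_start"] >= t >= node_data["sigma_end"]
```

Once both comparisons see a plain Python float, the chained comparison evaluates unambiguously regardless of batch size.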
"asset": []}}, {"organization": "ageitgey", "repo_name": "face_recognition", "base_commit": "fe421d4acd76e8a19098e942b7bd9c3bbef6ebc4", "iss_has_pr": 1, "iss_html_url": "https://github.com/ageitgey/face_recognition/issues/242", "iss_label": "", "title": " imread() got an unexpected keyword argument 'mode'", "body": "* face_recognition version: 1.0.0\r\n* Python version: 2.7\r\n* Operating System: mac EI Capitan 10.11.6\r\n\r\n### Description\r\nafter install the face_recognition, I tried to run examples/facerec_from_webcam_faster.py, but it show error as following:\r\n\r\nTraceback (most recent call last):\r\n File \"/Users/johnwang/workspace/PycharmProjects/Demo1/Face_Recognizer.py\", line 18, in <module>\r\n obama_image = face_recognition.load_image_file(\"obama.jpg\")\r\n File \"/Library/Python/2.7/site-packages/face_recognition/api.py\", line 81, in load_image_file\r\n return scipy.misc.imread(file, mode=mode)\r\nTypeError: imread() got an unexpected keyword argument 'mode'\r\n\r\n\r\nand I checked the scipy version and tried to upgrade, and scipy version which I installed is already 1.0.0\r\njohns-MacBook-Pro:kaggle johnwang$ pip install --upgrade scipy\r\nRequirement already up-to-date: scipy in /Library/Python/2.7/site-packages\r\nRequirement already up-to-date: numpy>=1.8.2 in /Library/Python/2.7/site-packages (from scipy)\r\n\r\ncould you help on this problem? thanks in advance.\r\n", "pr_html_url": "https://github.com/ageitgey/face_recognition/pull/383", "file_loc": {"base_commit": "fe421d4acd76e8a19098e942b7bd9c3bbef6ebc4", "files": [{"path": "docs/conf.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [25]}}}, {"path": "face_recognition/__init__.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [5]}}}, {"path": "face_recognition/api.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [3]}, "(None, 'load_image_file', 76)": {"mod": [84]}}}, {"path": "face_recognition/cli.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [11], "mod": [6, 7]}, "(None, 'test_image', 42)": {"mod": [46, 47, 48, 49, 50]}}}, {"path": "setup.cfg", "status": "modified", "Loc": {"(None, None, None)": {"mod": [2]}}}, {"path": "setup.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [17, 18, 28]}}}, {"path": "tests/test_face_recognition.py", "status": "modified", "Loc": {"('Test_face_recognition', None, 21)": {"add": [248]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["face_recognition/api.py", "face_recognition/__init__.py", "docs/conf.py", "setup.py", "setup.cfg", "face_recognition/cli.py"], "doc": [], "test": ["tests/test_face_recognition.py"], "config": [], "asset": []}}, {"organization": "ageitgey", "repo_name": "face_recognition", "base_commit": "8322e7c00b7da9cbde8216c01d42330f03c5dcb9", "iss_has_pr": 1, "iss_html_url": "https://github.com/ageitgey/face_recognition/issues/59", "iss_label": "", "title": "PIL/Image.py - ValueError: height and width must be > 0", "body": "* face_recognition version: latest\r\n* Python version: import dlib works for Python 2 and 3\r\n* Operating System: Ubuntu 16.04.2 LTS\r\n\r\n### Description\r\n\r\nknown_people directory has three images of each of four different people\r\npic1.jpg has 10 unidentified people in it, 2 of which are in known_people\r\npic2.jpg has 4 unidentified people in it, 1 of which is in 
known_people\r\n\r\n### What I Did\r\n\r\n```\r\nPaste the command(s) you ran and the output.\r\nIf there was a crash, please include the traceback here.\r\n```\r\ngpu@gpu:~$ face_recognition known_people pic1.jpg \r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/face_recognition\", line 11, in <module>\r\n sys.exit(main())\r\n File \"/usr/local/lib/python2.7/dist-packages/click/core.py\", line 722, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/usr/local/lib/python2.7/dist-packages/click/core.py\", line 697, in main\r\n rv = self.invoke(ctx)\r\n File \"/usr/local/lib/python2.7/dist-packages/click/core.py\", line 895, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/usr/local/lib/python2.7/dist-packages/click/core.py\", line 535, in invoke\r\n return callback(*args, **kwargs)\r\n File \"/usr/local/lib/python2.7/dist-packages/face_recognition/cli.py\", line 66, in main\r\n test_image(image_to_check, known_names, known_face_encodings)\r\n File \"/usr/local/lib/python2.7/dist-packages/face_recognition/cli.py\", line 40, in test_image\r\n unknown_image = scipy.misc.imresize(unknown_image, scale_factor)\r\n File \"/usr/local/lib/python2.7/dist-packages/scipy/misc/pilutil.py\", line 490, in imresize\r\n imnew = im.resize(size, resample=func[interp])\r\n File \"/usr/local/lib/python2.7/dist-packages/PIL/Image.py\", line 1645, in resize\r\n return self._new(self.im.resize(size, resample))\r\nValueError: height and width must be > 0\r\n\r\ngpu@gpu:~$ face_recognition known_people pic2.jpg \r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/face_recognition\", line 11, in <module>\r\n sys.exit(main())\r\n File \"/usr/local/lib/python2.7/dist-packages/click/core.py\", line 722, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/usr/local/lib/python2.7/dist-packages/click/core.py\", line 697, in main\r\n rv = self.invoke(ctx)\r\n File \"/usr/local/lib/python2.7/dist-packages/click/core.py\", line 895, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/usr/local/lib/python2.7/dist-packages/click/core.py\", line 535, in invoke\r\n return callback(*args, **kwargs)\r\n File \"/usr/local/lib/python2.7/dist-packages/face_recognition/cli.py\", line 66, in main\r\n test_image(image_to_check, known_names, known_face_encodings)\r\n File \"/usr/local/lib/python2.7/dist-packages/face_recognition/cli.py\", line 40, in test_image\r\n unknown_image = scipy.misc.imresize(unknown_image, scale_factor)\r\n File \"/usr/local/lib/python2.7/dist-packages/scipy/misc/pilutil.py\", line 490, in imresize\r\n imnew = im.resize(size, resample=func[interp])\r\n File \"/usr/local/lib/python2.7/dist-packages/PIL/Image.py\", line 1645, in resize\r\n return self._new(self.im.resize(size, resample))\r\nValueError: height and width must be > 0\r\n\r\n", "pr_html_url": "https://github.com/ageitgey/face_recognition/pull/65", "file_loc": {"base_commit": "8322e7c00b7da9cbde8216c01d42330f03c5dcb9", "files": [{"path": "face_recognition/cli.py", "status": "modified", "Loc": {"(None, 'test_image', 32)": {"mod": [37]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["face_recognition/cli.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "PaddlePaddle", "repo_name": "PaddleOCR", "base_commit": "2062b5097ce6800a6dc23fcc1648e128a27d6353", "iss_has_pr": 1, 
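On the two face_recognition reports above: the `imread() got an unexpected keyword argument 'mode'` error is explained by SciPy deprecating (in 1.0) and later removing its PIL-backed image helpers, so `scipy.misc.imread(file, mode=mode)` stops working even on an up-to-date SciPy. A sketch of the Pillow-based replacement follows; it mirrors the direction of the fix merged via ageitgey/face_recognition#383 but is not guaranteed to be byte-for-byte identical to it:

```python
import numpy as np
import PIL.Image

def load_image_file(file, mode="RGB"):
    # scipy.misc.imread (and its `mode` keyword) was deprecated in SciPy 1.0
    # and later removed, so decode the image with Pillow directly instead.
    im = PIL.Image.open(file)
    if mode:
        im = im.convert(mode)  # e.g. "RGB" or "L", mirroring imread's mode arg
    return np.array(im)
```

The second traceback (`ValueError: height and width must be > 0` from `scipy.misc.imresize`) points the same way: the computed `scale_factor` appears to collapse the image to an empty target size, so clamping the resized dimensions to at least one pixel, or resizing with Pillow directly, is the natural guard.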
"iss_html_url": "https://github.com/PaddlePaddle/PaddleOCR/issues/10223", "iss_label": "good first issue\nstatus/close", "title": "\ud83c\udfc5\ufe0f\u98de\u6868\u5957\u4ef6\u5feb\u4e50\u5f00\u6e90\u5e38\u89c4\u8d5b", "body": "## \u6d3b\u52a8\u8bf4\u660e\r\n\u98de\u6868\u5957\u4ef6\u5feb\u4e50\u5f00\u6e90\u5e38\u89c4\u8d5b\u6d3b\u52a8\u65e8\u5728\u8ba9\u4f17\u591a\u5f00\u53d1\u8005\u80fd\u53c2\u4e0e\u5230\u5404\u5927CV/NLP\u5957\u4ef6\u7684\u5efa\u8bbe\u5de5\u4f5c\u4e2d\uff08\u4e5f\u662f\u6211\u4eec\u539f\u6709Issue\u653b\u5173\u6d3b\u52a8\u7684\u5347\u7ea7\u7248\u672c\uff09\uff0c\u5305\u62ec\u4e0d\u9650\u4e8e\u65b0\u589e\u57fa\u7840\u529f\u80fd\u3001\u8bba\u6587\u590d\u73b0\u3001Issue\u56de\u590d\u7b49\uff0c\u4efb\u4f55\u6709\u5229\u4e8e\u793e\u533a\u610f\u89c1\u6d41\u52a8\u548c\u95ee\u9898\u89e3\u51b3\u7684\u884c\u4e3a\u90fd\u70ed\u5207\u5e0c\u671b\u5927\u5bb6\u7684\u53c2\u4e0e\u3002\u8ba9\u6211\u4eec\u5171\u540c\u6210\u957f\u4e3a\u6210\u4e3a\u98de\u6868CV/NLP\u5957\u4ef6\u7684\u91cd\u8981contributors\u3002\ud83c\udf89\ud83c\udf89\r\n\r\n\u5728\u5957\u4ef6\u5feb\u4e50\u5f00\u6e90\u5e38\u89c4\u8d5b\u6d3b\u52a8\u4e2d\uff0c\u6211\u4eec\u4f1a\u7ed3\u5408\u6280\u672f\u7814\u8ba8\u548c\u4efb\u52a1\u53d1\u5e03\u4e24\u79cd\u6d3b\u52a8\u5f62\u5f0f\u4e92\u76f8\u4fc3\u8fdb\u3002\u4efb\u4f55\u613f\u610f\u53c2\u4e0e\u793e\u533a\u8d21\u732e\uff08\u65b0\u589e\u4ee3\u7801\u3001Issue\u89e3\u7b54\u7b49\uff09\uff0c\u5bf9\u589e\u957f\u5728\u5206\u5272\u3001OCR\u65b9\u5411\uff08\u540e\u7eed\u6211\u4eec\u4f1a\u6301\u7eed\u5f00\u653e\u5305\u62ec\u56fe\u50cf\u68c0\u6d4b\u3001\u90e8\u7f72\u3001\u56fe\u50cf\u5206\u7c7b\u30013D\u3001\u81ea\u7136\u8bed\u8a00\u5904\u7406\u7b49\u65b9\u5411\uff09\u77e5\u8bc6\u611f\u5174\u8da3\u7684\u5f00\u53d1\u8005\u90fd\u53ef\u4ee5\u52a0\u5165\ud83d\ude0a\u3002\u5728\u8fd9\u4e2a\u8fc7\u7a0b\u4e2d\uff0c**\u8ba9\u5927\u5bb6\u4fdd\u6301\u5bf9\u5404\u5927\u89c6\u89c9\u65b9\u5411\u77e5\u8bc6\u7684\u6301\u7eed\u79ef\u7d2f\u662f\u6211\u4eec\u7684\u4e0d\u53d8\u7684\u4e3b\u65e8**\ud83d\udd25\u3002\r\n\r\n\r\n## \u6280\u672f\u7814\u8ba8\u4f1a\r\n\u4e3a\u4e86\u5e2e\u52a9\u5927\u5bb6\u5faa\u5e8f\u6e10\u8fdb\u5730\u4e86\u89e3\u3001\u5efa\u8bae\u3001\u5f00\u53d1\u98de\u6868\u6a21\u578b\u65b9\u5411\u7684\u5f00\u6e90\u9879\u76ee\uff0c\u6211\u4eec\u642d\u5efa\u4e86\u6280\u672f\u7814\u8ba8\u4f1a\uff0c\u53c2\u4e0e\u6d3b\u52a8\u7684\u5f00\u53d1\u8005\u6bcf\u5468\u53ef\u4ee5\u53c2\u4e0e\u5230\u98de\u6868RD\u5206\u4eab\u7684\u6280\u672f\u7814\u8ba8\u4f1a\u4e2d\uff0c\u7814\u8ba8\u5185\u5bb9\u5305\u62ec\u4e0d\u9650\u4e8e\uff1a\r\n1. \u5957\u4ef6\u4ee3\u7801\u7ed3\u6784\u5256\u6790\uff0cread the code\u3002\r\n2. OCR\u3001Segmentation\u65b9\u5411\u7b97\u6cd5\u7efc\u8ff0\u5206\u4eab\u3002\r\n3. OCR\u3001Segmentation\u65b9\u5411\u524d\u6cbf\u8bba\u6587\u89e3\u8bfb\u3002\r\n4. \u8ba8\u8bba\u65b0\u589e\u9700\u6c42\u7684\u91cd\u8981\u7a0b\u5ea6\uff0c\u8ba9\u4f60\u7684\u53d1\u8a00\u63a8\u52a8\u98de\u6868\u5957\u4ef6\u7684\u53d1\u5c55\u3002\r\n\r\n\r\n## \u6d3b\u52a8\u4ef7\u503c\r\n\u7814\u8ba8\u4f1a\u5b66\u4e60\u7684\u77e5\u8bc6\u53ef\u4ee5\u5e2e\u52a9\u5927\u5bb6\u53c2\u4e0e\u6211\u4eec\u7684\u5404\u9879\u4ee3\u7801\u548cIssue\u89e3\u7b54\u4efb\u52a1\uff0c\u4efb\u52a1\u5b8c\u6210\u6392\u884c\u699c\u5c06\u5728\u4e0b\u65b9\u6bcf\u5929\u66f4\u65b0\uff0c\u671f\u5f85\u5927\u5bb6\u7684\u53c2\u4e0e\u3002\u5b8c\u6210\u4efb\u52a1\u7684\u8d21\u732e\u8005\u53ef\u4ee5\u83b7\u5f97\uff1a\r\n1. 
\u6280\u672f\u63d0\u5347\uff1a\u5b66\u4e60\u884c\u4e1a\u5185\u7684\u65b0\u52a8\u6001\u65b0\u65b9\u5411\uff0c\u8ba9\u81ea\u5df1\u7684\u6280\u672f\u5b9e\u529b\u5f97\u4ee5\u63d0\u5347\uff1b\r\n2. \u8363\u8a89\u5956\u52b1\uff1a\r\n a. \u6210\u4e3a\u6781\u5177\u5f71\u54cd\u529b\u7684\u89c6\u89c9\u5957\u4ef6\u7684\u91cd\u8981contributor\u3002\r\n b. \u83b7\u5f97\u5f00\u6e90\u8d21\u732e\u8bc1\u4e66\u3001\u793e\u533a\u66dd\u5149\u5ea6\u3001\u5956\u72b6\u5fbd\u7ae0\u7b49\uff1b\r\n c. \u5feb\u4e50\u5f00\u6e90\u5171\u4eab\u5956\u54c1\uff0c\u5305\u62ecPS5\uff0cairpods\u7b49\u3002\r\n3. \u4f18\u79c0\u7684\u5f00\u6e90\u8d21\u732e\u8005\u53ef\u4ee5\u83b7\u5f97\u5b9e\u4e60\u5185\u63a8\u673a\u4f1a\uff0c\u6210\u4e3a\u98de\u6868\u6a21\u578b\u5957\u4ef6\u65b9\u5411\u5b9e\u4e60\u751f\uff1b\r\n\r\n\r\n## \u4efb\u52a1\u653b\u514b\u6392\u884c\u699c\uff08Issue\u89e3\u7b54\u3001\u4ee3\u7801\u5f00\u53d1\uff09\r\n| \u5f00\u53d1\u8005github id | issue\u89e3\u7b54\u6570\u91cf | \u89e3\u7b54issue \u4ea7\u751f\u7684PR\u6570\u91cf \uff08\ud83c\udf1f\uff09| \u5b8c\u6210\u547d\u9898\u4efb\u52a1\u7684\u6570\u91cf \uff08:dart:\uff09| \r\n| --- | --- | --- | --- |\r\n| \u51b2\u5440\u5440\u5440-[livingbody](https://github.com/livingbody) | 41 | \ud83c\udf1f | :dart: :dart:| \r\n| ToddBear | 11 | | :dart: :dart: |\r\n| \u5f3a\u76db\u5927\u961f-[MINGtoMING](https://github.com/MINGtoMING) | | | :dart: :dart: |\r\n| \u66f2\u9879\u5411\u5929\u6b4c-[Asthestarsfalll](https://github.com/Asthestarsfalll)| 69 | \ud83c\udf1f \ud83c\udf1f \ud83c\udf1f \ud83c\udf1f \ud83c\udf1f \ud83c\udf1f | :dart: | \r\n| \u5fb7\u5e03\u7f57\u610f\u6ce2-[marshall-dteach](https://github.com/marshall-dteach)| 3 | | :dart: | \r\n| flytocc | | | :dart: |\r\n| [Liyulingyue](https://github.com/Liyulingyue) | 2 | \ud83c\udf1f \ud83c\udf1f | \r\n| \u51b2\u950b\u5c0f\u961f-[Gmgge](https://github.com/Gmgge)| 7 | \ud83c\udf1f | | |\r\n| \u98ce\u6e05\u626c-[WilliamQf-AI](https://github.com/WilliamQf-AI) | 6 | \ud83c\udf1f | | \r\n| GreatX-[GreatV](https://github.com/GreatV)| 4 | \ud83c\udf1f | |\r\n| [kerneltravel](https://github.com/kerneltravel) | 1 | \ud83c\udf1f | |\r\n| [xu-peng-7](https://github.com/xu-peng-7) | 1 | \ud83c\udf1f | |\r\n| \u660e\u6708\u5fc3-[raoyutian](https://github.com/raoyutian)| 8 | | | \r\n| [bltcn]([bltcn](https://github.com/bltcn)) | 1 | | | \r\n\r\n\r\n\r\n## \u4efb\u52a1\u5217\u8868\r\n\r\n#### 1. \u547d\u9898\u4efb\u52a1\uff08\u6301\u7eed\u66f4\u65b0\u4e2d\uff09\uff1a\r\n\r\n\u547d\u9898\u4efb\u52a1\u662f\u6211\u4eec\u7ecf\u8fc7\u5728 https://github.com/PaddlePaddle/PaddleOCR/issues/10334 \u8fdb\u884c\u9700\u6c42\u5f81\u96c6\u3001\u5728\u6280\u672f\u7814\u8ba8\u4f1a\u4e0a\u7ecf\u8fc7\u5927\u5bb6\u8ba8\u8bba\u786e\u5b9a\u91cd\u8981\u7684\u9700\u6c42\u3002\u6b22\u8fce\u5bf9\u8fd9\u4e9b\u9700\u6c42\u4e5f\u611f\u5174\u8da3\u7684\u5f00\u53d1\u8005\u53c2\u4e0e\u5230\u8fd9\u4e9b\u4efb\u52a1\u7684\u5f00\u53d1\u270c\ufe0f\u270c\ufe0f\u3002\u5728\u5f00\u53d1\u8fc7\u7a0b\u4e2d\uff0c\u4f60\u80fd\u8fdb\u884c\u5305\u62ec\u4efb\u52a1\u5206\u89e3\u3001\u4ee3\u7801\u64b0\u5199\u7b49\u5de5\u4f5c\uff0c\u8fd8\u4f1a\u6709\u98de\u6868\u7684\u7814\u53d1\u5168\u7a0b\u548c\u4f60\u4e00\u8d77\u89e3\u51b3\u53ef\u80fd\u9047\u5230\u7684\u95ee\u9898\u3002\u8fd8\u7b49\u4ec0\u4e48\uff0c\u5feb\u6765\u53c2\u4e0e\u5427\u3002\ud83c\udf89\ud83c\udf89\r\n\r\n* \u505a\u4efb\u52a1\u6d41\u7a0b\uff1a\r\n 1. \u5728\u672c\u6761Issue\u9875\u9762\u8fdb\u884c\u62a5\u540d\u3002\r\n 2. 
\u52a0\u4e00\u4e0b\u98de\u6868\u5957\u4ef6\u7814\u53d1\u7684\u5fae\u4fe1\uff1atransy-k\uff0c\u52a0\u5165\u5230CV\u5957\u4ef6\u5efa\u8bbe\u603b\u7fa4\uff0c\u5728\u5b8c\u6210\u4efb\u52a1\u4e2d\u6709\u4efb\u4f55\u95ee\u9898\u90fd\u53ef\u4ee5\u8fdb\u884c\u53cd\u9988\uff0c\u4f1a\u6709\u6a21\u578b\u5957\u4ef6\u65b9\u5411\u7684RD\u8fdb\u884c\u89e3\u7b54\u3002\r\n 3. \u5b8c\u6210\u4efb\u52a1\u540e\uff0c\u5728\u4efb\u52a1\u5bf9\u5e94\u8ddf\u8e2aIssue\u9875\u9762\u8fdb\u884c\u56de\u590d\u5b8c\u6210\uff0cRD\u9a8c\u6536\u901a\u8fc7\u540e\u5373\u89c6\u4f5c\u5b8c\u6210\uff0c\u5e76\u5728\u5f53\u5929\u66f4\u65b0\u5728issue\u6392\u884c\u699c\u3002\r\n\r\n* \u4efb\u52a1\u8fbe\u6210\u6807\u51c6\uff1a\u5b8c\u6210\u5c3d\u53ef\u80fd\u591a\u7684\u4efb\u52a1\uff0c\u5b8c\u6210\u60c5\u51b5\u6bcf\u5929\u90fd\u4f1a\u66f4\u65b0\u5230\u4efb\u52a1\u653b\u514b\u603b\u699c\uff08Issue\u89e3\u7b54\u3001\u4ee3\u7801\u5f00\u53d1\uff09\uff0c\u5b8c\u6210\u547d\u9898\u4efb\u52a1\u7684\u6570\u91cf\u7531:dart:\u8ba4\u8bc1\r\n\r\n* \u4efb\u52a1\u5217\u8868\r\n\r\n#### 23\u5e74Q4\u4efb\u52a1\r\n\r\n| \u4efb\u52a1\u540d\u79f0</br>\uff08\u9700\u6c42\u63d0\u51fa\u8005\uff09 | \u4efb\u52a1\u63cf\u8ff0 | tracking issue | mentor | \u62a5\u540d | \r\n| ------------------------------------------------------------ | ------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |\r\n| MedicalSeg\u589e\u52a0\u6ed1\u7a97\u63a8\u7406\u529f\u80fd\uff08@tangshiyu\uff09| 3D\u533b\u7597\u56fe\u50cf\u4e2d\u7f3a\u5c11\u6ed1\u7a97\u63a8\u7406\u63a8\u7406\u529f\u80fd\uff0c\u6ed1\u7a97\u63a8\u7406\u53ef\u4ee5\u8fdb\u4e00\u6b65\u589e\u5f3a\u4efb\u610f\u6a21\u578b\u7684\u7cbe\u5ea6 | [PaddleSeg#3536](https://github.com/PaddlePaddle/PaddleSeg/issues/3536)| @shiyutang | | \r\n|~~\u65b0\u589eearly stop\u529f\u80fd \uff08@tangshiyu\uff09~~| ~~early stop\u4f5c\u4e3a\u4e00\u79cd\u6b63\u5219\u5316\u7684\u5de5\u5177\uff0c\u53ef\u4ee5\u7528\u4e8e\u6a21\u578b\u5f00\u53d1\u7684\u4f18\u5316\u8fc7\u7a0b\u4e2d\uff0c\u4f5c\u4e3a\u65b0\u589e\u529f\u80fd\u589e\u52a0paddleseg\u4e2d| [PaddleSeg#3537](https://github.com/PaddlePaddle/PaddleSeg/issues/3537)~~ | @shiyutang | @ooooo-create (\u5df2\u5b8c\u6210) | \r\n|\u589e\u52a0\u7c7b\u6fc0\u6d3b\u56fe \uff08@tangshiyu\uff09| \u6fc0\u6d3b\u56fe\u53ef\u89c6\u5316\u80fd\u591f\u53ef\u4ee5\u5e2e\u52a9\u7406\u89e3\u6df1\u5ea6\u5b66\u4e60\u6a21\u578b\u4efb\u52a1\u4e2d\u7684\u51b3\u7b56\u8fc7\u7a0b\u3002\u901a\u8fc7\u89c2\u5bdf\u6a21\u578b\u5173\u6ce8\u7684\u533a\u57df\uff0c\u53ef\u4ee5\u4e86\u89e3\u6a21\u578b\u662f\u5982\u4f55\u6839\u636e\u4e0d\u540c\u533a\u57df\u7684\u7279\u5f81\u6765\u8fdb\u884c\u5206\u7c7b\u51b3\u7b56\u7684\uff0c\u662f\u4e00\u9879\u5341\u5206\u6709\u610f\u4e49\u4e14\u91cd\u8981\u7684\u529f\u80fd| [PaddleSeg#3538](https://github.com/PaddlePaddle/PaddleSeg/issues/3538) | @shiyutang | | \r\n|\u589e\u52a0\u8bad\u7ec3\u56fe\u50cf\u3001\u63a8\u7406\u56fe\u50cf\u3001\u6807\u7b7e\u56fe\u50cf\u53ef\u89c6\u5316\uff08@Wst-sd\uff09| \u98de\u6868\u652f\u6301\u5f3a\u5927\u7684\u8bad\u7ec3\u53ef\u89c6\u5316\u5de5\u5177VisualDL\uff0c\u7528\u4e8e\u8bb0\u5f55\u548c\u76d1\u63a7\u8bad\u7ec3\u8fc7\u7a0b\uff0c\u53ef\u4ee5\u5728\u6bcf\u6b21\u6a21\u578b\u4fdd\u5b58\u8fc7\u7a0b\u4e2d\uff0c\u589e\u52a0\u8bad\u7ec3\u56fe\u50cf\u3001\u63a8\u7406\u56fe\u50cf\u3001\u6807\u7b7e\u56fe\u50cf\u53ef\u89c6\u5316\uff0c\u66f4\u76f4\u89c2\u5730\u611f\u53d7\u8bad\u7ec3\u6548\u679c| 
[PaddleSeg#3545](https://github.com/PaddlePaddle/PaddleSeg/issues/3545) | @shiyutang | | \r\n|CAT-Seg (CVPR'2023)\u6a21\u578b\u590d\u73b0\uff08@tangshiyu\uff09 | CAT-Seg\u662fopen-vocabulary semantic segmentation\u7684\u524d\u6cbf\u6a21\u578b\uff0c\u5176\u63d0\u51fa\u4e86\u4e00\u79cdcost aggregation\u65b9\u6cd5\u5c06CLIP\u8868\u5f81\u5e94\u7528\u4e8e\u50cf\u7d20\u7ea7\u5206\u5272\u4efb\u52a1\uff0c\u5728\u591a\u4e2a\u6570\u636e\u96c6\u4e0a\u8fbe\u5230\u4e86\u5f00\u653e\u96c6\u5206\u5272\u7684SOTA| [PaddleSeg#3535](https://github.com/PaddlePaddle/PaddleSeg/issues/3535) | @shiyutang | | \r\n|VPD\u6a21\u578b+\u4e0b\u6e38\u4efb\u52a1\uff08\u89c6\u89c9\u611f\u77e5\u3001\u56fe\u50cf\u5206\u5272\u3001\u6df1\u5ea6\u4f30\u8ba1\uff09\uff08@tangshiyu\uff09 | VPD\u662f\u7ed3\u5408Diffusion Models\u7684\u56fe\u6587\u9884\u8bad\u7ec3\u6a21\u578b\uff0c\u53ef\u4ee5\u5e7f\u6cdb\u7684\u5e94\u7528\u4e8e\u4e0b\u6e38\u4efb\u52a1\uff0c\u5982\u89c6\u89c9\u611f\u77e5\u3001\u56fe\u50cf\u5206\u5272\u3001\u6df1\u5ea6\u4f30\u8ba1\u7b49\u7b49\uff0c\u4e14\u5747\u53d6\u5f97\u4e86\u4e0d\u9519\u7684\u6548\u679c\u3002\u53ef\u4ee5\u5c06VPD\u63a5\u5165PaddleSeg\u4e2d\uff0c\u5e76\u5e94\u7528\u4e8e\u4e0b\u6e38\u4efb\u52a1\u4e2d| [PaddleSeg#3540](https://github.com/PaddlePaddle/PaddleSeg/issues/3540) | @shiyutang | | \r\n|\u65b0\u589e\u56fe\u6587\u5bf9\u8bdd\u6a21\u578bX-GPT \uff08@tangshiyu\uff09| X-Decoder \u96c6\u6210\u4e86\u56fe\u50cf\u7406\u89e3\u7684\u591a\u7c7b\u4efb\u52a1\uff0c\u7ed3\u5408GPT\u548cSD\u76f8\u5173\u751f\u6210\u6a21\u578b\u5c31\u53ef\u4ee5\u5b9e\u73b0All-in-One\u7684\u56fe\u6587\u5bf9\u8bdd\u5f0fagnet| [PaddleSeg#3541](https://github.com/PaddlePaddle/PaddleSeg/issues/3541) | @shiyutang | | \r\n|\u9a8c\u8bc1\u5e76\u63d0\u5347SAM+Clip\u5728\u8bed\u4e49\u5206\u5272\u573a\u666f\u4e0b\u7684zero-shot\u5206\u5272\u7cbe\u5ea6 \uff08@tangshiyu\uff09| \u4ee5\u8bed\u4e49\u5206\u5272\u4e3a\u4ee3\u8868\u7684\u89c6\u89c9\u4efb\u52a1\u5b58\u5728\u6cdb\u5316\u6027\u5dee\u7684\u95ee\u9898\uff0c\u5373\u6bcf\u6b21\u5728\u65b0\u6570\u636e\u4e0a\u90fd\u9700\u8981\u91cd\u65b0\u8bad\u7ec3\u3002\u5927\u6a21\u578b\u7684\u53d1\u5c55\u5229\u7528\u56fe\u6587\u94fe\u63a5\u7684\u5f62\u5f0f\u5927\u5927\u63d0\u5347\u4e86\u6a21\u578b\u7684\u6cdb\u5316\u6027\uff0c\u4f46\u662f[\u524d\u6cbf\u8bba\u6587](https://paperswithcode.com/paper/learning-mask-aware-clip-representations-for)\u5bf9\u4e8ezero-shot\u7684\u7814\u7a76\u8868\u660e\uff0c\u5b8c\u5168\u7684zero-shot\u7684\u5206\u5272\u7cbe\u5ea6\u4f9d\u65e7\u8f83\u4f4e\u3002\u56e0\u6b64\u6211\u4eec\u501f\u7528clip\u4e2d\u5bf9zero-shot\u7684\u5b9a\u4e49\uff0c\u5373\u5728\u672a\u89c1\u8fc7\u7684\u56fe\u7247\u800c\u975e\u662f\u672a\u89c1\u8fc7\u7684\u7c7b\u522b\u4e0a\uff0c\u67e5\u770bCLIP+SAM\u6a21\u578b\u7684\u5206\u5272\u6548\u679c\uff08\u8fd9\u4e00\u5b9a\u4e49\u4e5f\u5341\u5206\u6709\u5b9e\u7528\u610f\u4e49\uff09\uff0c\u5e76\u501f\u7528[\u524d\u6cbf\u8bba\u6587](https://paperswithcode.com/paper/learning-mask-aware-clip-representations-for)\u7684\u601d\u60f3\u5bf9baseline\u8fdb\u4e00\u6b65\u4f18\u5316\u3002\u8fd9\u4e00\u4e3e\u52a8\u5c06\u9a8c\u8bc1\u5e76\u4f18\u5316\u8bed\u4e49\u5206\u5272\u6a21\u578b\u5728\u672a\u89c1\u8fc7\u7684\u6570\u636e\u4e0a\u7684\u6cdb\u5316\u6027| [PaddleSeg#3542](https://github.com/PaddlePaddle/PaddleSeg/issues/3542) | @shiyutang | | \r\n| \u3010Bug Fix\u3011humanseg\u663e\u5b58\u6cc4\u6f0f\uff08@enemy1205\uff09| 
\u4f7f\u7528PaddleSeg\u8fdb\u884c[\u4eba\u50cf\u5206\u5272](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.8/contrib/PP-HumanSeg)\u65f6\uff0c\u5bf9\u5927\u6279\u91cf\u6570\u636e\u8fdb\u884c\u4eba\u50cf\u5206\u5272\u63a8\u7406\u65f6\uff0c\u5185\u5b58\u91ca\u653e\u4e0d\u5145\u5206\uff0c\u51fa\u73b0\u5185\u5b58\u5806\u79ef\u95ee\u9898\uff0c\u89e6\u53d1Linux OOM\u673a\u5236\u5bfc\u81f4\u7a0b\u5e8f\u88abkill\u3002 | [PaddleSeg#3543](https://github.com/PaddlePaddle/PaddleSeg/issues/3543) | @shiyutang | | \r\n| \u3010Bug Fix\u3011modnet\u63a8\u7406\u95ee\u9898\uff08@munibkhanali\uff09| \u4f7f\u7528modnet\u8fdb\u884cimage matting\uff0c\u5728\u5c06\u5176\u8f6c\u6362\u4e3a paddlelite \u517c\u5bb9\u6a21\u578b\u65f6\uff0c\u51fa\u73b0\u62a5\u9519\uff0c\u5177\u4f53\u53c2\u8003\uff08[#3477](https://github.com/PaddlePaddle/PaddleSeg/issues/3477)\uff09 | [PaddleSeg#3544](https://github.com/PaddlePaddle/PaddleSeg/issues/3544) | @shiyutang | | \r\n| ~~\u8865\u5145Satrn\u8bc6\u522b\u6a21\u578b\u6587\u6863\uff08@tangshiyu\uff09~~| \u65b0\u589e\u7684Satrn\u8bc6\u522b\u6a21\u578b\u7f3a\u5c11\u8bf4\u660e\u6587\u6863\uff0c\u9002\u5408\u5f00\u6e90\u8d21\u732e\u7ecf\u5386\u8f83\u5c11\u7684\u540c\u5b66\u4e86\u89e3\u63d0\u4ea4PR\u8fc7\u7a0b\u5e76\u719f\u6089OCR\u6587\u6863 | [PaddleOCR#11131](https://github.com/PaddlePaddle/PaddleOCR/issues/11131) | @shiyutang | @wkml | \r\n| \u8865\u5145Satrn\u8bc6\u522b\u6a21\u578bTIPC\uff08@tangshiyu\uff09| \u65b0\u589e\u7684Satrn\u6a21\u578b\u7f3a\u5c11TIPC\uff0c\u5b8c\u6210tipc\u6709\u5229\u4e8e\u4e0a\u624b\u8bad\u63a8\u5168\u6d41\u7a0b\u81ea\u52a8\u5316\u811a\u672c\u9a8c\u8bc1\u8fc7\u7a0b | [PaddleOCR#11133](https://github.com/PaddlePaddle/PaddleOCR/issues/11133) | @shiyutang | | \r\n| \u589e\u52a0\u591a\u5361\u8bc4\u4f30\uff08@flytocc\uff09| \u76ee\u524dPaddleDetection\u4ec5\u652f\u6301\u5355\u5361\u8bc4\u4f30\uff0c\u5e0c\u671b\u652f\u6301\u591a\u5361\u8bc4\u4f30 | [PaddleDet#8682](https://github.com/PaddlePaddle/PaddleDetection/issues/8682) | @shiyutang | @MINGtoMING |\r\n| \u4e3aPaddleOCR\u589e\u52a0\u8bad\u7ec3\u65f6\u5468\u671f\u6027\u9a8c\u8bc1\u7684\u5f00\u5173\uff08@tangshiyu\uff09| \u4e3aPaddleOCR\u589e\u52a0\u8bad\u7ec3\u65f6\u5468\u671f\u6027\u9a8c\u8bc1\u7684\u5f00\u5173\uff1b\u4e3aPaddleOCR\u589e\u52a0eval_epoch_step\u53c2\u6570\u3002\u4e0ePaddleCV\u7684\u5176\u5b83\u57fa\u7840\u5957\u4ef6PaddleSeg\u3001PaddleDetection\u3001PaddleClas\u3001Paddle3D\u7b49\u4e0d\u540c\uff0cPaddleOCR\u4e0d\u652f\u6301\u4e0a\u8ff0\u529f\u80fd\uff0c\u8fd9\u5bfc\u81f4\u5305\u62ec\u4f46\u4e0d\u9650\u4e8e\u5982\u4e0b\u95ee\u9898\uff1a\u7528\u6237\u6709\u65f6\u53ea\u60f3\u8981\u5c06\u6a21\u578b\u8bad\u7ec3\u4e00\u5b9a\u7684\u8fed\u4ee3\u8f6e\u6570\uff0c\u5e76\u4e0d\u5e0c\u671b\u5728\u8bad\u7ec3\u65f6\u8fdb\u884c\u7cbe\u5ea6\u8bc4\u4f30\uff08\u8fd9\u53ef\u80fd\u5e26\u6765\u989d\u5916\u7684\u65f6\u95f4\u5f00\u9500\uff09\uff0c\u800c\u76ee\u524dPaddleOCR\u65e0\u6cd5\u4f18\u96c5\u5730\u6ee1\u8db3\u8fd9\u4e2a\u9700\u6c42\uff0c\u53ea\u80fd\u901a\u8fc7\u8bbe\u5b9a\u4e00\u4e2a\u8f83\u5927\u7684eval_batch_step\u6570\u503c\u6765\u5b9e\u73b0\u3002\u66f4\u6362\u6570\u636e\u96c6\u540e\uff0c\u7531\u4e8e\u6570\u636e\u96c6\u5927\u5c0f\u53d1\u751f\u6539\u53d8\uff0c\u7528\u6237\u5f80\u5f80\u4e5f\u9700\u8981\u4fee\u6539eval_batch_step\u914d\u7f6e\uff0c\u4ee5\u4f7f\u5f97eval\u9891\u7387\u5408\u9002\u3002PaddleOCR\u4e2d\u5b9e\u73b0\u7684\u662fepoch-based 
trainer\uff0c\u5728\u914d\u7f6e\u6587\u4ef6\u4e2d\u8bbe\u7f6e\u7684\u4e5f\u662fepoch_num\u800c\u4e0d\u662fnum_iters\uff0c\u4f46eval_batch_step\u5374\u662fiters\u7c92\u5ea6\u7684\u63a7\u5236\uff0c\u5b58\u5728\u98ce\u683c\u4e0d\u5951\u5408\u7684\u95ee\u9898\u3002 | [PaddleOCR#11132](https://github.com/PaddlePaddle/PaddleOCR/issues/11132) | @shiyutang | | \r\n\r\n#### 23\u5e74Q3\u4efb\u52a1\r\n\r\n| \u4efb\u52a1\u540d\u79f0</br>\uff08\u9700\u6c42\u63d0\u51fa\u8005\uff09 | \u4efb\u52a1\u63cf\u8ff0 | tracking issue | mentor | \u62a5\u540d | \r\n| ------------------------------------------------------------ | ------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |\r\n| ~~\u6587\u5b57\u8bc6\u522b\u8fd4\u56de\u5355\u5b57\u8bc6\u522b\u5750\u6807\uff08@EasyIsAllYouNeed @WilliamQf-AI\uff0c\u5df2\u5b8c\u6210\uff09~~ | \u5728\u6587\u672c\u8bc6\u522b\u4e4b\u540e\uff0c\u589e\u52a0\u5bf9\u5355\u5b57\u4f4d\u7f6e\u5750\u6807\u7684\u8fd4\u56de\uff0c\u53ef\u4ee5\u7528\u4e8e\u6587\u6863\u6bd4\u5bf9\u3001\u5408\u540c\u7be1\u6539\u7b49\u5927\u91cf\u573a\u666f\u4e2d\u3002 | [PaddleOCR#10377](https://github.com/PaddlePaddle/PaddleOCR/issues/10377) | @shiyutang | @ToddBear #10515 | \r\n|~~\u5957\u4ef6\u4e00\u81f4\u6027\u8ba1\u5212 **\u4efb\u52a1\u6709\u66f4\u65b0\u4e3a\u4e24\u4e2a\u5b50\u4efb\u52a1**\uff08@Bobholamovic \uff09~~ | \u5404\u5927CV\u5957\u4ef6\u76ee\u524d\u5728\u4f9d\u8d56\u5e93\u3001\u6a21\u578b\u4fdd\u5b58\u8def\u5f84\u7b49\u95ee\u9898\u4e0a\u5b58\u5728\u5f88\u591a\u4e0d\u4e00\u81f4\u6027\uff0c\u5bfc\u81f4\u6ca1\u6709\u529e\u6cd5\u8fbe\u5230\u73af\u5883\u7edf\u4e00\uff0c\u4f7f\u7528\u77e5\u8bc6\u8fc1\u79fb\u7b49\u6548\u679c\uff0c\u4f53\u9a8c\u6548\u679c\u53d8\u5dee\u3002\u6b64\u4efb\u52a1\u81f4\u529b\u89e3\u51b3\u8fd9\u4e2a\u95ee\u9898\uff0c\u540c\u65f6\u89e3\u51b3\u96be\u5ea6\u4e0d\u9ad8\uff0c\u662f\u4e00\u4e2a\u975e\u5e38\u9002\u5408\u4e0a\u624b\u7684\u4efb\u52a1| [PaddleOCR#10380](https://github.com/PaddlePaddle/PaddleOCR/issues/10380) | @shiyutang @Bobholamovic | @livingbody | \r\n| ~~\u3010\u8bba\u6587\u590d\u73b0\u3011Segment Anything \u52a0\u901f\u7248 MobileSAM\uff08@[qiaoyu1002](https://github.com/qiaoyu1002) \uff08\u5df2\u5b8c\u6210\uff09~~ | \u6839\u636e\u539f\u4f5c\u8005\u63d0\u51fa\u7684issue https://github.com/PaddlePaddle/PaddleSeg/issues/3346\uff0c \u590d\u73b0\u8bba\u6587[MobileSAM](https://arxiv.org/pdf/2306.14289.pdf)\u3002\u8be5\u6a21\u578b\u4e3a\u706b\u7206\u7684SAM\u6a21\u578b\u7684\u52a0\u901f\u7248\u672c\uff0c\u5927\u5927\u63d0\u5347\u4e86SAM\u7684\u4f7f\u7528\u4f53\u9a8c\uff0c\u8be5\u6a21\u578b\u76ee\u524d\u5df2\u7ecf\u67092.9k star\uff0c\u6a21\u578b\u3001\u4ee3\u7801\u5df2\u7ecf\u5f00\u6e90\uff0c\u53ea\u9700\u8fdb\u884c\u524d\u5411\u5bf9\u9f50\u5373\u53ef | [PaddleOCR#10451](https://github.com/PaddlePaddle/PaddleOCR/issues/10451) | @shiyutang | @Asthestarsfalll [PaddleSeg#3349](https://github.com/PaddlePaddle/PaddleSeg/pull/3349) |\r\n| ~~\u3010\u8bba\u6587\u590d\u73b0\u3011OCR\u8bc6\u522b\u6a21\u578b[Parseq](https://arxiv.org/abs/2207.06966)\uff08@printfxs\uff09\uff08\u5df2\u5b8c\u6210\uff09~~ | \u8be5\u6a21\u578b\u5c06\u89c6\u89c9\u548c\u8bed\u4e49\u4fe1\u606f\u7ed3\u5408\uff0c\u5b9e\u73b0\u7cbe\u5ea6\u548c\u901f\u5ea6\u7684\u53cc\u91cd\u63d0\u5347\uff0c\u5bf9\u6bd4\u524d\u6cbf\u6a21\u578bSVTR\u6709\u8fdb\u4e00\u6b65\u4f18\u52bf | [PaddleOCR#10452](https://github.com/PaddlePaddle/PaddleOCR/issues/10452) | @shiyutang 
| @ToddBear |\r\n|~~\u3010\u8bba\u6587\u590d\u73b0\u3011\u68c0\u6d4b\u6a21\u578b\u7b56\u7565--\u57fa\u4e8ePPDET Deformable DETR\u590d\u73b0SQR\u589e\u5f3a\u7b56\u7565(@lyuwenyu )~~ | \u4e3aPaddledet\u589e\u52a0\u524d\u6cbf\u7b56\u7565SQR\uff0c\u53ef\u4ee5\u5e94\u7528\u5728\u591a\u4e2a\u6a21\u578b\u4e2d | [PaddleDetection#8498](https://github.com/PaddlePaddle/PaddleDetection/issues/8498) | @shiyutang @juncaipeng | @flytocc |\r\n |~~\u3010\u8bba\u6587\u590d\u73b0\u3011\u5206\u7c7b\u6a21\u578b--\u591a\u6807\u7b7e\u5206\u7c7b\u4efb\u52a1ML-Decoder (@cuicheng01 @zhangyubo0722)\uff08\u5df2\u5b8c\u6210\uff09~~| \u8be5\u8bba\u6587\u63d0\u51fa\u7684\u53ef\u6269\u5c55\u901a\u7528\u5206\u7c7b\u5934\u5728\u591a\u6807\u7b7e\u5206\u7c7b\u3001zero-sho\u4ee5\u53ca\u5355\u6807\u7b7e\u5206\u7c7b\u4efb\u52a1\u4e0a\u8868\u73b0\u51fa\u5f88\u597d\u7684\u6548\u679c\u3002\u672c\u4efb\u52a1\u7684\u5b8c\u6210\u53ef\u4ee5\u6269\u5145PaddleClas\u591a\u6807\u7b7e\u5206\u7c7b\u76f8\u5173\u89c6\u89c9\u4efb\u52a1\uff0c\u5e76\u6709\u4f17\u591a\u5e94\u7528\u573a\u666f\u3002\u4f5c\u8005\u56e2\u961f\u57fa\u4e8e\u4e0d\u540c\u6570\u636e\u96c6\u9a8c\u8bc1\u4e0d\u540c\u4efb\u52a1\u7684\u6027\u80fd\uff0c\u5145\u5206\u8bc1\u660eML-Decoder\u5206\u7c7b\u5934\u7684\u6027\u80fd\u4ee5\u53ca\u6cdb\u7528\u6027\u3002 | [PaddleClas#2896](https://github.com/PaddlePaddle/PaddleClas/issues/2896) | @cuicheng01 @shiyutang | @MINGtoMING |\r\n|\u3010\u6a21\u578b\u538b\u7f29\u63a8\u5168\u8ba1\u5212\u3011\u4e3a\u516d\u5927\u5957\u4ef6\u65b0\u589e\u6a21\u578b\u538b\u7f29\u529f\u80fd\uff08@shiyutang\uff09| \u76ee\u524d\u5404\u5957\u4ef6\u7684\u6a21\u578b\u538b\u7f29\u80fd\u529b\u53c2\u5dee\u4e0d\u9f50\uff0c\u800c\u6a21\u578b\u538b\u7f29\u4f5c\u4e3a\u90e8\u7f72\u4e4b\u524d\u7684\u4e00\u6b65\uff0c\u53ef\u4ee5\u5728\u4e0d\u635f\u5bb3\u6216\u8005\u5c11\u91cf\u635f\u5bb3\u6a21\u578b\u7cbe\u5ea6\u7684\u60c5\u51b5\u4e0b\uff0c\u5bf9\u6a21\u578b\u7684\u80fd\u8017\uff0c\u901f\u5ea6\u3001\u5927\u5c0f\u90fd\u6709\u663e\u8457\u7684\u6539\u5584\u3002\u56e0\u6b64\u4e3a\u4e86\u5bf9\u5404\u5957\u4ef6\u7684\u6a21\u578b\u538b\u7f29\u8fdb\u884c\u63a8\u5168\uff0c\u6211\u4eec\u63d0\u51fa\u4e86\u57fa\u4e8ePaddleSlim\u7684ACT\u4e3a\u5404\u5927\u5957\u4ef6\u65b0\u589e\u6a21\u578b\u538b\u7f29\u529f\u80fd\u7684\u8ba1\u5212\u3002| [PaddleOCR#10657](https://github.com/PaddlePaddle/PaddleOCR/issues/10657) | @shiyutang | \u5728issue\u9875\u9762\u62a5\u540d | \r\n | ~~\u4e3aPaddleSeg\u6dfb\u52a0\u591a\u6807\u7b7e\u8bed\u4e49\u5206\u5272\u7684\u529f\u80fd\uff08@Wulx2050\uff09~~| \u591a\u6807\u7b7e\u5206\u5272\u662f\u5206\u5272\u4e2d\u7684\u4e00\u4e2a\u5206\u652f\uff0c\u5e38\u7528\u4e8e\u533b\u7597\u5206\u5272\u4e2d\uff0c\u901a\u8fc7\u4fee\u6539\u5206\u5272\u5934\u548c\u635f\u5931\u51fd\u6570\u5373\u53ef\u5b9e\u73b0\u3002| [PaddleSeg#3456](https://github.com/PaddlePaddle/PaddleSeg/issues/3456) | @shiyutang | @MINGtoMING | \r\n\r\n#### 2. Good first issue\r\n* \u4efb\u52a1\u8bf4\u660e\uff1a\u901a\u5e38\u662f\u4e00\u4e9b\u5bf9\u4e8e\u6587\u6863\u4e0d\u719f\u6089\u3001\u4ee3\u7801\u8fd0\u884c\u62a5\u9519\u3001bug \u7684\u4fee\u590d\u7b49\uff0c\u4f60\u53ef\u4ee5\u901a\u8fc7\u5b8c\u6210\u8fd9\u4e2a ISSUE/PR \u6765\u8e0f\u51fa\u8d21\u732e\u4ee3\u7801\u7684\u7b2c\u4e00\u6b65\u3002\r\n\r\n* \u505a\u4efb\u52a1\u6d41\u7a0b\uff1a\r\n 1. \u5728\u672c\u6761Issue\u9875\u9762\u8fdb\u884c\u62a5\u540d\u3002\r\n 2. 
\u52a0\u4e00\u4e0b\u98de\u6868\u5957\u4ef6\u7814\u53d1\u7684\u5fae\u4fe1\uff1atransy-k \u52a0\u5165\u5230CV\u5957\u4ef6\u5efa\u8bbe\u603b\u7fa4\uff0c\u5728\u5b8c\u6210\u4efb\u52a1\u4e2d\u6709\u4efb\u4f55\u95ee\u9898\u90fd\u53ef\u4ee5\u8fdb\u884c\u53cd\u9988\uff0c\u4f1a\u6709\u6a21\u578b\u5957\u4ef6\u65b9\u5411\u7684RD\u8fdb\u884c\u89e3\u7b54\u3002\r\n 3. \u56de\u590dissue\uff0c\u8ba4\u4e3a\u56de\u7b54\u6b63\u786e\u540e\u672c\u9875\u9762\u8fdb\u884c\u56de\u590d\u5b8c\u6210\uff0cRD\u9a8c\u6536\u901a\u8fc7\u540e\u5373\u5b8c\u6210\u4e00\u6761\uff0c\u5e76\u5728\u5f53\u5929\u66f4\u65b0\u5728\u4efb\u52a1\u5b8c\u6210\u6392\u884c\u699c\u3002\r\n\r\n* \u4efb\u52a1\u8fbe\u6210\u6807\u51c6\uff1a\u5b8c\u6210\u5c3d\u53ef\u80fd\u591a\u7684issue\uff0c\u5b8c\u6210\u60c5\u51b5\u6bcf\u5929\u90fd\u4f1a\u66f4\u65b0\u5230\u4efb\u52a1\u653b\u514b\u603b\u699c\uff08Issue\u89e3\u7b54\u3001\u4ee3\u7801\u5f00\u53d1\uff09\uff0c\u5982\u679c\u5728\u6b64\u57fa\u7840\u4e0a\u989d\u5916\u63d0\u51fa\u4e86PR\u5e76\u5408\u5165\u7684\u8fdb\u884c\u989d\u5916\u52a0\u661f\ud83c\udf1f\u3002\r\n\r\n* \u4efb\u52a1\u5217\u8868\uff1a\r\n 1. PaddleOCR Repo\uff1a [good first issue](https://github.com/PaddlePaddle/PaddleOCR/issues)\r\n 2. PaddleSeg Repo\uff1a[good first issue](https://github.com/PaddlePaddle/PaddleSeg/issues?q=is%3Aissue+is%3Aopen+label%3AGoodFirstIssue)\r\n\r\n\r\n## \u62a5\u540d\u6a21\u7248\r\n\u961f\u4f0d\u540d\uff1aXXX\r\n\u961f\u4f0d\u6210\u5458\u5fae\u4fe1\u6635\u79f0\uff1aXX\r\n\u529f\u80fd\u63cf\u8ff0\uff1a\uff08\u53ef\u9009\uff09\u63cf\u8ff0\u60f3\u8981\u5b9e\u73b0\u7684\u529f\u80fd\r\n\u3010\u63d0\u4ea4\u65f6\u8865\u5145\u3011issue/PR\u5730\u5740\uff1aGithub\u94fe\u63a5\r\n\r\n\r\n## \ud83d\udca1 \u6b22\u8fce\u63d0\u51fa\u4f60\u7684\u60f3\u6cd5\r\n* \u6b22\u8fce\u5411\u5957\u4ef6\u65b9\u5411\u7684\u5efa\u8bbe\u63d0\u51fa\u4f60\u7684\u60f3\u6cd5\uff0c\u65e0\u8bba\u662f\u5bf9\u5404\u5927\u5957\u4ef6\u60f3\u63d0\u51fa\u65b0\u7684\u9700\u6c42\uff0c\u8fd8\u662f\u5bf9\u6211\u4eec\u5efa\u8bbe\u65b9\u5411\u7684\u5efa\u8bae\uff0c\u90fd\u6b22\u8fce\u8e0a\u8dc3\u63d0\u51fa\u4f60\u7684\u610f\u89c1\u3002\u5173\u4e8e\u65b0\u589e\u9700\u6c42\u6216\u95ee\u9898\u53ef\u4ee5\u5728issue\u4e2d\u63d0\u51fa\u3002\u4f60\u7684\u9700\u6c42\u548c\u5efa\u8bae\u4e5f\u53ef\u80fd\u6210\u4e3a\u6211\u4eec\u540e\u7eed\u53d1\u5e03\u7684\u4efb\u52a1\uff0c\u5927\u5bb6\u53ef\u4ee5\u7fa4\u7b56\u7fa4\u529b\u4e00\u8d77\u5b9e\u73b0\u3002\r\n", "pr_html_url": "https://github.com/PaddlePaddle/PaddleOCR/pull/3261", "file_loc": {"base_commit": "2062b5097ce6800a6dc23fcc1648e128a27d6353", "files": [{"path": "PPOCRLabel/PPOCRLabel.py", "status": "modified", "Loc": {"('MainWindow', '__init__', 95)": {"add": [400], "mod": [568]}, "('MainWindow', None, 92)": {"add": [762]}}}, {"path": "PPOCRLabel/libs/utils.py", "status": "modified", "Loc": {"(None, 'stepsInfo', 162)": {"mod": [190]}}}, {"path": "PPOCRLabel/resources/strings/strings-zh-CN.properties", "status": "modified", "Loc": {"(None, None, None)": {"add": [91]}}}, {"path": "PPOCRLabel/resources/strings/strings.properties", "status": "modified", "Loc": {"(None, None, None)": {"add": [91]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["PPOCRLabel/libs/utils.py", "PPOCRLabel/PPOCRLabel.py"], "doc": [], "test": [], "config": [], "asset": ["PPOCRLabel/resources/strings/strings-zh-CN.properties", 
"PPOCRLabel/resources/strings/strings.properties"]}}, {"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "19446faaa12743f0a2f729a7beab0e561626f530", "iss_has_pr": 1, "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/841", "iss_label": "bug\ntriage", "title": "ValueError: Could not parse following text as code edit:", "body": "## Expected Behavior\r\n\r\nImprove the code\r\n\r\n## Current Behavior\r\n\r\nError gets thrown\r\n\r\n## Failure Information\r\n\r\nTraceback (most recent call last):\r\n\r\n File \"/home/riccardo/.local/bin/gpt-engineer\", line 8, in <module>\r\n sys.exit(app())\r\n\r\n File \"/home/riccardo/.local/lib/python3.10/site-packages/gpt_engineer/cli/main.py\", line 169, in main\r\n messages = step(ai, dbs)\r\n\r\n File \"/home/riccardo/.local/lib/python3.10/site-packages/gpt_engineer/core/steps.py\", line 588, in improve_existing_code\r\n overwrite_files_with_edits(messages[-1].content.strip(), dbs)\r\n\r\n File \"/home/riccardo/.local/lib/python3.10/site-packages/gpt_engineer/core/chat_to_files.py\", line 219, in overwrite_files_with_edits\r\n edits = parse_edits(chat)\r\n\r\n File \"/home/riccardo/.local/lib/python3.10/site-packages/gpt_engineer/core/chat_to_files.py\", line 268, in parse_edits\r\n return parse_all_edits(llm_response)\r\n\r\n File \"/home/riccardo/.local/lib/python3.10/site-packages/gpt_engineer/core/chat_to_files.py\", line 255, in parse_all_edits\r\n edits.append(parse_one_edit(current_edit))\r\n\r\n File \"/home/riccardo/.local/lib/python3.10/site-packages/gpt_engineer/core/chat_to_files.py\", line 240, in parse_one_edit\r\n raise ValueError(f\"Could not parse following text as code edit: \\n{text}\")\r\n\r\n### Steps to Reproduce\r\n\r\nI'm using this prompt:\r\n\r\n> Improve Code for Readability and Reusability:\r\n\r\n> \r\n> Refactor complex functions into smaller, more manageable pieces.\r\n> Use meaningful variable and function names that clearly indicate their purpose.\r\n> Follow a consistent coding style and adhere to best practices outlined in the project's style guide.\r\n> Implement design patterns where applicable to promote code reusability.\r\n> Implement TODOs Where Appropriate:\r\n> \r\n> Review the codebase for any // TODO: comments and prioritize their completion based on the project's goals.\r\n> Assess the impact of each TODO on the current codebase and potential future developments.\r\n> Document the reasoning behind the resolution of TODOs for future reference.\r\n> Add Comments Where Appropriate:\r\n> \r\n> Provide clear and concise comments for complex code blocks to explain the logic and its purpose.\r\n> Update or remove outdated comments that no longer reflect the current state of the code.\r\n> Use comments to outline the steps of complex algorithms or workflows within the code.\r\n> \r\n> Optimize Performance:\r\n> \r\n> Identify bottlenecks and optimize critical sections of the code for better performance.\r\n> Consider the time and space complexity of algorithms and refactor if more efficient solutions exist.\r\n> Utilize profiling tools to measure performance improvements.\r\n> Enhance Security:\r\n> \r\n> Review the code for potential security vulnerabilities and apply best practices to mitigate risks.\r\n> Ensure that all sensitive data is properly encrypted and that secure coding principles are followed.\r\n> Stay updated with the latest security advisories and apply patches or updates as necessary.\r\n> \r\n> Implement Unit Tests and Integration Tests:\r\n> \r\n> Write 
unit tests for new features and bug fixes to validate individual components.\r\n> Create integration tests to ensure that different parts of the application work together as expected.\r\n> Strive for a high level of test coverage to catch potential issues early.\r\n\r\nAlso, I got charged :'(\r\n![image](https://github.com/AntonOsika/gpt-engineer/assets/50662094/943d14d0-9a2e-4efb-a0eb-7c9de095f752)\r\n\r\n\r\n", "pr_html_url": "https://github.com/AntonOsika/gpt-engineer/pull/1005", "file_loc": {"base_commit": "19446faaa12743f0a2f729a7beab0e561626f530", "files": [{"path": "gpt_engineer/applications/cli/file_selector.py", "status": "modified", "Loc": {"('FileSelector', 'get_current_files', 327)": {"add": [354]}}}, {"path": "gpt_engineer/core/chat_to_files.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [39], "mod": [2, 4, 5, 6, 7, 9, 10, 11, 12, 13, 15, 16, 17, 18, 19, 21, 22, 23, 24, 25, 26, 27, 28, 29, 35, 36, 38]}, "(None, 'chat_to_files_dict', 43)": {"mod": [45, 47, 48, 50, 51, 52, 53, 55, 56, 57, 58, 60, 66, 69, 72, 75, 78, 81, 84]}, "(None, 'overwrite_code_with_edits', 87)": {"mod": [87, 89, 91, 92, 94, 95, 96, 97, 98, 99, 101, 102, 105, 106, 107, 108, 109, 112]}, "(None, 'parse_edits', 112)": {"mod": [114, 116, 117, 119, 120, 121, 122, 124, 125, 126, 127, 129, 130, 131, 132, 133, 135, 136, 137, 138, 152, 153, 154, 156, 157, 158, 159, 160, 161, 162, 163, 164, 166, 167, 169]}, "(None, 'parse_one_edit', 135)": {"mod": [140, 141, 142, 143, 144, 145, 147, 148, 150]}, "(None, 'apply_edits', 172)": {"mod": [172, 174, 176, 177, 179, 180, 181, 182, 183, 184, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206]}}}, {"path": "gpt_engineer/core/default/steps.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [28, 29, 47, 48, 49, 50, 51, 59]}, "(None, 'incorrect_edit', 256)": {"mod": [256, 257, 258, 260, 261, 262, 263, 264, 265, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289]}, "(None, 'improve', 292)": {"mod": [306, 328, 332, 334, 335, 339, 341, 344, 345]}}}, {"path": "gpt_engineer/core/files_dict.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [11]}, "('FilesDict', 'to_chat', 54)": {"mod": [55, 56, 57, 82, 83, 84, 85, 86, 87]}, "('FilesDict', 'format_file_to_input', 55)": {"mod": [59, 60, 62, 63, 64, 65, 66, 67, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80]}}}, {"path": "gpt_engineer/preprompts/improve", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1, 2, 4, 5, 6, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 23, 24, 25, 26, 27, 29, 30, 31, 32, 33, 34, 35, 36, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 50, 51, 52, 53, 54, 55, 56, 57, 58, 60, 62, 63, 65, 67, 69, 70, 71, 72, 73, 75, 77, 78, 79, 80]}}}, {"path": "projects/example-improve/controller.py", "status": "modified", "Loc": {"('Controller', 'handle_input', 9)": {"add": [13, 17], "mod": [10, 11, 12, 15, 16]}}}, {"path": "projects/example-improve/prompt", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1]}}}, {"path": "tests/applications/cli/test_main.py", "status": "modified", "Loc": {"('TestMain', 'test_improve_existing_project', 67)": {"mod": [83, 84]}}}, {"path": "tests/caching_ai.py", "status": "modified", "Loc": {"('CachingAI', 'next', 31)": {"mod": [69, 71]}}}, {"path": "tests/core/default/test_steps.py", "status": "modified", "Loc": {"('TestImprove', 'test_improve_existing_code', 265)": {"mod": [270, 271, 272, 273, 274, 275, 276, 277]}}}, 
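On the `ValueError: Could not parse following text as code edit` reported above: `parse_one_edit` raises as soon as one chunk of the model's answer fails to match the expected edit format, which aborts the whole improve step after the API cost has already been incurred. The sketch below illustrates the general "log and continue" mitigation on a deliberately simplified toy format (a filename line followed by a fenced code block); this is not gpt-engineer's actual edit grammar, which the linked PR #1005 appears to have reworked together with the parser.

```python
import logging
import re

logger = logging.getLogger(__name__)

FENCE = "`" * 3  # build the fence marker to avoid embedding it literally here

# Toy grammar: a filename on its own line, then one fenced code block.
EDIT_BLOCK = re.compile(
    rf"^(?P<path>\S+)\n{FENCE}[^\n]*\n(?P<code>.*?){FENCE}",
    re.DOTALL | re.MULTILINE,
)

def parse_edits_tolerantly(llm_response: str) -> list:
    """Collect (path, code) pairs, logging unparseable text instead of raising."""
    edits = [(m.group("path"), m.group("code")) for m in EDIT_BLOCK.finditer(llm_response)]
    if not edits:
        # Keep the run alive: one malformed chunk should not discard the
        # whole (already paid-for) model response.
        logger.warning("No parseable code edits in response:\n%s", llm_response)
    return edits
```

The trade-off of skipping malformed chunks is that some intended edits may be silently dropped, so logging the raw text for later inspection is the important half of the pattern.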
{"path": "tests/core/test_chat_to_files.py", "status": "modified", "Loc": {"(None, 'test_parse_with_additional_text', 146)": {"add": [170], "mod": [159, 161, 162, 163, 164, 165, 166, 167, 168, 169, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183]}, "(None, None, None)": {"add": [217], "mod": [1, 5, 6, 7, 8, 9, 10, 11, 14, 15, 16, 17, 18, 19]}, "(None, 'test_standard_input', 14)": {"mod": [22, 23, 24, 25, 27, 28, 29, 30, 31, 32, 35, 36, 37, 38, 41, 42, 43, 44, 45, 46, 48, 49, 50, 51, 52, 53, 54, 55, 58, 59, 60, 61, 62, 63, 64, 65, 68, 69, 70, 71, 72, 73, 75, 76, 77, 78, 79, 80, 81, 84, 85, 86, 88, 89, 90, 91, 92, 93, 96, 97, 98, 99, 100, 101, 102, 103, 104, 107, 108, 109, 110, 111, 112, 113, 114, 115, 118, 119, 120, 121, 122, 123, 124, 125, 126, 129, 130, 131, 132, 133, 134, 135, 137, 138, 140, 141, 142, 143, 146, 147, 148, 150, 151, 152, 153, 154, 155, 156]}, "(None, 'test_apply_overwrite_existing_file', 186)": {"mod": [186, 187, 188, 189, 190, 191]}, "(None, 'test_apply_edit_new_file', 194)": {"mod": [194, 195, 196, 197, 198]}, "(None, 'test_apply_edit_no_match', 201)": {"mod": [201, 202, 203, 204, 205, 206]}, "(None, 'test_apply_edit_multiple_matches', 209)": {"mod": [209, 210, 211, 212, 213, 215]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["tests/caching_ai.py", "gpt_engineer/applications/cli/file_selector.py", "projects/example-improve/controller.py", "gpt_engineer/core/files_dict.py", "gpt_engineer/core/default/steps.py", "gpt_engineer/core/chat_to_files.py"], "doc": [], "test": ["tests/core/test_chat_to_files.py", "tests/core/default/test_steps.py", "tests/applications/cli/test_main.py"], "config": [], "asset": ["gpt_engineer/preprompts/improve", "projects/example-improve/prompt"]}}, {"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "a248d8104eeb9deffc8c3819b376bfdcf6f8df83", "iss_has_pr": 1, "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/205", "iss_label": "good first issue", "title": "Run pytest in pre-commit", "body": "- Add requirement to pyproject.toml\r\n- Setup `.pre-commit-config.yaml` config\r\n- test that everything is working with `pre-commit run` and in github actions", "pr_html_url": "https://github.com/AntonOsika/gpt-engineer/pull/210", "file_loc": {"base_commit": "a248d8104eeb9deffc8c3819b376bfdcf6f8df83", "files": [{"path": ".github/workflows/pre-commit.yaml", "status": "modified", "Loc": {"(None, None, None)": {"mod": [13]}}}, {"path": ".pre-commit-config.yaml", "status": "modified", "Loc": {"(None, None, None)": {"add": [5], "mod": [12, 29, 30, 31, 32]}}}, {"path": "pyproject.toml", "status": "modified", "Loc": {"(None, None, None)": {"add": [11], "mod": [13, 15]}}}, {"path": "requirements.txt", "status": "modified", "Loc": {"(None, None, None)": {"add": [1, 3, 4], "mod": [6, 7]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": [], "test": [], "config": [".pre-commit-config.yaml", ".github/workflows/pre-commit.yaml", "requirements.txt", "pyproject.toml"], "asset": []}}, {"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "b27461a871c972ef1c6f080b4608331bc7b01255", "iss_has_pr": 1, "iss_html_url": 
"https://github.com/AntonOsika/gpt-engineer/issues/476", "iss_label": "", "title": "[Feature] Using a open-source LLM instead of Open AI ", "body": null, "pr_html_url": "https://github.com/AntonOsika/gpt-engineer/pull/639", "file_loc": {"base_commit": "b27461a871c972ef1c6f080b4608331bc7b01255", "files": [{"path": "README.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [83]}}}, {"path": "gpt_engineer/ai.py", "status": "modified", "Loc": {"(None, 'create_chat_model', 342)": {"mod": [368, 370, 371, 372, 373, 374, 375, 376, 377, 383]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["gpt_engineer/ai.py"], "doc": ["README.md"], "test": [], "config": [], "asset": []}}, {"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "bf206a5a1abeaa2b274a799e96933869e02d4c0a", "iss_has_pr": 1, "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/898", "iss_label": "bug", "title": "Incompatibility with Python 3.8 and 3.9: TypeError in file_store.py", "body": "## Policy and info\r\n- Maintainers will close issues that have been stale for 14 days if they contain relevant answers.\r\n- Adding the label \"sweep\" will automatically turn the issue into a coded pull request. Works best for mechanical tasks. More info/syntax at: https://docs.sweep.dev/\r\n\r\n## Expected Behavior\r\nThe project documentation states support for Python versions 3.8 - 3.11. I expect the software to run without syntax errors in these versions.\r\n\r\n## Current Behavior\r\nWhen attempting to run the project in Python 3.9, a TypeError occurs in `file_store.py` due to the use of the union operator `|` in type hints.\r\n\r\n## Failure Information\r\nThe project uses a syntax feature (`str | Path`) that is only available in Python 3.10 and later, leading to incompatibility with Python 3.8 and 3.9.\r\n\r\n### Steps to Reproduce\r\n1. Set up the project in a Python 3.9 environment.\r\n2. Follow the installation and setup instructions.\r\n3. Attempt to run the project, leading to the TypeError in `file_store.py`.\r\n\r\n### Failure Logs\r\n```\r\nTraceback (most recent call last):\r\n File \".../Scripts/gpt-engineer\", line 3, in <module>\r\n from gpt_engineer.applications.cli.main import app\r\n ... 
(additional traceback)\r\n File \".../file_store.py\", line 8, in FileStore\r\n def __init__(self, path: str | Path | None = None):\r\nTypeError: unsupported operand type(s) for |: 'type' and 'type'\r\n```\r\n", "pr_html_url": "https://github.com/AntonOsika/gpt-engineer/pull/909", "file_loc": {"base_commit": "bf206a5a1abeaa2b274a799e96933869e02d4c0a", "files": [{"path": "gpt_engineer/applications/cli/cli_agent.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1]}, "('CliAgent', 'improve', 125)": {"mod": [126]}}}, {"path": "gpt_engineer/applications/cli/learning.py", "status": "modified", "Loc": {"(None, 'human_review_input', 92)": {"mod": [92]}}}, {"path": "gpt_engineer/core/base_execution_env.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [2]}, "('BaseExecutionEnv', None, 7)": {"mod": [22]}}}, {"path": "gpt_engineer/core/base_memory.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [2, 4]}}}, {"path": "gpt_engineer/core/default/disk_execution_env.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [4]}, "('DiskExecutionEnv', None, 11)": {"mod": [23, 43]}}}, {"path": "gpt_engineer/core/default/disk_memory.py", "status": "modified", "Loc": {"('DiskMemory', None, 41)": {"mod": [250]}}}, {"path": "gpt_engineer/core/default/file_store.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [3]}, "('FileStore', None, 8)": {"mod": [9]}}}, {"path": "gpt_engineer/core/default/simple_agent.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [2]}, "('SimpleAgent', 'improve', 60)": {"mod": [64]}}}, {"path": "gpt_engineer/core/files_dict.py", "status": "modified", "Loc": {"('FilesDict', '__setitem__', 20)": {"mod": [34]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["gpt_engineer/core/default/file_store.py", "gpt_engineer/applications/cli/learning.py", "gpt_engineer/core/default/simple_agent.py", "gpt_engineer/core/default/disk_execution_env.py", "gpt_engineer/core/files_dict.py", "gpt_engineer/core/base_execution_env.py", "gpt_engineer/applications/cli/cli_agent.py", "gpt_engineer/core/default/disk_memory.py", "gpt_engineer/core/base_memory.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "7020fea81bef927fe4184e351be12aedf32e7545", "iss_has_pr": 1, "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/758", "iss_label": "bug\nsweep", "title": "UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 436: character maps to <undefined>", "body": "## Expected Behavior\r\n\r\ngpt-engineer \"path\" -i command to work properly\r\n\r\n## Current Behavior\r\n\r\nError after \"Press enter to proceed with modifications.\"\r\n\r\n### Steps to Reproduce\r\n\r\nwindows\r\npython 3.9 \r\n\r\n### Failure Logs\r\n\r\nTraceback (most recent call last):\r\n\r\n File \"C:\\tools\\Anaconda3\\envs\\gpteng\\lib\\runpy.py\", line 197, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n\r\n File \"C:\\tools\\Anaconda3\\envs\\gpteng\\lib\\runpy.py\", line 87, in _run_code\r\n exec(code, run_globals)\r\n\r\n File \"C:\\tools\\Anaconda3\\envs\\gpteng\\Scripts\\gpt-engineer.exe\\__main__.py\", line 7, in <module>\r\n sys.exit(app())\r\n\r\n File \"C:\\tools\\Anaconda3\\envs\\gpteng\\lib\\site-packages\\gpt_engineer\\main.py\", line 96, in main\r\n 
messages = step(ai, dbs)\r\n\r\n File \"C:\\tools\\Anaconda3\\envs\\gpteng\\lib\\site-packages\\gpt_engineer\\steps.py\", line 360, in improve_existing_code\r\n files_info = get_code_strings(dbs.input) # this only has file names not paths\r\n\r\n File \"C:\\tools\\Anaconda3\\envs\\gpteng\\lib\\site-packages\\gpt_engineer\\chat_to_files.py\", line 113, in get_code_strings\r\n file_data = file.read()\r\n\r\n File \"C:\\tools\\Anaconda3\\envs\\gpteng\\lib\\encodings\\cp1252.py\", line 23, in decode\r\n return codecs.charmap_decode(input,self.errors,decoding_table)[0]\r\n\r\nUnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 436: character maps to <undefined>\n\n\n<details open>\n<summary>Checklist</summary>\n\n- [X] ``gpt_engineer/core/chat_to_files.py:get_code_strings`` \u2705 Commit [`83c9784`](https://github.com/AntonOsika/gpt-engineer/commit/83c97847c89a1c4336f8c824a6b34aa54de17f33)\n</details>\n", "pr_html_url": "https://github.com/AntonOsika/gpt-engineer/pull/801", "file_loc": {"base_commit": "7020fea81bef927fe4184e351be12aedf32e7545", "files": [{"path": "gpt_engineer/core/chat_to_files.py", "status": "modified", "Loc": {"(None, 'get_code_strings', 140)": {"mod": [179]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["gpt_engineer/core/chat_to_files.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "ebfa59e4f462b1503d9706d3282a6b9751b3dcd7", "iss_has_pr": 1, "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/754", "iss_label": "bug", "title": "the code fails after giving additional information at the questions.", "body": "## Policy and info\r\n - Maintainers will close issues that have been stale for 14 days if they contain relevant answers.\r\n - Adding the label \"sweep\" will automatically turn the issue into a coded pull request. Works best for mechanical tasks. More info/syntax at: https://docs.sweep.dev/\r\n\r\n## Expected Behavior\r\nThat I get some feedback, e.g. 
code that I can use.\r\n\r\n## Current Behavior\r\nFails after the additional questions.\r\n\r\n## Failure Information\r\nNothing more to clarify.\r\nTraceback (most recent call last):\r\n\r\n File \"/Users/tom/Library/Python/3.9/bin/gpt-engineer\", line 8, in <module>\r\n sys.exit(app())\r\n\r\n File \"/Users/tom/Library/Python/3.9/lib/python/site-packages/gpt_engineer/main.py\", line 96, in main\r\n messages = step(ai, dbs)\r\n\r\n File \"/Users/tom/Library/Python/3.9/lib/python/site-packages/gpt_engineer/steps.py\", line 192, in gen_clarified_code\r\n messages = AI.deserialize_messages(dbs.logs[clarify.__name__])\r\n\r\n File \"/Users/tom/Library/Python/3.9/lib/python/site-packages/gpt_engineer/ai.py\", line 216, in deserialize_messages\r\n return list(messages_from_dict(json.loads(jsondictstr))) # type: ignore\r\n\r\n File \"/Users/tom/Library/Python/3.9/lib/python/site-packages/langchain/schema/messages.py\", line 351, in messages_from_dict\r\n return [_message_from_dict(m) for m in messages]\r\n\r\n File \"/Users/tom/Library/Python/3.9/lib/python/site-packages/langchain/schema/messages.py\", line 351, in <listcomp>\r\n return [_message_from_dict(m) for m in messages]\r\n\r\n File \"/Users/tom/Library/Python/3.9/lib/python/site-packages/langchain/schema/messages.py\", line 331, in _message_from_dict\r\n return AIMessage(**message[\"data\"])\r\n\r\n File \"/Users/tom/Library/Python/3.9/lib/python/site-packages/langchain/load/serializable.py\", line 90, in __init__\r\n super().__init__(**kwargs)\r\n\r\n File \"/Users/tom/Library/Python/3.9/lib/python/site-packages/pydantic/v1/main.py\", line 341, in __init__\r\n raise validation_error\r\n\r\npydantic.v1.error_wrappers.ValidationError: 1 validation error for AIMessage\r\nis_chunk\r\n unexpected value; permitted: False (type=value_error.const; given=True; permitted=(False,))\r\n\r\n\r\npython --version\r\nPython 3.11.6\r\n\r\nchatgpt API, 3.5-turbo\r\n\r\nPossibly related waring/issue I get is:\r\nUsers/tom/Library/Python/3.9/lib/python/site-packages/urllib3/__init__.py:34: NotOpenSSLWarning: urllib3 v2.0 only supports OpenSSL 1.1.1+, currently the 'ssl' module is compiled with 'LibreSSL 2.8.3'. See: https://github.com/urllib3/urllib3/issues/3020\r\n\r\n\r\n### Steps to Reproduce\r\n\r\nIf possible, provide detailed steps for reproducing the issue.\r\n\r\n1. I have a prompt file (no extension) in a folder\r\n2. I run gpt-engineer folder\r\n3. I get additional questions\r\n4. After that, it fails everytime, tried 3 different folders with different prompts, I once skipped the questions, other times I answered them all.. 
fail everytime.\r\n\r\n### Failure Logs\r\n\r\nAny relevant log snippets or files here.\r\n", "pr_html_url": "https://github.com/AntonOsika/gpt-engineer/pull/769", "file_loc": {"base_commit": "ebfa59e4f462b1503d9706d3282a6b9751b3dcd7", "files": [{"path": "gpt_engineer/core/ai.py", "status": "modified", "Loc": {"('AI', 'deserialize_messages', 329)": {"mod": [343]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["gpt_engineer/core/ai.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "dc24bb846464f953e8bb2dbcbcb6ad4faaaeff32", "iss_has_pr": 1, "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/786", "iss_label": "bug", "title": "gpt-engineer doesn't respect the COLLECT_LEARNINGS_OPT_OUT=true env variable", "body": "## Policy and info\r\n - Maintainers will close issues that have been stale for 14 days if they contain relevant answers.\r\n - Adding the label \"sweep\" will automatically turn the issue into a coded pull request. Works best for mechanical tasks. More info/syntax at: https://docs.sweep.dev/\r\n\r\n## Expected Behavior\r\n\r\nWhen setting the environment variable COLLECT_LEARNINGS_OPT_OUT=true, no information should be transmitted back to the gpt-engineer developer.\r\n\r\n## Current Behavior\r\n\r\nBased on viewing the verbose execution output, it's clear that even with that environment variable set, information was transmitted back to the developer. On inspecting the consent methods, such as https://github.com/AntonOsika/gpt-engineer/blob/main/gpt_engineer/cli/learning.py#L172, it's clear that the environment variable is never referenced.\r\n\r\nThis is highly undesirable, considering that this is the mechanism for opting out of data collection described in the terms of use - https://github.com/AntonOsika/gpt-engineer/blob/main/TERMS_OF_USE.md.\r\n\r\n## Failure Information\r\n\r\nI've already transmitted too much information to the developer, and don't feel comfortable adding anything more.\r\n", "pr_html_url": "https://github.com/AntonOsika/gpt-engineer/pull/806", "file_loc": {"base_commit": "dc24bb846464f953e8bb2dbcbcb6ad4faaaeff32", "files": [{"path": "gpt_engineer/cli/learning.py", "status": "modified", "Loc": {"(None, 'check_consent', 149)": {"add": [161], "mod": [149, 157, 165, 168]}, "(None, 'human_review_input', 96)": {"mod": [106]}, "(None, 'collect_consent', 172)": {"mod": [172, 173, 174, 175, 177, 178, 179, 180, 181, 182, 183, 184, 186, 187, 188, 189, 190, 191, 194, 195, 196, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 215, 216, 218]}}}, {"path": "gpt_engineer/cli/main.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [40]}, "(None, 'main', 80)": {"mod": [174]}}}, {"path": "tests/test_collect.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0], "mod": [11]}, "(None, 'test_collect_learnings', 15)": {"mod": [16]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["gpt_engineer/cli/learning.py", "gpt_engineer/cli/main.py"], "doc": [], "test": ["tests/test_collect.py"], "config": [], "asset": []}}, {"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": 
"2058edb3cfb8764cf642d73035af4bb6c783b7e5", "iss_has_pr": 1, "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/670", "iss_label": "enhancement\ngood first issue", "title": "Make improve flag less intrusive by moving over files like \"all_output.txt\" and \"file_list\" to the .gpteng folder", "body": "This is done by simply using the new DB in #665 and writing to it", "pr_html_url": "https://github.com/AntonOsika/gpt-engineer/pull/720", "file_loc": {"base_commit": "2058edb3cfb8764cf642d73035af4bb6c783b7e5", "files": [{"path": "gpt_engineer/db.py", "status": "modified", "Loc": {"('DBs', None, 118)": {"add": [124]}}}, {"path": "gpt_engineer/main.py", "status": "modified", "Loc": {"(None, 'main', 27)": {"add": [78], "mod": [66, 68]}}}, {"path": "gpt_engineer/steps.py", "status": "modified", "Loc": {"(None, 'set_improve_filelist', 296)": {"mod": [298]}, "(None, 'assert_files_ready', 302)": {"mod": [306, 307]}, "(None, 'get_improve_prompt', 312)": {"mod": [327]}, "(None, 'improve_existing_code', 343)": {"mod": [349]}}}, {"path": "tests/steps/test_archive.py", "status": "modified", "Loc": {"(None, 'test_archive', 25)": {"mod": [27, 36]}}}, {"path": "tests/test_collect.py", "status": "modified", "Loc": {"(None, 'test_collect_learnings', 15)": {"mod": [22]}}}, {"path": "tests/test_db.py", "status": "modified", "Loc": {"(None, 'test_DBs_initialization', 21)": {"add": [36], "mod": [22]}, "(None, 'test_DBs_dataclass_attributes', 99)": {"add": [113], "mod": [100]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["gpt_engineer/db.py", "gpt_engineer/main.py", "gpt_engineer/steps.py"], "doc": [], "test": ["tests/steps/test_archive.py", "tests/test_db.py", "tests/test_collect.py"], "config": [], "asset": []}}, {"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "f84754d54ee311146c4f52b5e3ceb0fa8d0b731b", "iss_has_pr": 1, "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/563", "iss_label": "", "title": "It's only using python...", "body": "## Expected Behavior\r\n\r\nI've seen 3 or 4 issues here asking if gpt-engineer could use languages other than python. the answer was always something like \"yes, of course, it's chatgpt writing the code, so you can use everything\"\r\n\r\n## Current Behavior\r\n\r\nno matter what i do, it is always using python. 
even if i explicitly forbid it to use python, and stress it in the clarifications\r\nwhy?\r\n\r\n![image](https://github.com/AntonOsika/gpt-engineer/assets/21007980/be4ad96f-4ae8-4495-be03-a7152c4f7618)\r\n\r\n![image](https://github.com/AntonOsika/gpt-engineer/assets/21007980/a7984e8c-92d1-4584-b318-e5f43df9f92b)\r\n", "pr_html_url": "https://github.com/AntonOsika/gpt-engineer/pull/568", "file_loc": {"base_commit": "f84754d54ee311146c4f52b5e3ceb0fa8d0b731b", "files": [{"path": "gpt_engineer/preprompts/philosophy", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1, 4, 5, 6, 7]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "3", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["gpt_engineer/preprompts/philosophy"]}}, {"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "e55f84041c522b03ce09c958deb9822095b3e84e", "iss_has_pr": 1, "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/943", "iss_label": "documentation", "title": "Instructions for running it with local models is lacking.", "body": "## Policy and info\r\n - Maintainers will close issues that have been stale for 14 days if they contain relevant answers.\r\n - Adding the label \"sweep\" will automatically turn the issue into a coded pull request. Works best for mechanical tasks. More info/syntax at: https://docs.sweep.dev/\r\n\r\n\r\n## Description\r\nInstructions:\r\n\r\nRunning the Example[\uf0c1](https://gpt-engineer.readthedocs.io/en/latest/open_models.html#running-the-example)\r\nOnce the API is set up, you can find the host and the exposed TCP port by checking your Runpod dashboard.\r\n\r\nThen, you can use the port and host to run the following example using WizardCoder-Python-34B hosted on Runpod:\r\n\r\n OPENAI_API_BASE=http://<host>:<port>/v1 python -m gpt_engineer.cli.main benchmark/pomodoro_timer --steps benchmark TheBloke_WizardCoder-Python-34B-V1.0-GPTQ\r\n \r\n What is this example? What does it do? Whats gpt_engineer.cli.main?\r\n \r\n How do i run the main command \"gpte projects/my-new-project\" after i have a local llm runing on localhost:8000?\r\n\r\n## Suggestion\r\nPlease provide more step by step instructions. 
\r\n", "pr_html_url": "https://github.com/AntonOsika/gpt-engineer/pull/1082", "file_loc": {"base_commit": "164730a5b933ec0ebc9003c72f60e58176ef0dc6", "files": [{"path": "docs/open_models.md", "status": "modified", "Loc": {"(None, None, 17)": {"add": [17]}, "(None, None, 21)": {"add": [21]}, "(None, None, 4)": {"mod": [4]}, "(None, None, 9)": {"mod": [9]}, "(None, None, 12)": {"mod": [12]}, "(None, None, 14)": {"mod": [14]}, "(None, None, 16)": {"mod": [16]}, "(None, None, 19)": {"mod": [19]}}}, {"path": "gpt_engineer/applications/cli/main.py", "status": "modified", "Loc": {"(None, 'main', 247)": {"add": [474]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "3", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["gpt_engineer/applications/cli/main.py"], "doc": ["docs/open_models.md"], "test": [], "config": [], "asset": []}}, {"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "29e891c1a7bc6a0a46f8ce9d337a1b4bb82dcf85", "iss_has_pr": 1, "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/650", "iss_label": "enhancement\ngood first issue", "title": "Fix the \"improve\" prompt to make sure that it generates diffs, and parse and apply those diffs to the existing codebase", "body": "One way to do this is to write the prompt for gpt-engineer with `-i` flag to annotate each codeblock with one of:\r\n\r\n1. `NEW CODE`\r\n2. `REPLACING ONE FUNCTION`\r\n\r\nIf 1., the generated code can just be written to a new file (or appended to an existing file).\r\n\r\nIf it is replacing an existing function, we could make sure to find the name of the function that is being replaced using an AST parser (see how [here](https://chat.openai.com/share/71012377-7ebb-47f2-a8fc-7d1bfd4fabe2))\r\n\r\n\r\n## Why this is necessary\r\nAs an example, I tried to use it on the project itself and got a codeblock that was just changing one of the function (so it should not be used to overwrite the entire file)\r\n\r\n## How to do it\r\n\r\nWe can take inspiration from Aider, that generates diffs, or sweep in how they prompt for \"<copy_lines>\" and [parse the GPT4 output here](https://github.com/sweepai/sweep/blob/e384c9fc3e0278257324c4ce57a888fa64f071b7/sweepai/utils/diff.py#L113)\r\n\r\nShould be quite straightforward!", "pr_html_url": "https://github.com/AntonOsika/gpt-engineer/pull/714", "file_loc": {"base_commit": "29e891c1a7bc6a0a46f8ce9d337a1b4bb82dcf85", "files": [{"path": "README.md", "status": "modified", "Loc": {"(None, None, None)": {"mod": [55, 56, 57, 58, 59, 62, 63, 64, 65]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []}}, {"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "8e95858f3867faf1198c0631bd060172991bb523", "iss_has_pr": 1, "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/872", "iss_label": "enhancement\ntriage", "title": "Default launch command is too cumbersome", "body": "## Policy and info\r\n - good first issue\r\n\r\n## Feature description\r\nCurrently, to use the tool `gpt-engineer` command has to be used. 
Although this can be resolved using an alias, would be nice to have a command such as `gpte` be available by default.\r\n\r\nCan refer https://clig.dev/#naming for more details.\r\n\r\n## Motivation/Application\r\nThis feature will make it very user friendly to use the command. Having to type dash (`-`) in `gpt-engineer` command is very cumbersome. \r\n", "pr_html_url": "https://github.com/AntonOsika/gpt-engineer/pull/889", "file_loc": {"base_commit": "8e95858f3867faf1198c0631bd060172991bb523", "files": [{"path": "README.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [74], "mod": [64, 65, 70, 71]}}}, {"path": "pyproject.toml", "status": "modified", "Loc": {"(None, None, None)": {"add": [62]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": ["pyproject.toml"], "asset": []}}, {"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "b60185ac6a02c1366324221eb143c9e37a64f1e6", "iss_has_pr": 1, "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/718", "iss_label": "", "title": "Separate `core` and `cli` into separate modules (directories) and only allow cli to import from core", "body": "The idea is to separate the core logic and CLI UX specific things. To make it easier to take decisions on what makes sense from UX perspective, and how the core building blocks should work.\r\n\r\nWould look something like:\r\n\r\n```\r\ngpt_engineer\r\n\u251c\u2500\u2500 core\r\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 ai.py\r\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 domain.py\r\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 chat_to_files.py\r\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 steps.py\r\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 db.py\r\n\u251c\u2500\u2500 cli\r\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 main.py\r\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 file_selector.py\r\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 learning.py\r\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 collect.py\r\n\u251c\u2500\u2500 api\r\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 main.py\r\n\u2514\u2500\u2500 preprompts\r\n \u00a0\u00a0 \u2514\u2500\u2500 ...\r\n```\r\n\r\nOne could use either:\r\n- PyCharm \"move\" automagic functionality\r\n- Or! 
gpt-engineer by adding new steps and configs, or somehow the existing -i flag", "pr_html_url": "https://github.com/AntonOsika/gpt-engineer/pull/766", "file_loc": {"base_commit": "fb35323551c3404283fdb04297f961a05a587caf", "files": [{"path": "evals/evals_existing_code.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [14, 15]}}}, {"path": "evals/evals_new_code.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [14]}}}, {"path": "gpt_engineer/api.py", "status": "renamed", "Loc": {"(None, None, None)": {"add": [0], "mod": [4, 5]}}}, {"path": "gpt_engineer/collect.py", "status": "renamed", "Loc": {"(None, None, None)": {"add": [0], "mod": [5, 6, 7, 8]}}}, {"path": "gpt_engineer/file_selector.py", "status": "renamed", "Loc": {"(None, None, None)": {"add": [0], "mod": [10]}, "('TerminalFileSelector', None, 134)": {"add": [134]}, "('DisplayablePath', None, 16)": {"mod": [18]}, "('TerminalFileSelector', 'display', 143)": {"mod": [145]}, "('TerminalFileSelector', 'ask_for_selection', 173)": {"mod": [175, 178]}}}, {"path": "gpt_engineer/learning.py", "status": "renamed", "Loc": {"(None, None, None)": {"add": [0], "mod": [13, 14]}}}, {"path": "gpt_engineer/main.py", "status": "renamed", "Loc": {"(None, None, None)": {"add": [0], "mod": [11, 12, 13, 14, 15]}}}, {"path": "gpt_engineer/ai.py", "status": "renamed", "Loc": {"(None, None, None)": {"add": [0, 24, 26]}, "('AI', None, 41)": {"add": [41, 188]}, "(None, 'serialize_messages', 430)": {"add": [430]}}}, {"path": "gpt_engineer/chat_to_files.py", "status": "renamed", "Loc": {"(None, None, None)": {"add": [0], "mod": [7, 8]}}}, {"path": "gpt_engineer/db.py", "status": "renamed", "Loc": {"(None, None, None)": {"add": [0]}, "('DB', None, 10)": {"add": [10]}}}, {"path": "gpt_engineer/domain.py", "status": "removed", "Loc": {}}, {"path": "gpt_engineer/steps.py", "status": "removed", "Loc": {}}, {"path": "scripts/rerun_edited_message_logs.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [8, 9]}}}, {"path": "tests/steps/test_archive.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [6]}}}, {"path": "tests/test_ai.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [3]}}}, {"path": "tests/test_chat_to_files.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [3]}}}, {"path": "tests/test_collect.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [9, 10, 11, 12]}}}, {"path": "tests/test_db.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [3]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["evals/evals_new_code.py", "gpt_engineer/learning.py", "gpt_engineer/db.py", "evals/evals_existing_code.py", "gpt_engineer/ai.py", "gpt_engineer/chat_to_files.py", "gpt_engineer/main.py", "scripts/rerun_edited_message_logs.py", "gpt_engineer/api.py", "gpt_engineer/steps.py", "gpt_engineer/domain.py", "gpt_engineer/collect.py", "gpt_engineer/file_selector.py"], "doc": [], "test": ["tests/test_ai.py", "tests/test_chat_to_files.py", "tests/steps/test_archive.py", "tests/test_collect.py", "tests/test_db.py"], "config": [], "asset": []}}, {"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "ba00896c5673990923abd0e99dba147938871512", "iss_has_pr": 1, "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/79", "iss_label": "", "title": "Analysis - Give context of a 
project to GPT Engineer", "body": "GPT Engineer is amazing. But right now the purpose is for small projects, projects where you need little implementations or requirements.\r\n\r\nBut... What about to give a full context of a project? If ChatGPT can understand what methods and classes has some projects on GitHub or packages in npm, maybe he can have a fully understand of a project and modify parts of it.\r\n\r\nWhat about limits of ChatGPT prompt? We can give some prompts in several windows to give fully understanding of what's going on. \r\n\r\nI can work on this if someone has the courage to develop it with me.", "pr_html_url": "https://github.com/AntonOsika/gpt-engineer/pull/465", "file_loc": {"base_commit": "ba00896c5673990923abd0e99dba147938871512", "files": [{"path": "gpt_engineer/chat_to_files.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0]}, "(None, 'to_files', 37)": {"add": [42]}}}, {"path": "gpt_engineer/main.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [2]}, "(None, 'main', 19)": {"add": [26, 40], "mod": [62, 63, 67]}}}, {"path": "gpt_engineer/steps.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [12, 21, 247, 327], "mod": [11]}, "('Config', None, 267)": {"add": [277]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["gpt_engineer/chat_to_files.py", "gpt_engineer/main.py", "gpt_engineer/steps.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "0596b07a39c2c99c46509c17660f5c8aef4b2114", "iss_has_pr": 1, "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/388", "iss_label": "good first issue", "title": "Remove \"run_id\" and \"delete_existing\" options: instead move old memory/workspace folder to \"archive\" by default", "body": "The first step in the main file would be to check for memory folder and workspace, if they exist create a new folder in \"archive\" e.g. 
with the name \"currentdate_currenttime\", and move everything there.\r\n\r\nThis would make main.py much nicer, and make it clearly defined that all files, apart from `archive` folder, in the project directory are from the most recent run.\r\n\r\n(It is also a prerequisite to later add handling of logging to separate files when there are \"multiple of the same steps\")", "pr_html_url": "https://github.com/AntonOsika/gpt-engineer/pull/409", "file_loc": {"base_commit": "0596b07a39c2c99c46509c17660f5c8aef4b2114", "files": [{"path": "gpt_engineer/db.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [2]}, "('DBs', None, 44)": {"add": [49]}}}, {"path": "gpt_engineer/main.py", "status": "modified", "Loc": {"(None, 'main', 19)": {"add": [53, 59], "mod": [21, 38, 39, 40, 42, 43, 44, 45]}, "(None, None, None)": {"mod": [3]}}}, {"path": "gpt_engineer/steps.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0, 1, 2, 260, 282, 291, 299, 308, 315], "mod": [289, 290]}}}, {"path": "tests/test_collect.py", "status": "modified", "Loc": {"(None, 'test_collect_learnings', 15)": {"mod": [22]}}}, {"path": "tests/test_db.py", "status": "modified", "Loc": {"(None, 'test_DBs_initialization', 29)": {"add": [43], "mod": [30]}, "(None, 'test_DBs_instantiation_with_wrong_number_of_arguments', 102)": {"mod": [109]}, "(None, 'test_DBs_dataclass_attributes', 112)": {"mod": [113]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["gpt_engineer/db.py", "gpt_engineer/main.py", "gpt_engineer/steps.py"], "doc": [], "test": ["tests/test_db.py", "tests/test_collect.py"], "config": [], "asset": []}}, {"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "dc7a2bd0f546ea29929faa57b8e618c413c86bb2", "iss_has_pr": 1, "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/582", "iss_label": "triage", "title": "RuntimeError: ('Message exceeds %skb limit. (%s)', AFTER it asks me to run the code", "body": "I am running a quite complex prompt to create a python app with a PSQL DB backend. I already have the whole DB schema ready and pasted it into the prompt. \r\n\r\n## Expected Behavior\r\n\r\nthe app is created according to my prompt.\r\n\r\n## Current Behavior\r\nOnly a part of the files are created, then it asks me to run the code which fails for non-related reasons and then shows the error message\"\r\nRuntimeError: ('Message exceeds %skb limit. (%s)', '32', '{\\'integrations\\': {\\'All\\': True}, \\'anonymousId\\': None, \\'properties\\': {\\'model\\': \\'gpt-3.5-turbo\\', \\'temperature\\': 0.1, \\'steps\\': \\'[\"clarify\", \"gen_clarified_code\", \"gen_entrypoint\", \"execute_entrypoint\", \"human_review\"]\\', \\'steps_file_hash\\': \\'\\', \\'prompt\\' \r\nREMAINING OUTPUT of gpt engineer. 
\r\n\r\n\r\nError seems to be in the analytics page\r\n\r\n File \"/home/stefan/.local/bin/gpt-engineer\", line 8, in <module>\r\n sys.exit(app())\r\n ^^^^^\r\n\r\n File \"/home//code/gpt-engineer/gpt_engineer/main.py\", line 61, in main\r\n collect_learnings(model, temperature, steps, dbs)\r\n\r\n File \"/home/code/gpt-engineer/gpt_engineer/collect.py\", line 28, in collect_learnings\r\n send_learning(learnings)\r\n\r\n File \"/home/code/gpt-engineer/gpt_engineer/collect.py\", line 17, in send_learning\r\n rudder_analytics.track(\r\n\r\n File \"/home/.local/lib/python3.11/site-packages/rudderstack/analytics/__init__.py\", line 53, in track\r\n _proxy('track', *args, **kwargs)\r\n\r\n File \"/home/.local/lib/python3.11/site-packages/rudderstack/analytics/__init__.py\", line 113, in _proxy\r\n fn(*args, **kwargs)\r\n\r\n File \"/home/.local/lib/python3.11/site-packages/rudderstack/analytics/client.py\", line 141, in track\r\n return self._enqueue(msg)\r\n ^^^^^^^^^^^^^^^^^^\r\n\r\n File \"/home/.local/lib/python3.11/site-packages/rudderstack/analytics/client.py\", line 279, in _enqueue\r\n raise RuntimeError('Message exceeds %skb limit. (%s)', str(int(MAX_MSG_SIZE / 1024)), str(msg))\r\n\r\n\r\nUPDATE: confirmed related to Rudderstack, does not happen when you opt out \r\n", "pr_html_url": "https://github.com/AntonOsika/gpt-engineer/pull/632", "file_loc": {"base_commit": "dc7a2bd0f546ea29929faa57b8e618c413c86bb2", "files": [{"path": "gpt_engineer/collect.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [10]}, "(None, 'send_learning', 11)": {"mod": [31, 32, 33, 34, 35]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["gpt_engineer/collect.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "66cd09c789bfcae57e144fcaea86050b97230f18", "iss_has_pr": 1, "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/150", "iss_label": "bug", "title": "AttributeError: 'tuple' object has no attribute 'expandtabs'", "body": "I'm getting the following error when running `python -m gpt_engineer.main`. 
I'm using python 3.11/\r\n\r\n```\r\n File \"/opt/miniconda3/envs/gpt-eng/lib/python3.11/inspect.py\", line 873, in cleandoc\r\n lines = doc.expandtabs().split('\\n')\r\n ^^^^^^^^^^^^^^\r\n\r\nAttributeError: 'tuple' object has no attribute 'expandtabs'\r\n```", "pr_html_url": "https://github.com/AntonOsika/gpt-engineer/pull/152", "file_loc": {"base_commit": "66cd09c789bfcae57e144fcaea86050b97230f18", "files": [{"path": "gpt_engineer/main.py", "status": "modified", "Loc": {"(None, 'chat', 16)": {"mod": [21]}}}, {"path": "identity/generate", "status": "modified", "Loc": {}}, {"path": "scripts/benchmark.py", "status": "modified", "Loc": {"(None, 'main', 13)": {"mod": [33, 53, 61]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["gpt_engineer/main.py", "scripts/benchmark.py"], "doc": [], "test": [], "config": [], "asset": ["identity/generate"]}}, {"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "6ccd05ab65dcd83d6057c6c068a3f5290ab09176", "iss_has_pr": 1, "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/49", "iss_label": "", "title": "GPT4ALL support or open source models", "body": "OpenAI's model 3.5 breaks frequently and is low quality in general. \r\n\r\nFalcon, Vicuna, Hermes and more should be supported as they're open source, free, and moving away from paid closed source is good practice and opens applications to huge user base who wants free access to these tools.", "pr_html_url": "https://github.com/AntonOsika/gpt-engineer/pull/63", "file_loc": {"base_commit": "6ccd05ab65dcd83d6057c6c068a3f5290ab09176", "files": [{"path": ".gitignore", "status": "modified", "Loc": {}}, {"path": "gpt_engineer/ai.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [4], "mod": [7]}, "('AI', 'next', 42)": {"add": [63], "mod": [46, 48, 50, 51, 60, 61, 62]}, "('AI', None, 10)": {"mod": [10, 11, 12, 25, 26, 27, 28, 29, 33, 34, 36, 37, 39, 40, 42, 43, 44]}, "('AI', '__init__', 11)": {"mod": [14, 15, 16, 17, 18, 19, 20, 21, 22, 23]}, "('AI', 'start', 25)": {"mod": [31]}}}, {"path": "gpt_engineer/chat_to_files.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [2]}, "(None, 'to_files', 37)": {"mod": [37]}}}, {"path": "gpt_engineer/main.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1, 10, 11, 13]}, "(None, 'main', 19)": {"mod": [24, 25, 47, 48, 49, 50, 62]}}}, {"path": "gpt_engineer/steps.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [5, 9], "mod": [1, 227]}, "(None, 'setup_sys_prompt', 12)": {"mod": [12]}, "(None, 'simple_gen', 16)": {"mod": [16, 17, 18, 19, 20, 21, 22, 23]}, "(None, 'clarify', 26)": {"mod": [26, 28, 30, 31, 33, 35, 39, 42, 45]}, "(None, 'gen_spec', 57)": {"mod": [57, 62, 63, 64, 65, 67, 69]}, "(None, 'respec', 74)": {"mod": [74, 75, 76, 81, 83, 84, 85, 86, 87, 91]}, "(None, 'gen_unit_tests', 95)": {"mod": [95, 99, 100, 101, 102, 103, 105, 107]}, "(None, 'gen_clarified_code', 113)": {"mod": [113, 116, 118, 119, 120, 121, 123]}, "(None, 'gen_code', 127)": {"mod": [127, 130, 131, 132, 133, 134, 135, 136, 137]}, "(None, 'execute_entrypoint', 141)": {"mod": [141, 152, 162]}, "(None, 'gen_entrypoint', 165)": {"mod": [165, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 184]}, "(None, 'use_feedback', 189)": {"mod": [189, 190, 191, 192, 193, 194, 195, 196, 197]}, "(None, 'fix_code', 201)": {"mod": [201, 202, 
203, 204, 205, 206, 207, 208, 209, 210]}}}, {"path": "pyproject.toml", "status": "modified", "Loc": {"(None, None, None)": {"add": [11]}}}, {"path": "requirements.txt", "status": "modified", "Loc": {"(None, None, None)": {"add": [1]}}}, {"path": "scripts/rerun_edited_message_logs.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [3], "mod": [6, 7]}, "(None, 'main', 13)": {"mod": [15, 19, 30, 32]}}}, {"path": "tests/test_ai.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [3]}, "(None, 'test_ai', 7)": {"mod": [8]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["gpt_engineer/chat_to_files.py", "gpt_engineer/ai.py", "gpt_engineer/main.py", "scripts/rerun_edited_message_logs.py", "gpt_engineer/steps.py"], "doc": [], "test": ["tests/test_ai.py"], "config": [".gitignore", "pyproject.toml", "requirements.txt"], "asset": []}}, {"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "dc7a2bd0f546ea29929faa57b8e618c413c86bb2", "iss_has_pr": 1, "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/530", "iss_label": "", "title": "Using gpt-engineer with Azure OpenAI", "body": "\r\nHi, I am trying to test gpt-engineer by using Azure OpenAI but I am getting authentication error. I have added all the additional details that are required for the Azure OpenAI like api_base url, model, etc. in the python file ai.py in the gpt_engineer folder. Am I missing out something can you please help me out with this issue. \r\n\r\nHave set the openAI API key as the windows environmnet variable. Rest all the steps have followed according to the readme file. 
\r\n\r\n<img width=\"946\" alt=\"image\" src=\"https://github.com/AntonOsika/gpt-engineer/assets/53396422/d3657e3b-1e49-4f6c-adac-18125ee1f29f\">\r\n\r\n\r\n", "pr_html_url": "https://github.com/AntonOsika/gpt-engineer/pull/640", "file_loc": {"base_commit": "dc7a2bd0f546ea29929faa57b8e618c413c86bb2", "files": [{"path": "README.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [57]}}}, {"path": "gpt_engineer/ai.py", "status": "modified", "Loc": {"('AI', '__init__', 40)": {"add": [54], "mod": [52, 53]}, "(None, 'create_chat_model', 338)": {"add": [353], "mod": [338]}, "(None, None, None)": {"mod": [13]}, "('AI', None, 39)": {"mod": [40]}}}, {"path": "gpt_engineer/learning.py", "status": "modified", "Loc": {"(None, 'human_review_input', 54)": {"add": [63], "mod": [95]}, "(None, 'check_consent', 106)": {"add": [122, 124], "mod": [106, 113]}}}, {"path": "gpt_engineer/main.py", "status": "modified", "Loc": {"(None, 'main', 27)": {"add": [39, 55]}}}, {"path": "gpt_engineer/steps.py", "status": "modified", "Loc": {"(None, 'execute_entrypoint', 218)": {"mod": [221, 225]}, "(None, 'human_review', 374)": {"mod": [377]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "3", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["gpt_engineer/ai.py", "gpt_engineer/main.py", "gpt_engineer/steps.py", "gpt_engineer/learning.py"], "doc": ["README.md"], "test": [], "config": [], "asset": []}}, {"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "3e589bf1356024fb471a9d17738e4626f21a953b", "iss_has_pr": 1, "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/1143", "iss_label": "enhancement\ngood first issue\ntriage", "title": "Add GPTE CLI argument to output system information", "body": "When running GPTE, it will be quite helpful to be able to quickly generate useful system information for use in debugging issues.\r\n\r\nFor example, this should be invoked as `gpte --sysinfo`.\r\n\r\nThis invocation should output system information in a standardized and useful way, so that users can readily copy and paste the output into GitHub, Discord, etc ...\r\n\r\nHere are some requirements for this CLI argument:\r\n* The CLI argument should use system-native commands or those available from the packages installed by GPTE (i.e. it should not require or install additional tools). 
\r\n* The CLI argument should not expose personally identifiable or other sensitive information.\r\n* When running `gpte --sysinfo` the application immediately outputs the system information without executing any of the other application flow and returns the user back to the command line.\r\n* When running gpte --sysinfo the application does not require an OpenAI (or any other LLM) API key but, rather, immediately generates the system information and outputs it.\r\n\r\nHere are some examples of system information that should be returned by running `gpte --sysinfo`:\r\n\r\nOutputs of Linux operating system commands like:\r\n* `uname -a`\r\n* `lsb_release -a`\r\n* `cat /proc/version`\r\n\r\nand, in Windows:\r\n\r\n* `systeminfo` \r\n\r\nWe should also include Python-specific information, like the output of:\r\n* `pip freeze`\r\n* `python --version`\r\n* `which python`\r\n\r\nThese are indicative but not comprehensive.\r\n\r\nThis is a great first issue for a new contributor!", "pr_html_url": "https://github.com/AntonOsika/gpt-engineer/pull/1169", "file_loc": {"base_commit": "3e589bf1356024fb471a9d17738e4626f21a953b", "files": [{"path": "gpt_engineer/applications/cli/main.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [28, 30, 239]}, "(None, 'main', 250)": {"add": [331, 371, 382]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["gpt_engineer/applications/cli/main.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "e7e329211655d08e48d04ce828f929c9108050ad", "iss_has_pr": 1, "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/14", "iss_label": "", "title": "exporting the api key to the environment doesn't work for me", "body": "I can't get the export command to work, so an alternative solution like using an extern file or hardcoding the api in the code would be a nice solution. I personally created an external json config file and parsed the api key from that to the python script. 
\r\n\r\nSo a solution could be:\r\n\r\n1) Make a json file named \"config.json\"\r\n2) Inside of ai.py add:\r\n```\r\nimport json\r\n\r\ndef get_api_key(file_name: str) -> str:\r\n with open(file_name, 'r') as f:\r\n config = json.load(f)\r\n return config['openai_api_key']\r\n```\r\n \r\n3) Inside of config.json add: \r\n```\r\n{\r\n \"openai_api_key\": \"your_api_key\"\r\n}\r\n```\r\n\r\n4) In the __init__ part of the AI class add:\r\n```\r\nclass AI:\r\n def __init__(self, **kwargs):\r\n openai.api_key = get_api_key(\"config.json\")\r\n self.kwargs = kwargs\r\n```", "pr_html_url": "https://github.com/AntonOsika/gpt-engineer/pull/22", "file_loc": {"base_commit": "e7e329211655d08e48d04ce828f929c9108050ad", "files": [{"path": ".gitignore", "status": "modified", "Loc": {"(None, None, None)": {"add": [1]}}}, {"path": "README.md", "status": "modified", "Loc": {"(None, None, None)": {"mod": [22]}}}, {"path": "ai.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [2, 3]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["ai.py"], "doc": ["README.md"], "test": [], "config": [".gitignore"], "asset": []}}, {"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "164730a5b933ec0ebc9003c72f60e58176ef0dc6", "iss_has_pr": 1, "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/819", "iss_label": "enhancement", "title": "Automatic benchmarking of gpt-engineer with APPS", "body": "## Feature description\r\ngpt-engineer has an automatic evals suite in \"evals/eval_new_code.py\". However, only 2 test cases are given in evals/new_code_eval.yaml . An alternative to filling in more testcases manually, we should parse in prompts and tests from the (very large) APPS dataset (https://paperswithcode.com/dataset/apps).\r\n\r\nSince APPS is way too large to run in its entirety, there should be functionality to run n randomly selected tests and run n tests according to some predetermined test ordering (so that consecutive benchmark runs are comparable). \r\n\r\nThe APPS database should not be added to the gpt-engineer git repo! 
Probably the best way to handle this is to pull it from huggingface (https://huggingface.co/datasets/codeparrot/apps) in the code itself (potentially caching it and gitignoring it so it doesn't need to be pulled on every run).\r\n\r\n## Motivation/Application\r\nAutomatic benchmarking is the ideal way to determine whether an imposed change to the code base is advantageous.", "pr_html_url": "https://github.com/AntonOsika/gpt-engineer/pull/1051", "file_loc": {"base_commit": "164730a5b933ec0ebc9003c72f60e58176ef0dc6", "files": [{"path": ".gitignore", "status": "modified", "Loc": {"(None, None, None)": {"add": [90]}}}, {"path": "gpt_engineer/benchmark/benchmarks/load.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [12, 19]}}}, {"path": "gpt_engineer/benchmark/run.py", "status": "modified", "Loc": {"(None, 'run', 24)": {"add": [50], "mod": [52]}, "(None, 'print_results', 87)": {"add": [107], "mod": [109, 121, 123, 124, 125, 126, 127, 128, 129, 130]}}}, {"path": "gpt_engineer/benchmark/types.py", "status": "modified", "Loc": {"('TaskResult', None, 74)": {"add": [77]}}}, {"path": "poetry.lock", "status": "modified", "Loc": {"(None, None, None)": {"add": [722, 789, 997, 2002, 2375, 2626, 2905, 4185, 4244], "mod": [1013, 1151, 1156, 1157, 1174, 1179]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["gpt_engineer/benchmark/types.py", "gpt_engineer/benchmark/benchmarks/load.py", "gpt_engineer/benchmark/run.py"], "doc": [], "test": [], "config": [".gitignore", "poetry.lock"], "asset": []}}, {"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "1ad0892697e8468939a914f12bbf7378a1e045a2", "iss_has_pr": 1, "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/914", "iss_label": "enhancement", "title": "Automatic benchmarking of gpt-engineer with MBPP", "body": "## Feature description\r\nWe have a way to easily add benchmarks:\r\n\r\nhttps://www.loom.com/share/206805143fbb4302b5455a5329eaab17?sid=f689608f-8e49-44f7-b55f-4c81e9dc93e6\r\n\r\nThis issue is about looking into if [mbpp](https://huggingface.co/datasets/mbpp) is a good benchmark to add and then add a simple version of it.", "pr_html_url": "https://github.com/AntonOsika/gpt-engineer/pull/1103", "file_loc": {"base_commit": "1ad0892697e8468939a914f12bbf7378a1e045a2", "files": [{"path": ".gitignore", "status": "modified", "Loc": {"(None, None, None)": {"add": [93]}}}, {"path": "gpt_engineer/benchmark/__main__.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [30]}, "(None, 'main', 54)": {"add": [89]}}}, {"path": "gpt_engineer/benchmark/benchmarks/apps/load.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [26]}}}, {"path": "gpt_engineer/benchmark/benchmarks/load.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [14, 20]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["gpt_engineer/benchmark/benchmarks/load.py", "gpt_engineer/benchmark/benchmarks/apps/load.py", "gpt_engineer/benchmark/__main__.py"], "doc": [], "test": [], "config": [".gitignore"], "asset": []}}, {"organization": "lllyasviel", "repo_name": "Fooocus", "base_commit": "d16a54edd69f82158ae7ffe5669618db33a01ac7", "iss_has_pr": 1, "iss_html_url": 
"https://github.com/lllyasviel/Fooocus/issues/2863", "iss_label": "bug", "title": "[Bug]: app-1 | sh: 1: /content/entrypoint.sh: not found (docker compose)", "body": "### Checklist\n\n- [ ] The issue has not been resolved by following the [troubleshooting guide](https://github.com/lllyasviel/Fooocus/blob/main/troubleshoot.md)\n- [ ] The issue exists on a clean installation of Fooocus\n- [ ] The issue exists in the current version of Fooocus\n- [ ] The issue has not been reported before recently\n- [ ] The issue has been reported before but has not been fixed yet\n\n### What happened?\n\n```\r\n\r\ndocker compose build --no-cache\r\n\r\n\r\n(venv) deb-workshop :: ~/Fooocus-2024 \u2039main\u203a \u00bb docker-compose up\r\nWARN[0000] /home/username/Fooocus-2024/docker-compose.yml: `version` is obsolete\r\n[+] Running 2/3\r\n \u2714 Network fooocus-2024_default Created 0.1s\r\n \u2714 Volume \"fooocus-2024_fooocus-data\" Created 0.0s\r\n \u280b Container fooocus-2024-app-1 Created 0.1s\r\nAttaching to app-1\r\napp-1 | sh: 1: /content/entrypoint.sh: not found\r\napp-1 exited with code 127\r\n```\n\n### Steps to reproduce the problem\n\nlatest main branch\n\n### What should have happened?\n\nit can run\n\n### What browsers do you use to access Fooocus?\n\n_No response_\n\n### Where are you running Fooocus?\n\nNone\n\n### What operating system are you using?\n\n_No response_\n\n### Console logs\n\n```Shell\ndocker compose build --no-cache\r\n\r\n\r\n(venv) deb-workshop :: ~/Fooocus-2024 \u2039main\u203a \u00bb docker-compose up\r\nWARN[0000] /home/username/Fooocus-2024/docker-compose.yml: `version` is obsolete\r\n[+] Running 2/3\r\n \u2714 Network fooocus-2024_default Created 0.1s\r\n \u2714 Volume \"fooocus-2024_fooocus-data\" Created 0.0s\r\n \u280b Container fooocus-2024-app-1 Created 0.1s\r\nAttaching to app-1\r\napp-1 | sh: 1: /content/entrypoint.sh: not found\r\napp-1 exited with code 127\r\n```\n```\n\n\n### Additional information\n\n_No response_", "pr_html_url": "https://github.com/lllyasviel/Fooocus/pull/2865", "file_loc": {"base_commit": "d16a54edd69f82158ae7ffe5669618db33a01ac7", "files": [{"path": "entrypoint.sh", "status": "modified", "Loc": {"(None, None, None)": {"add": [33], "mod": [1]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["entrypoint.sh"]}}, {"organization": "lllyasviel", "repo_name": "Fooocus", "base_commit": "179bcb2c4e6e6b9574c5a38e28e3c9813ed95bd7", "iss_has_pr": 1, "iss_html_url": "https://github.com/lllyasviel/Fooocus/issues/1247", "iss_label": "", "title": "Canvas zoom for the inpainting canvas", "body": "Can we get a canvas zoom feature similar to what https://github.com/richrobber2/canvas-zoom provides for A1111?\r\nFooocus has by far the best inpainting/outpainting backend. 
It would be nice if the frontend was spruced up a bit too.", "pr_html_url": "https://github.com/lllyasviel/Fooocus/pull/1428", "file_loc": {"base_commit": "179bcb2c4e6e6b9574c5a38e28e3c9813ed95bd7", "files": [{"path": "css/style.css", "status": "modified", "Loc": {"(None, None, None)": {"add": [96]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["css/style.css"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "odoo", "repo_name": "odoo", "base_commit": "4213eebe2ebe6b0c81580176b263aeee9fa6a3fd", "iss_has_pr": 1, "iss_html_url": "https://github.com/odoo/odoo/issues/304", "iss_label": "", "title": "Bug 1089229: Wrong treatment of UoS among objects", "body": "**Impacted versions:**\n6.1 and above\n\n**Steps to reproduce:**\nSee https://bugs.launchpad.net/openobject-addons/+bug/1089229\n\n**Current behavior:**\n- If you change units of sale (uos) quantity in sales order, uom quantity is not recalculated, thus breaking the relation between uom and uos (uos_coeff).\n- If you change the uom or uos within their category in sales order or invoice, nothing happens --> Thus breaking again the relation between uom and uos (there is no recalculation, and it's not the same selling grams and kilograms).\n- Sale order lines shows only uom quantities and uom prices.\n\n**Expected behavior:**\n- If you change units of sale (uos) quantity in sales order, uom quantity should be recalculated accordingly (as happens viceversa).\n- If you change uom or uos within their category in sales order or invoice, the other quantity is recalculated. Also, price should be recalculated (because of the change of unit, price(kg)=1000*price(g); and also because if quantity changes, another pricelist may apply).\n- If using a secondary uos, sale order lines should show both uom and uos, as well as price_unit(uom) and price_unit(uos). 
--> This is a much desired feature for salespeople, because many times they know the Sale unit and its price (not the uom and price(uom), which may be more related to warehouse in such cases).\n -Both UoM and UoS related info (quantities, prices) should be both available in product, sale and invoice objects.\n\n**Further info**\nThis bug (and its code implications) https://bugs.launchpad.net/openobject-addons/+bug/1089229 is still there in master as of today (checked code a few minutes ago, and in runbot there is something weird with the reports so I cannot obtain the sale order report, but the invoice report shows sale unit price in tax column).\n\nMaybe this is just the right time (just before v8) to harmonize uom/uos and price_unit_uom/price_unit_uos among different objects (product.product, sale.order, account.invoice) and be able to keep all info lossless.\nAlso to fix the uom category conversions (look at the 'FIXME' in code, for example: https://github.com/odoo/odoo/blob/master/addons/sale/sale.py#L1031 )\n", "pr_html_url": "https://github.com/odoo/odoo/pull/7311", "file_loc": {"base_commit": "4213eebe2ebe6b0c81580176b263aeee9fa6a3fd", "files": [{"path": "addons/sale_stock/sale_stock_view.xml", "status": "modified", "Loc": {"(None, None, None)": {"mod": [49, 50, 51, 52, 53]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["addons/sale_stock/sale_stock_view.xml"]}}, {"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "2a003e8d494bdfb3132dd40dc8d7face7e52be49", "iss_has_pr": 1, "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/1697", "iss_label": "ToDo", "title": "[Feature]: Integrate the \"gpt-4-turbo-2024-04-09\" model", "body": "### Class | Type\n\nMain program\n\n### Feature Request\n\nCould the two models gpt-4-turbo-2024-04-09 and gpt-4-0125-preview be integrated?", "pr_html_url": "https://github.com/binary-husky/gpt_academic/pull/1698", "file_loc": {"base_commit": "2a003e8d494bdfb3132dd40dc8d7face7e52be49", "files": [{"path": "config.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [35]}}}, {"path": "request_llms/bridge_all.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [202]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["request_llms/bridge_all.py", "config.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "8d7ca46b2c1fcf0fe8983b0d6effc5fd9d009bff", "iss_has_pr": 1, "iss_html_url": "https://github.com/deepfakes/faceswap/issues/32", "iss_label": "", "title": "ImportError: No module named pathlib", "body": "I have already installed pathlib in python3.6:Requirement already satisfied: pathlib in /usr/local/lib/python3.6/dist-packages\r\n\r\nCommand executed: python3 faceswap.py extract -i ~/faceswap/photo/trump -o ~/faceswap/data/trump\r\n\r\n\r\nTraceback (most recent call last):\r\n File \"faceswap.py\", line 3, in <module>\r\n from scripts.extract import ExtractTrainingData\r\n File \"/home/ubuntu/data/faceswap/scripts/extract.py\", line 2, in <module>\r\n from 
lib.cli import DirectoryProcessor\r\n File \"/home/ubuntu/data/faceswap/lib/cli.py\", line 6, in <module>\r\n from lib.utils import get_image_paths, get_folder, load_images, stack_images\r\n File \"/home/ubuntu/data/faceswap/lib/utils.py\", line 4, in <module>\r\n from pathlib import Path\r\nImportError: No module named pathlib\r\n\r\n\r\n\r\nCan anyone help me out with this issue?\r\n", "pr_html_url": "https://github.com/deepfakes/faceswap/pull/33", "file_loc": {"base_commit": "8d7ca46b2c1fcf0fe8983b0d6effc5fd9d009bff", "files": [{"path": "USAGE.md", "status": "modified", "Loc": {"(None, None, None)": {"mod": [39, 41, 55]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": ["USAGE.md"], "test": [], "config": [], "asset": []}}, {"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "47b43191031d0901371d0be362fcccdf547cb4e5", "iss_has_pr": 1, "iss_html_url": "https://github.com/deepfakes/faceswap/issues/306", "iss_label": "enhancement", "title": "Is it possible to implement occlusion masks to original model?", "body": "I think GAN model's most interesting feature is occlusion masks. But original model is more stable than GAN and the output of GAN code here is not good. So my question is can we implement this occlusion mask feature to original model? Or is it exclusive to GAN?", "pr_html_url": "https://github.com/deepfakes/faceswap/pull/576", "file_loc": {"base_commit": "47b43191031d0901371d0be362fcccdf547cb4e5", "files": [{"path": ".github/ISSUE_TEMPLATE.md", "status": "modified", "Loc": {"(None, None, None)": {"mod": [3]}}}, {"path": ".github/ISSUE_TEMPLATE/bug_report.md", "status": "removed", "Loc": {}}, {"path": ".github/ISSUE_TEMPLATE/feature_request.md", "status": "removed", "Loc": {}}, {"path": ".install/windows/MultiDetailPrint.nsi", "status": "removed", "Loc": {}}, {"path": ".install/windows/git_install.inf", "status": "removed", "Loc": {}}, {"path": "lib/aligner.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [14]}, "('Extract', 'extract', 19)": {"mod": [22, 24]}, "('Extract', 'transform', 37)": {"mod": [41, 43]}, "(None, 'get_matrix_scaling', 126)": {"mod": [126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136]}, "(None, 'get_align_mat', 139)": {"mod": [142]}}}, {"path": "lib/alignments.py", "status": "modified", "Loc": {"('Alignments', None, 17)": {"add": [272], "mod": [295]}, "('Alignments', 'rotate_existing_landmarks', 295)": {"add": [308], "mod": [299, 302]}, "('Alignments', 'hashes_to_frame', 63)": {"mod": [65, 66, 67, 68, 69, 70, 71]}}}, {"path": "lib/config.py", "status": "modified", "Loc": {"('FaceswapConfig', 'get', 78)": {"mod": [91, 92]}, "('FaceswapConfig', 'get_config_file', 96)": {"mod": [99, 100]}, "('FaceswapConfig', 'check_config_choices', 282)": {"mod": [290, 291, 292]}}}, {"path": "lib/gui/__init__.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [3, 4, 6, 8]}}}, {"path": "lib/gui/display_page.py", "status": "modified", "Loc": {"('DisplayPage', '__init__', 17)": {"add": [22], "mod": [37]}, "(None, None, None)": {"mod": [9]}, "('DisplayOptionalPage', 'add_option_save', 201)": {"mod": [205]}}}, {"path": "lib/gui/options.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [9], "mod": [2, 11]}, "('CliOptions', 'gen_cli_arguments', 228)": {"add": [249], "mod": [235]}}}, {"path": "lib/keypress.py", "status": "modified", "Loc": {}}, 
{"path": "lib/logger.py", "status": "modified", "Loc": {"(None, 'log_setup', 77)": {"mod": [77, 85]}, "(None, 'file_handler', 95)": {"mod": [95, 97, 98, 99, 100, 101, 102]}}}, {"path": "lib/model/initializers.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [6, 7, 9, 10]}, "(None, 'icnr_keras', 13)": {"mod": [19]}, "('ICNR', None, 33)": {"mod": [33, 34, 35, 37, 38, 40, 41, 42, 43, 44, 45, 46, 47, 49, 50, 51, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 70, 71, 72, 73, 74, 75, 78, 79, 80, 81]}}}, {"path": "lib/model/normalization.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [161], "mod": [6, 7, 286, 287, 288, 289]}}}, {"path": "lib/queue_manager.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [9]}, "('QueueManager', '__init__', 22)": {"mod": [35, 36, 37]}}}, {"path": "lib/sysinfo.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [4]}, "('SysInfo', None, 15)": {"mod": [35, 36, 37, 38, 232, 251]}, "('SysInfo', 'is_virtual_env', 61)": {"mod": [63, 64, 65, 66, 67, 68, 69]}, "('SysInfo', 'cudnn_version', 166)": {"mod": [169, 170, 171, 172, 175, 176, 177, 178, 194, 195, 196, 197]}, "('SysInfo', 'cuda_version_linux', 232)": {"mod": [244, 245, 246, 247]}, "('SysInfo', 'cuda_version_windows', 251)": {"mod": [257, 258, 259, 260]}, "('SysInfo', 'full_info', 264)": {"mod": [278]}}}, {"path": "lib/umeyama.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [15, 16, 17, 18, 19, 20, 21, 22, 23, 25, 26, 27, 28, 29, 30, 31, 32, 33, 35]}, "(None, 'umeyama', 35)": {"mod": [55, 56]}}}, {"path": "lib/utils.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [114], "mod": [18]}, "(None, 'safe_shutdown', 206)": {"add": [217]}, "(None, 'backup_file', 82)": {"mod": [82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 95]}, "(None, 'set_system_verbosity', 95)": {"mod": [106, 107, 111]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["lib/gui/options.py", "lib/gui/display_page.py", "lib/alignments.py", "lib/sysinfo.py", "lib/config.py", "lib/logger.py", "lib/keypress.py", "lib/utils.py", "lib/model/normalization.py", "lib/umeyama.py", "lib/model/initializers.py", "lib/aligner.py", "lib/queue_manager.py", "lib/gui/__init__.py"], "doc": [".github/ISSUE_TEMPLATE/feature_request.md", ".github/ISSUE_TEMPLATE/bug_report.md", ".github/ISSUE_TEMPLATE.md"], "test": [], "config": [".install/windows/MultiDetailPrint.nsi", ".install/windows/git_install.inf"], "asset": []}}, {"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "3f04e8cd06e1816e6aa87f3826ebb919cfa983b2", "iss_has_pr": 1, "iss_html_url": "https://github.com/deepfakes/faceswap/issues/279", "iss_label": "", "title": "Sharpening the face before applying it", "body": "Sharpen by multiplying every pixel by 2, and then subtracting the average value of the neighborhood from it.\r\n\r\nI modified Convert_Masked.py and I find the face less blurry on closeups on hi-res pics, though it's a bit too sharp on normal/low res compared to the rest of the image.\r\n\r\nYMMV.\r\n\r\n```\r\ndef apply_new_face(self, image, new_face, image_mask, mat, image_size, size):\r\n base_image = numpy.copy( image )\r\n new_image = numpy.copy( image )\r\n cv2.warpAffine( new_face, mat, image_size, new_image, cv2.WARP_INVERSE_MAP | cv2.INTER_CUBIC, cv2.BORDER_TRANSPARENT )\r\n kernel = numpy.zeros( (9,9), 
numpy.float32)\r\n kernel[4,4] = 2.0\r\n boxFilter = numpy.ones( (9,9), numpy.float32) / 81.0\r\n kernel = kernel - boxFilter\r\n new_image = cv2.filter2D(new_image, -1, kernel)\r\n```", "pr_html_url": "https://github.com/deepfakes/faceswap/pull/285", "file_loc": {"base_commit": "3f04e8cd06e1816e6aa87f3826ebb919cfa983b2", "files": [{"path": "plugins/Convert_Masked.py", "status": "modified", "Loc": {"('Convert', '__init__', 9)": {"add": [20]}, "('Convert', None, 8)": {"mod": [9]}, "('Convert', 'apply_new_face', 36)": {"mod": [42]}}}, {"path": "scripts/convert.py", "status": "modified", "Loc": {"('ConvertImage', 'add_optional_arguments', 24)": {"add": [131]}, "('ConvertImage', 'process', 152)": {"add": [179]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["plugins/Convert_Masked.py", "scripts/convert.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "a561f5b78bf09e785686b500c4825641b0823791", "iss_has_pr": 1, "iss_html_url": "https://github.com/deepfakes/faceswap/issues/628", "iss_label": "", "title": "Increase training_data generation speed", "body": "For some settings the training_data generation takes a long time; \"warp to landmarks\" especially is pretty slow.\r\nIMO using multiprocessing would speed things up a lot.\r\nBut there is also some stuff that could be cached, like `get_closest_match` (used in warp to landmarks).\r\n\r\nI did some quick and dirty profiling.\r\nSee https://github.com/kilroythethird/faceswap/tree/perf_test\r\n\r\n<details>\r\n <summary>Profiling with current staging</summary>\r\n <p>\r\n\r\n```\r\n# python faceswap.py train -A faces/a -B faces/b -m model -t original -s 400 -bs 32 -it 201 -g 1 -ps 250 -wl -L INFO\r\nThread: load_batches_0\r\n ALL PER CALL COUNT \r\n 1201.998871 0.187578 6408 process_face(side=a)\r\n 691.611971 0.107929 6408 random_warp_landmarks\r\n 367.470097 0.057337 6409 get_closest_match\r\n 55.490737 0.008658 6409 mask_function\r\n 32.234393 0.005030 6409 random_transform\r\n 23.848031 0.003721 6409 cv2.imread\r\n 7.767768 0.001212 6409 get_landmarks\r\n 7.564588 0.001180 6409 sha1(image)\r\n 0.588165 0.000092 6409 do_random_flip\r\nThread: load_batches_0\r\n ALL PER CALL COUNT \r\n 1161.078251 0.175178 6628 process_face(side=b)\r\n 730.652074 0.110237 6628 random_warp_landmarks\r\n 282.749230 0.042660 6628 get_closest_match\r\n 58.361239 0.008804 6629 mask_function\r\n 33.519484 0.005056 6629 random_transform\r\n 22.260553 0.003358 6629 cv2.imread\r\n 7.995775 0.001206 6629 get_landmarks\r\n 7.795409 0.001176 6629 sha1(image)\r\n 0.688545 0.000104 6629 do_random_flip\r\nThread: training_0\r\n ALL PER CALL COUNT \r\n 1018.465688 5.092328 200 Batcher(a)->train_one_batch\r\n 843.560930 4.217805 200 Batcher.get_next\r\n 174.890468 0.874452 200 train_on_batch\r\n 191.730749 0.958654 200 Batcher(b)->train_one_batch\r\n 106.012409 0.530062 200 train_on_batch\r\n 85.689574 0.428448 200 Batcher.get_next\r\n```\r\n</p></details>\r\n \r\n## Suggestions\r\n### Multiprocessing\r\nI wrote a \"FixedProducerDispatcher\" class which runs a work function in x subprocesses and uses fixed shared memory to save the data.
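The "FixedProducerDispatcher" described above (workers that fill fixed shared memory with whole batches) can be sketched with the standard library's `multiprocessing.shared_memory`, available from Python 3.8. A minimal sketch only, assuming a known batch shape as the issue requires; the names and the shape are illustrative, not faceswap's actual API:

```python
# Illustrative sketch of a fixed-shared-memory batch producer (NOT the actual
# FixedProducerDispatcher). The batch shape must be known up front so the
# shared buffer can be allocated once. Requires Python 3.8+.
from multiprocessing import Process
from multiprocessing.shared_memory import SharedMemory

import numpy as np

BATCH_SHAPE = (32, 64, 64, 3)  # assumed: batch of 32 warped 64x64 RGB faces
DTYPE = np.float32

def produce_batch(shm_name: str) -> None:
    """Worker: attach to the preallocated buffer and fill it with one batch."""
    shm = SharedMemory(name=shm_name)
    batch = np.ndarray(BATCH_SHAPE, dtype=DTYPE, buffer=shm.buf)
    batch[:] = np.random.rand(*BATCH_SHAPE)  # stand-in for warp/augment work
    shm.close()

if __name__ == "__main__":
    shm = SharedMemory(create=True,
                       size=int(np.prod(BATCH_SHAPE)) * np.dtype(DTYPE).itemsize)
    worker = Process(target=produce_batch, args=(shm.name,))
    worker.start()
    worker.join()
    batch = np.ndarray(BATCH_SHAPE, dtype=DTYPE, buffer=shm.buf).copy()
    shm.close()
    shm.unlink()
    print(batch.shape, float(batch.mean()))
```

With two such segments filled alternately, the trainer could consume one batch while workers fill the next.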
Each run creates a whole batch.\r\nThe only downside to this i see is that we now need to know how big and in which shape the batch is before starting the subprocesses.\r\n\r\nSee https://github.com/kilroythethird/faceswap/tree/mp_training_data\r\nand https://github.com/kilroythethird/faceswap/tree/perf_test_mp (with profiling output)\r\n\r\nThis definitely helps performance wise\r\n<details>\r\n <summary>Profiling with multiprocessing</summary>\r\n <p>\r\n\r\n```\r\n# python faceswap.py train -A faces/a -B faces/b -m model -t original -s 400 -bs 32 -it 201 -g 1 -ps 250 -wl -L INFO\r\nThread: load_batches_0\r\n ALL PER CALL COUNT \r\n 184.241626 0.028362 6496 process_face(side=a)\r\n 132.813554 0.020445 6496 random_warp_landmarks\r\n 11.279126 0.001736 6496 get_closest_match\r\n 10.877508 0.001674 6496 random_transform\r\n 9.162846 0.001411 6496 mask_function\r\n 7.910127 0.001218 6496 cv2.imread\r\n 3.066403 0.000472 6496 get_landmarks\r\n 2.870745 0.000442 6496 sha1(image)\r\n 0.097161 0.000015 6496 do_random_flip\r\nThread: load_batches_0\r\n ALL PER CALL COUNT \r\n 181.656559 0.027964 6496 process_face(side=b)\r\n 134.033812 0.020633 6496 random_warp_landmarks\r\n 11.135122 0.001714 6496 random_transform\r\n 8.973757 0.001381 6496 mask_function\r\n 7.647873 0.001177 6496 get_closest_match\r\n 7.452525 0.001147 6496 cv2.imread\r\n 3.014454 0.000464 6496 get_landmarks\r\n 2.841576 0.000437 6496 sha1(image)\r\n 0.159162 0.000025 6496 do_random_flip\r\nThread: training_0\r\n ALL PER CALL COUNT \r\n 171.892934 0.859465 200 Batcher(a)->train_one_batch\r\n 162.831906 0.814160 200 train_on_batch\r\n 9.050015 0.045250 200 Batcher.get_next\r\n 111.126615 0.555633 200 Batcher(b)->train_one_batch\r\n 102.814357 0.514072 200 train_on_batch\r\n 8.296469 0.041482 200 Batcher.get_next\r\n```\r\n</p></details>\r\n \r\n### Caching\r\nAlso some function cached here:\r\nhttps://github.com/kilroythethird/faceswap/tree/perf_test_caching\r\nand https://github.com/kilroythethird/faceswap/tree/perf_test_all (with multiprocessing and caching)\r\nI am not 100% sure caching + multiprocessing works properly on windows systems (spawn vs fork), if someone could test that that would be awesome.\r\n\r\nFunction cached:\r\n- `sha1(img).hexdigest()` ie. the hash creation function.\r\nCached by filename and side.\r\nThis doesn't bring that much (in absolute terms), but it also doesn't really harm.\r\n- The major (non random) part of `get_closest_match` ie \"warp to landmark\".\r\nThis caches only the indices of the 10 closest images from the other set for each face (so maximum some kb).\r\nThis brings performance up for \"warp to landmark\" by a good chunk and should def be done in some way or another i think.\r\n- `mask_function` Currently i cache the mask only (256,256,1) but for every image.\r\nSo this sums up. 
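For the `get_closest_match` caching suggested above, the natural split is between the deterministic nearest-neighbour lookup (cacheable per face) and the random pick (left uncached). A rough sketch with stand-in landmark data; the names are illustrative, not faceswap's code:

```python
# Sketch: cache the deterministic part of get_closest_match (the indices of
# the 10 closest faces on the other side), keep the random choice per call.
from functools import lru_cache
import random

import numpy as np

LANDMARKS_A = np.random.rand(1000, 68, 2)  # stand-in landmark sets
LANDMARKS_B = np.random.rand(1000, 68, 2)

@lru_cache(maxsize=None)
def closest_indices(face_idx: int, k: int = 10) -> tuple:
    """Deterministic and cacheable: k nearest faces from the other set."""
    dists = np.linalg.norm(LANDMARKS_B - LANDMARKS_A[face_idx], axis=(1, 2))
    return tuple(np.argsort(dists)[:k])

def get_closest_match(face_idx: int):
    return random.choice(closest_indices(face_idx))  # random part stays uncached
```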
Assuming 1000 faces in each set that means 250MB for each side (`(256*256*4*1000)/1024./1024.`).\r\nI am not really sure if this is worth it, to be honest.\r\n\r\n<details>\r\n <summary>Profiling with caching and without multiprocessing</summary>\r\n <p>\r\n\r\n```\r\nThread: load_batches_0\r\n ALL PER CALL COUNT \r\n 889.992215 0.136607 6515 process_face(side=a)\r\n 764.990586 0.117420 6515 random_warp_landmarks\r\n 32.839521 0.005041 6515 random_transform\r\n 23.343189 0.003583 6515 get_closest_match\r\n 22.460514 0.003447 6516 cv2.imread\r\n 20.894716 0.003207 6516 mask_function\r\n 0.799270 0.000123 6516 get_landmarks\r\n 0.641308 0.000098 6516 sha1(image)\r\n 0.585590 0.000090 6515 do_random_flip\r\nThread: load_batches_0\r\n ALL PER CALL COUNT \r\n 878.511239 0.137182 6404 process_face(side=b)\r\n 744.183952 0.116206 6404 random_warp_landmarks\r\n 33.463518 0.005225 6405 get_closest_match\r\n 31.718939 0.004952 6405 random_transform\r\n 22.498738 0.003513 6405 cv2.imread\r\n 21.485141 0.003354 6405 mask_function\r\n 1.094532 0.000171 6405 get_landmarks\r\n 0.942542 0.000147 6405 sha1(image)\r\n 0.604323 0.000094 6405 do_random_flip\r\nThread: training_0\r\n ALL PER CALL COUNT \r\n 727.685975 3.638430 200 Batcher(b)->train_one_batch\r\n 625.898893 3.129494 200 Batcher.get_next\r\n 101.772938 0.508865 200 train_on_batch\r\n 195.816539 0.979083 200 Batcher(a)->train_one_batch\r\n 169.204756 0.846024 200 train_on_batch\r\n 26.595766 0.132979 200 Batcher.get_next\r\n```\r\n</p></details>\r\n\r\n \r\n \r\nLet me know what you think. I could prepare patches for both, or just subsets.\r\nFor me, multiprocessing in some form and the warp-to-landmarks speed-up are the important things.\r\n\r\n", "pr_html_url": "https://github.com/deepfakes/faceswap/pull/690", "file_loc": {"base_commit": "a561f5b78bf09e785686b500c4825641b0823791", "files": [{"path": "lib/training_data.py", "status": "modified", "Loc": {"('TrainingDataGenerator', '__init__', 23)": {"add": [34]}, "('TrainingDataGenerator', 'load_batches', 84)": {"add": [89]}, "(None, None, None)": {"mod": [7]}, "('TrainingDataGenerator', 'get_closest_match', 186)": {"mod": [190, 191, 192, 193, 194, 195]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["lib/training_data.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "b057b719ce5665590beb3ba1782721bc6257963a", "iss_has_pr": 1, "iss_html_url": "https://github.com/deepfakes/faceswap/issues/1143", "iss_label": "bug", "title": "Disabling AMD and CUDA sets backend to \"cpu\" in config, but running faceswap -h still tries to load CUDA", "body": "Turning off all GPU related config items during setup does create config/.faceswap, which contains {\"backend\": \"cpu\"}.\r\n\r\nHowever, running faceswap.py -h throws an exception and terminates the program:\r\n\r\nSetting Faceswap backend to CPU\r\nTraceback (most recent call last):\r\n File \"faceswap.py\", line 6, in <module>\r\n from lib.cli import args as cli_args\r\n File \"/Users/mrfredsmoothie/software/faceswap/lib/cli/args.py\", line 13, in <module>\r\n from lib.gpu_stats import GPUStats\r\n File \"/Users/mrfredsmoothie/software/faceswap/lib/gpu_stats.py\", line 17, in <module>\r\n import pynvx # pylint: disable=import-error\r\nModuleNotFoundError: No module named 'pynvx'\r\n\r\nWhy set the backend to CPU and then
choke trying to display options?\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. configure a lack of GPU support\r\n2. try to use the -h option to list available options and commands\r\n\r\n**Expected behavior**\r\nDon't try to load GPU info if no GPU support is configured\r\n\r\n**Desktop (please complete the following information):**\r\n - OS: MacOS X 11.2.3 (M1 arm64)\r\n - Python Version: Python 3.8.8\r\n - Conda Version: 4.10.0\r\n - Commit ID f60eaee", "pr_html_url": "https://github.com/deepfakes/faceswap/pull/1216", "file_loc": {"base_commit": "b057b719ce5665590beb3ba1782721bc6257963a", "files": [{"path": "INSTALL.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [22, 147], "mod": [57]}}}, {"path": "README.md", "status": "modified", "Loc": {"(None, None, None)": {"mod": [22, 40]}}}, {"path": "lib/gpu_stats/__init__.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [18]}}}, {"path": "lib/utils.py", "status": "modified", "Loc": {"('_Backend', '__init__', 32)": {"mod": [33]}, "('_Backend', '_configure_backend', 85)": {"mod": [95, 96]}, "(None, 'set_backend', 122)": {"mod": [127]}}}, {"path": "setup.py", "status": "modified", "Loc": {"('Environment', 'set_config', 284)": {"add": [289]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["lib/gpu_stats/__init__.py", "setup.py", "lib/utils.py"], "doc": ["README.md", "INSTALL.md"], "test": [], "config": [], "asset": []}}, {"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "b057b719ce5665590beb3ba1782721bc6257963a", "iss_has_pr": 1, "iss_html_url": "https://github.com/deepfakes/faceswap/issues/1197", "iss_label": "", "title": "Please add support for Apple M1 Pro/Max", "body": "As we know, Apple has released new silicon, the M1 Pro/Max.\r\n\r\nIt has powerful GPUs and CPUs.\r\n\r\nIs there any chance of running FaceSwap on the new MacBook Pro?\r\n\r\n", "pr_html_url": "https://github.com/deepfakes/faceswap/pull/1216", "file_loc": {"base_commit": "b057b719ce5665590beb3ba1782721bc6257963a", "files": [{"path": "INSTALL.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [22, 147], "mod": [57]}}}, {"path": "README.md", "status": "modified", "Loc": {"(None, None, None)": {"mod": [22, 40]}}}, {"path": "lib/gpu_stats/__init__.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [18]}}}, {"path": "lib/utils.py", "status": "modified", "Loc": {"('_Backend', '__init__', 32)": {"mod": [33]}, "('_Backend', '_configure_backend', 85)": {"mod": [95, 96]}, "(None, 'set_backend', 122)": {"mod": [127]}}}, {"path": "setup.py", "status": "modified", "Loc": {"('Environment', 'set_config', 284)": {"add": [289]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["lib/gpu_stats/__init__.py", "setup.py", "lib/utils.py"], "doc": ["README.md", "INSTALL.md"], "test": [], "config": [], "asset": []}}, {"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "85c5e8b66c00b096c31f416cc4954d611c3fdb14", "iss_has_pr": 1, "iss_html_url": "https://github.com/deepfakes/faceswap/issues/39", "iss_label": "bug\ngood first issue\ndev\nperformance", "title": "Don't reload models every time `convert_one_image` is called", "body": "## Expected behavior\r\n\r\nUse the convert
command to convert a directory. `convert_one_image` loads the model once.\r\n\r\n## Actual behavior\r\n\r\nUse the convert command to convert a directory. `convert_one_image` loads the model every time that it is called.\r\n", "pr_html_url": "https://github.com/deepfakes/faceswap/pull/52", "file_loc": {"base_commit": "85c5e8b66c00b096c31f416cc4954d611c3fdb14", "files": [{"path": "faceswap.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [2, 8, 17, 18, 19, 20]}}}, {"path": "lib/DetectedFace.py", "status": "removed", "Loc": {}}, {"path": "lib/aligner.py", "status": "modified", "Loc": {"(None, 'get_align_mat', 25)": {"mod": [26]}}}, {"path": "lib/cli.py", "status": "modified", "Loc": {"('DirectoryProcessor', 'process_arguments', 34)": {"add": [47], "mod": [49, 51]}, "(None, None, None)": {"mod": [5]}, "('DirectoryProcessor', 'process_directory', 51)": {"mod": [56, 59]}, "('DirectoryProcessor', None, 14)": {"mod": [62]}}}, {"path": "lib/faces_detect.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0], "mod": [3, 4, 28]}, "(None, 'detect_faces', 6)": {"mod": [9, 11, 12, 13, 14, 15, 16]}}}, {"path": "lib/model.py", "status": "removed", "Loc": {}}, {"path": "lib/training_data.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [2], "mod": [45]}, "(None, 'get_training_data', 13)": {"mod": [13, 14, 15, 16, 17, 18, 20, 21, 22, 23, 24, 26, 27, 29]}}}, {"path": "lib/utils.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [12], "mod": [1, 2]}, "(None, 'get_folder', 8)": {"mod": [10]}, "(None, 'load_images', 18)": {"mod": [18, 19, 20, 21, 22, 23, 24, 25, 26]}}}, {"path": "plugins/Convert_Adjust.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0]}, "('Convert', None, 5)": {"mod": [6, 7]}, "('Convert', 'patch_image', 12)": {"mod": [21]}}}, {"path": "plugins/Convert_Masked.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0], "mod": [6]}, "('Convert', None, 8)": {"mod": [9, 10]}, "('Convert', 'get_new_face', 51)": {"mod": [54]}, "('Convert', 'get_image_mask', 58)": {"mod": [67]}}}, {"path": "plugins/Extract_Align.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0]}, "('Extract', 'extract', 6)": {"add": [7]}}}, {"path": "plugins/Extract_Crop.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0]}}}, {"path": "plugins/PluginLoader.py", "status": "modified", "Loc": {"('PluginLoader', None, 2)": {"add": [12]}}}, {"path": "scripts/convert.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [7, 8, 9]}, "('ConvertImage', None, 13)": {"mod": [38, 39, 40, 42, 43, 44, 45]}, "('ConvertImage', 'process_image', 38)": {"mod": [47, 49, 50, 51, 52, 53, 54, 56, 57, 59, 60]}}}, {"path": "scripts/extract.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [5]}, "('ExtractTrainingData', None, 8)": {"mod": [18, 19]}, "('ExtractTrainingData', 'process_image', 18)": {"mod": [22, 23, 24, 25, 26, 28, 29, 30, 31]}}}, {"path": "scripts/train.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [10], "mod": [5, 6, 8, 9]}, "('TrainingProcessor', 'process_arguments', 18)": {"mod": [24, 25, 26, 27, 28, 29, 30]}, "('TrainingProcessor', None, 12)": {"mod": [89, 90, 91, 92, 93, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 107, 108, 109, 111, 113, 114, 115, 116]}, "('TrainingProcessor', 'process', 118)": {"mod": [119, 122, 123, 125, 127, 129, 131, 132, 133, 134, 135, 136, 138, 139, 140, 142, 143, 144, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155]}}}]}, "own_code_loc": [], 
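The fix direction for this report is simply to load the model once and reuse it across calls. A minimal sketch; `Model`, the loader and `convert_one_image` here are stand-ins, not faceswap's real classes:

```python
# Sketch: cache the loaded model so repeated conversion calls reuse it.
from functools import lru_cache

class Model:
    """Pretend swap model; construction is the expensive part."""
    def __init__(self, model_dir: str) -> None:
        print(f"loading model from {model_dir}")  # should appear only once
        self.model_dir = model_dir

    def convert(self, image):
        return image  # no-op stand-in for the actual face swap

@lru_cache(maxsize=None)
def get_model(model_dir: str) -> Model:
    return Model(model_dir)

def convert_one_image(image, model_dir: str = "models"):
    return get_model(model_dir).convert(image)  # cached after the first call

if __name__ == "__main__":
    for name in ("a.png", "b.png", "c.png"):
        convert_one_image(name)  # "loading model" prints once, not three times
```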
"ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["lib/aligner.py", "lib/model.py", "lib/training_data.py", "plugins/Convert_Adjust.py", "plugins/Extract_Align.py", "plugins/Extract_Crop.py", "scripts/train.py", "faceswap.py", "plugins/PluginLoader.py", "plugins/Convert_Masked.py", "lib/DetectedFace.py", "lib/faces_detect.py", "lib/utils.py", "lib/cli.py", "scripts/convert.py", "scripts/extract.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "9438672b1cf80602fc93536670d9601d655377f5", "iss_has_pr": 1, "iss_html_url": "https://github.com/deepfakes/faceswap/issues/213", "iss_label": "code to integrate", "title": "check for duplicates in extract folder", "body": "Hello all,\r\nI have been having trouble with cloud servers shutting down unexpectedly so I edited the original `extract.py` to not overwrite if the image has already been processed in a previous run.\r\n\r\nNote that I am currently assuming an `idx` of `0` (i.e. single face was found in photo, usually denoting successful face extraction - all extracted images with nonzero index have been failures from what I ve seen, please enlighten me further!)\r\n\r\nThis can be handy since somebody may update his image db but should not wait for complete re-extraction!\r\nNote that this is on an earlier version I pulled from this repo so not directly applicable, but I am sure this can be implemented extremely quickly.\r\n\r\nJust thought I'd share this idea: you can have a `-no` flag in the extract command to prevent overwriting.\r\nThoughts? Thanks to all contributors for the good work!\r\n\r\n``` python\r\nimport os\r\ndef process(self):\r\n extractor_name = \"Align\" # TODO Pass as argument\r\n extractor = PluginLoader.get_extractor(extractor_name)()\r\n\r\n try:\r\n for filename in self.read_directory():\r\n output_file = self.output_dir / Path(filename).stem\r\n output_file_to_check = os.path.abspath(str(output_file) +\r\n '0' +\r\n Path(filename).suffix)\r\n if os.path.isfile(output_file_to_check):\r\n print('File {} already exists, will not overwrite'.format(output_file_to_check))\r\n else:\r\n image = cv2.imread(filename)\r\n for idx, face in self.get_faces(image):\r\n resized_image = extractor.extract(image, face, 256)\r\n cv2.imwrite(str(output_file) + str(idx) + Path(filename).suffix, resized_image)\r\n \r\n except Exception as e:\r\n print('Failed to extract from image: {}. 
Reason: {}'.format(filename, e))\r\n```", "pr_html_url": "https://github.com/deepfakes/faceswap/pull/214", "file_loc": {"base_commit": "9438672b1cf80602fc93536670d9601d655377f5", "files": [{"path": "lib/cli.py", "status": "modified", "Loc": {"('DirectoryProcessor', 'process_arguments', 39)": {"add": [53], "mod": [56]}, "('DirectoryProcessor', 'write_alignments', 80)": {"add": [84]}, "('DirectoryProcessor', 'get_faces_alignments', 105)": {"mod": [119]}, "('DirectoryProcessor', 'get_faces', 122)": {"mod": [136]}}}, {"path": "lib/utils.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [2]}, "(None, 'get_image_paths', 14)": {"mod": [14, 15, 16]}}}, {"path": "scripts/extract.py", "status": "modified", "Loc": {"('ExtractTrainingData', 'add_optional_arguments', 22)": {"add": [40]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["lib/cli.py", "lib/utils.py", "scripts/extract.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "3b1b", "repo_name": "manim", "base_commit": "b4ce0b910cd7265d046923162c922be840fa60c8", "iss_has_pr": 1, "iss_html_url": "https://github.com/3b1b/manim/issues/1677", "iss_label": "bug", "title": "Questionable indexing of Tex", "body": "### Describe the bug\r\nWhen I made videos, I used many math equations with specific variables or sub-expressions colored, and noticed there were some bugs in how manim indexes components of `Tex` mobjects. Recently I have been trying to refactor the `Tex` class and fix the coloring bugs, so I dove into the source code of `Tex`, only to find that manim just breaks the original tex string into substrings to build new `SingleStringTex` objects, using their lengths to do the indexing work. As the formula becomes much more complicated, some issues cannot be handled through the `modify_special_strings` method.\r\n\r\n1. Symbols produced by the same command, like `\\sqrt`, may have different shapes or even different numbers of components making them up.\r\n2. The order of symbols may be swapped relative to the original tex string, as with `\\frac`, `\\sum`, and super- and subscripts.\r\n\r\nWhen compiling a tex string, each specified piece of it should be tracked so that the indices of its corresponding components can be found. This, however, may require us to dive right into the nature of how TeX works... I want to look for external tools to finish the "tracking" work.
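To make the length-based bookkeeping just described concrete, here is a schematic, manim-independent sketch: the string is split into pieces and each piece gets a contiguous index range, with `len(piece)` standing in for the number of rendered components. The two numbered points above are exactly where this breaks: the real component count and order need not match the substring.

```python
# Schematic illustration of length-based submobject indexing (not manim code).
# len(piece) stands in for the component count of each rendered substring;
# in real TeX output that count varies and glyphs can be reordered.
import re

TEX = r"a_{n} + \sum_{m=0}^{n} b_{m}"
pieces = [p for p in re.split(r"(\\sum|b)", TEX) if p]

start = 0
for piece in pieces:
    count = len(piece)  # the questionable assumption
    print(f"{piece!r:26} -> component indices [{start}, {start + count})")
    start += count
```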
I'm not even sure whether there're approaches to fixing this issue perfectly...\r\n\r\n**Code**:\r\nThis is just a combination of all messy stuff, so don't care about its actual meaning...\r\n```python\r\nfrom manimlib import *\r\n\r\n\r\nTEST_STR = \"\"\"\\\\lim_{n \\\\to \\\\infty} \\\\left\\\\lfloor\r\n\\\\sqrt{\\\\frac{1}{n !} \\\\mathrm{e}^{n} a_{n} + b_{n}^{p}} \\\\otimes\r\n\\\\sqrt[n]{\\\\sum_{m = 0}^{n^{2}} \\\\tilde{c}_{m \\\\cdot n}^{b_{n}^{p}\r\n\\\\cos \\\\left( \\\\theta \\\\right)}} \\\\right\\\\rfloor\"\"\".replace(\"\\n\", \" \")\r\n\r\n\r\nclass TestScene(Scene):\r\n def construct(self):\r\n tex1 = Tex(\r\n TEST_STR,\r\n fill_color=TEAL\r\n )\r\n tex1.shift(2 * UP)\r\n tex2 = Tex(\r\n TEST_STR,\r\n tex_to_color_map={\"b\": LIGHT_PINK, \"\\\\sum\": YELLOW},\r\n fill_color=TEAL\r\n )\r\n tex2.shift(2 * DOWN)\r\n sub_tex = VGroup(*[\r\n Tex(s, fill_color=BLUE)\r\n for s in re.split(r\"(b|\\\\sum)\", TEST_STR)\r\n ]).scale(0.8).arrange(RIGHT, buff=0.7)\r\n self.add(tex1, tex2, sub_tex)\r\n\r\n # Labels of indices for debugging\r\n self.add(\r\n # index_labels(tex1[0]),\r\n # *[index_labels(submob) for submob in tex2],\r\n # *[index_labels(submob[0]) for submob in sub_tex]\r\n )\r\n```\r\n\r\n**Wrong display or Error traceback**:\r\n\r\n![TestScene](https://user-images.githubusercontent.com/50232075/141300447-626c63f2-a559-45df-ae09-a0ef2d2de184.png)\r\n![TestScene](https://user-images.githubusercontent.com/50232075/141300582-ec2db7c8-44f6-47c2-9036-45e27ce51cb5.png)\r\n\r\n", "pr_html_url": "https://github.com/3b1b/manim/pull/1678", "file_loc": {"base_commit": "b4ce0b910cd7265d046923162c922be840fa60c8", "files": [{"path": "manimlib/__init__.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [39]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["manimlib/__init__.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "3b1b", "repo_name": "manim", "base_commit": "05db6174e9d677fe26eb863592d88e5cf02cf8cb", "iss_has_pr": 1, "iss_html_url": "https://github.com/3b1b/manim/issues/28", "iss_label": "", "title": "Windows 10 - No module named . (period)", "body": "I've tried the python extract_scene.py -p example_scenes.py SquareToCircle example on cmd, and I get the above error. \r\n*I've looked around and it seems that a few a people have had this problem, but I can't find any one who has a solution. .(period) is syntax for relative import, but I don't know how to fix from there. 
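The linked PR reworks `get_module` in `extract_scene.py` (see the file_loc that follows). One way to load a scene file by explicit path, sidestepping dotted relative imports entirely, is `importlib`; a sketch, assuming an `example_scenes.py` in the working directory:

```python
# Sketch: load a module from a file path with importlib, avoiding
# package-relative imports. Names here are illustrative.
import importlib.util

def get_module(file_path: str):
    spec = importlib.util.spec_from_file_location("scene_module", file_path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

if __name__ == "__main__":
    mod = get_module("example_scenes.py")  # assumed to exist locally
    print([n for n in dir(mod) if not n.startswith("_")])
```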
", "pr_html_url": "https://github.com/3b1b/manim/pull/38", "file_loc": {"base_commit": "05db6174e9d677fe26eb863592d88e5cf02cf8cb", "files": [{"path": "extract_scene.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [9, 161]}, "(None, 'get_module', 154)": {"mod": [154, 156, 158]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["extract_scene.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "3b1b", "repo_name": "manim", "base_commit": "43d28a8595450d39f800f650c25a7570b228db44", "iss_has_pr": 1, "iss_html_url": "https://github.com/3b1b/manim/issues/627", "iss_label": "", "title": "Text rendering problem", "body": "### Steps to reproduce\r\n```\r\nfrom manimlib.imports import *\r\n\r\nclass Playground(Scene):\r\n def construct(self):\r\n text = TextMobject(\"print('Hello, world!')\",\r\n tex_to_color_map={'print': YELLOW})\r\n self.play(FadeIn(text))\r\n```\r\n### The unexpected behavior that occurred\r\nNotice between the 'print' and the '(' there is no space.\r\n\r\nWrong\r\n![wrong](https://user-images.githubusercontent.com/47266984/60874808-2fe13700-a26b-11e9-8042-85041a680656.png)\r\n\r\nRight\r\n![right](https://user-images.githubusercontent.com/47266984/60874825-35d71800-a26b-11e9-9d20-2dcc2fe515d5.png)\r\n### Solution\r\nI changed [here](https://github.com/3b1b/manim/blob/master/manimlib/mobject/svg/tex_mobject.py) in line 134\r\n\r\n`\"arg_separator\": \" \", --> \"arg_separator\": \"\",`\r\n\r\nand also commented out [here](https://github.com/3b1b/manim/blob/master/manimlib/mobject/svg/tex_mobject.py) in line 160\r\n```\r\nsplit_list = [str(x).strip() for x in split_list] --> \r\n#split_list = [str(x).strip() for x in split_list]\r\n```\r\nI also find some interesting in TexMobject(), no matter how many space I enter, like\r\n\r\n`text = TexMobject(\"print ('Hello, world!')\")`\r\n\r\nit always ignored those space.", "pr_html_url": "https://github.com/3b1b/manim/pull/628", "file_loc": {"base_commit": "43d28a8595450d39f800f650c25a7570b228db44", "files": [{"path": "manimlib/mobject/svg/tex_mobject.py", "status": "modified", "Loc": {"('TextMobject', None, 241)": {"add": [244]}, "('TexMobject', 'break_up_tex_strings', 152)": {"mod": [160]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "3", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["manimlib/mobject/svg/tex_mobject.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "3b1b", "repo_name": "manim", "base_commit": "3362f93964cae6f610a47d2da0e076b51a9eab42", "iss_has_pr": 1, "iss_html_url": "https://github.com/3b1b/manim/issues/1017", "iss_label": "", "title": "Text(\" \") don't move. 
Because of that Text(\"a b\") shows wrong transform animation.", "body": "```python \r\nclass test(Scene):\r\n def construct(self):\r\n text = Text(\" \")\r\n text.to_corner(DOWN+LEFT)\r\n rect = SurroundingRectangle(text)\r\n self.add(text,rect)\r\n```\r\n## Output \r\n![test](https://user-images.githubusercontent.com/30471072/80776058-f7ef0b80-8b7e-11ea-9827-e20dbea176bc.png)\r\n\r\n```python\r\nclass test(Scene):\r\n def construct(self):\r\n text = Text(\"a b\")\r\n text1 = Text(\"123\")\r\n text.to_corner(DOWN+LEFT)\r\n text1.to_edge(RIGHT+DOWN)\r\n rect = SurroundingRectangle(text)\r\n self.add(text,rect)\r\n self.play(Transform(text,text1))\r\n self.wait()\r\n```\r\n## Output\r\n![ezgif-6-9891bd47c1c2](https://user-images.githubusercontent.com/30471072/80776516-320cdd00-8b80-11ea-8f1e-50fddab4995c.gif)\r\n\r\n\r\n", "pr_html_url": "https://github.com/3b1b/manim/pull/1035", "file_loc": {"base_commit": "3362f93964cae6f610a47d2da0e076b51a9eab42", "files": [{"path": "manimlib/mobject/svg/svg_mobject.py", "status": "modified", "Loc": {"('SVGMobject', 'get_mobjects_from', 76)": {"mod": [90, 91, 92]}}}, {"path": "manimlib/mobject/svg/text_mobject.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [7]}, "('Text', None, 25)": {"add": [46]}, "('Text', '__init__', 49)": {"add": [52, 57], "mod": [50, 81]}, "('Text', 'remove_last_M', 83)": {"mod": [86]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["manimlib/mobject/svg/svg_mobject.py", "manimlib/mobject/svg/text_mobject.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "3b1b", "repo_name": "manim", "base_commit": "994749ceadf9f87f2ebe40bbb795fbb2b696f377", "iss_has_pr": 1, "iss_html_url": "https://github.com/3b1b/manim/issues/39", "iss_label": "", "title": "Python version problem?", "body": "While running the demo, ( python extract_scene.py -p example_scenes.py SquareToCirclepython extract_scene.py -p example_scenes.py SquareToCircle ) I get the following exception: \r\n\r\n File \"extract_scene.py\", line 46\r\n print str(err)\r\n ^\r\nSyntaxError: invalid syntax\r\n\r\nI believe it is somehow related to python version, right?", "pr_html_url": "https://github.com/3b1b/manim/pull/97", "file_loc": {"base_commit": "994749ceadf9f87f2ebe40bbb795fbb2b696f377", "files": [{"path": "active_projects/WindingNumber.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0]}, "(None, 'point_to_rev', 381)": {"add": [384], "mod": [381, 386]}, "('TestDual', 'construct', 86)": {"mod": [88]}, "(None, 'split_interval', 414)": {"mod": [414]}, "('RectangleData', 'splits_on_dim', 446)": {"mod": [456]}, "('RectangleData', 'split_line_on_dim', 460)": {"mod": [469]}, "(None, 'plane_poly_with_roots', 476)": {"mod": [477]}, "(None, 'plane_func_from_complex_func', 481)": {"mod": [482]}, "(None, 'point_func_from_complex_func', 484)": {"mod": [485]}, "('LoopSplitSceneMapped', 'setup', 954)": {"mod": [957]}}}, {"path": "active_projects/basel.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [2]}, "(None, 'show_line_length', 53)": {"mod": [55]}}}, {"path": "active_projects/fourier.py", "status": "modified", "Loc": {"('AddingPureFrequencies', 'play_mix', 276)": {"mod": [283]}, "('AddingPureFrequencies', 'separate_out_parts', 288)": {"mod": [314]}, "('WrapCosineGraphAroundCircle', 'show_initial_signal', 1073)": {"mod": [1082]}, 
"('ShowLowerFrequency', 'show_lower_frequency_signal', 1663)": {"mod": [1678]}, "('ShowLinearity', 'show_sum_of_signals', 1820)": {"mod": [1830]}, "('ShowCommutativeDiagram', 'apply_transform', 2077)": {"mod": [2084]}, "('FilterOutHighPitch', 'show_intensity_vs_time_graph', 2239)": {"mod": [2272]}, "('FilterOutHighPitch', 'get_broadcast_anims', 2412)": {"mod": [2421]}, "('WriteComplexExponentialExpression', 'show_eulers_formula', 2703)": {"mod": [2752]}, "('ScaleUpCenterOfMass', 'scale_up_center_of_mass', 3236)": {"mod": [3279, 3364]}, "('SummarizeFormula', 'construct', 3739)": {"mod": [3749]}, "('BoundsAtInfinity', 'construct', 3790)": {"mod": [3807]}, "('BoundsAtInfinity', 'get_time_interval', 3889)": {"mod": [3892]}, "('ShowUncertaintyPrinciple', 'construct', 3921)": {"mod": [3972]}}}, {"path": "animation/__init__.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0], "mod": [7]}}}, {"path": "animation/continual_animation.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0], "mod": [3]}}}, {"path": "animation/playground.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0, 9], "mod": [4, 5]}, "('Vibrate', 'update_mobject', 37)": {"mod": [45]}}}, {"path": "animation/simple_animations.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0], "mod": [9, 10, 11]}}}, {"path": "animation/transform.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0], "mod": [9]}, "('ApplyMethod', '__init__', 133)": {"mod": [145, 153, 154, 155]}}}, {"path": "camera/__init__.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1]}}}, {"path": "camera/camera.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [10]}, "('Camera', 'display_image_mobject', 245)": {"mod": [296]}, "('Camera', 'overlay_rgba_array', 312)": {"mod": [315]}}}, {"path": "eop/bayes.py", "status": "modified", "Loc": {"('UpdatePokerPrior', 'get_prior_labels', 1038)": {"mod": [1046]}, "('MusicExample', 'record_track', 1661)": {"mod": [1665]}}}, {"path": "eop/bayes_footnote.py", "status": "modified", "Loc": {"('TryUnitSquareVisual', 'add_prior_division', 509)": {"mod": [517]}, "('ShowRestrictedSpace', 'fade_out_negative_result_individuals', 685)": {"mod": [703]}, "('CompareNumbersInBothExamples', 'construct', 1370)": {"mod": [1385, 1393]}}}, {"path": "eop/combinations.py", "status": "modified", "Loc": {"('ExperienceProblemSolver', 'think_about_patterns', 175)": {"mod": [209]}, "('IntroducePascalsTriangle', 'show_triangle', 1801)": {"mod": [1810]}, "('StacksApproachBellCurve', 'construct', 2059)": {"mod": [2149]}, "('ChooseThreeFromFive', 'that_phrase_is_confusing', 2380)": {"mod": [2441]}, "('ChooseThreeFromFive', 'get_names', 2488)": {"mod": [2491]}, "('StudentsGetConfused', 'create_pi_creatures', 2699)": {"mod": [2702]}}}, {"path": "eop/independence.py", "status": "modified", "Loc": {"('MeaningOfIndependence', 'align_conditionals', 229)": {"mod": [236]}, "('ThousandPossibleQuizzes', 'ask_about_second_question', 948)": {"mod": [956]}, "('ShowAllEightConditionals', 'show_all_conditionals', 1505)": {"mod": [1516]}, "('NameBinomial', 'add_quiz_questions', 2311)": {"mod": [2336]}, "('CycleThroughPatterns', 'construct', 2527)": {"mod": [2560]}, "('CorrectForDependence', 'get_arrow_flip_anims', 3089)": {"mod": [3096]}, "('CompareTwoSituations', 'construct', 3289)": {"mod": [3293]}, "('SkepticalOfDistributions', 'get_binomial', 3417)": {"mod": [3421]}}}, {"path": "example_scenes.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [3, 5, 6, 
7, 8, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 24]}, "('WarpSquare', 'construct', 47)": {"mod": [50]}}}, {"path": "extract_scene.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [2], "mod": [14, 15, 16]}, "(None, 'main', 201)": {"mod": [228]}}}, {"path": "helpers.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0], "mod": [13]}, "(None, 'make_even_by_cycling', 486)": {"mod": [491, 492]}}}, {"path": "mobject/__init__.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0], "mod": [7, 8, 9, 10]}}}, {"path": "mobject/image_mobject.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0], "mod": [8, 9]}}}, {"path": "mobject/mobject.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [8]}, "('Mobject', 'apply_complex_function', 200)": {"mod": [202]}, "('Mobject', 'align_submobjects', 759)": {"mod": [764]}}}, {"path": "mobject/point_cloud_mobject.py", "status": "modified", "Loc": {"('PMobject', 'pointwise_become_partial', 149)": {"mod": [152]}}}, {"path": "mobject/region.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0], "mod": [6]}}}, {"path": "mobject/svg_mobject.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0], "mod": [4]}, "('SVGMobject', 'circle_to_mobject', 116)": {"mod": [121]}}}, {"path": "mobject/tex_mobject.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0, 8], "mod": [3, 4]}}}, {"path": "mobject/vectorized_mobject.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0, 5], "mod": [3]}, "('VMobject', 'set_points_as_corners', 177)": {"mod": [183]}}}, {"path": "old_projects/bell.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0], "mod": [32]}, "('PhotonsThroughPerpendicularFilters', 'get_photons', 213)": {"mod": [223]}, "('PhotonsThroughPerpendicularFilters', 'get_probability_text', 226)": {"mod": [247]}, "('ShowVariousFilterPairsWithPhotonsOverTime', None, 615)": {"mod": [620]}, "('ShowVariousFilterPairs', 'get_lines', 859)": {"mod": [868]}, "('ShowVariousFilterPairsFrom0To45', 'mention_probabilities', 898)": {"mod": [908]}, "('ForgetPreviousActions', None, 921)": {"mod": [926]}, "('VennDiagramProofByContradiction', 'draw_venn_diagram', 1395)": {"mod": [1423]}, "('VennDiagramProofByContradiction', 'setup_venn_diagram_sections', 1998)": {"mod": [2006]}, "('NoFirstMeasurementPreferenceBasedOnDirection', None, 2408)": {"mod": [2413]}}}, {"path": "old_projects/borsuk.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [27]}, "('WalkEquatorPostTransform', 'get_transverse_curve', 966)": {"mod": [972]}, "('ChoicesInNecklaceCutting', 'get_groups', 1827)": {"mod": [1847]}, "('ChoicesInNecklaceCutting', 'get_boxes_and_labels', 1852)": {"mod": [1864]}, "('NecklaceDivisionSphereAssociation', 'show_binary_choice_association', 2101)": {"mod": [2112]}, "('TotalLengthOfEachJewelEquals', 'demonstrate_fair_division', 2228)": {"mod": [2245]}, "('ShowFunctionDiagram', 'add_number_pair', 2327)": {"mod": [2333]}}}, {"path": "old_projects/brachistochrone/curves.py", "status": "modified", "Loc": {"('TransitionAwayFromSlide', 'construct', 368)": {"mod": [376]}}}, {"path": "old_projects/brachistochrone/cycloid.py", "status": "modified", "Loc": {"('CycloidScene', 'grow_parts', 57)": {"mod": [60]}, "('LeviSolution', 'show_diameter', 289)": {"mod": [319]}}}, {"path": "old_projects/brachistochrone/drawing_images.py", "status": "modified", "Loc": {"('NewtonVsJohann', 'construct', 275)": {"mod": [278]}, "('JohannThinksOfFermat', 
'construct', 297)": {"mod": [300]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["eop/bayes.py", "mobject/svg_mobject.py", "mobject/tex_mobject.py", "animation/playground.py", "eop/bayes_footnote.py", "mobject/__init__.py", "active_projects/WindingNumber.py", "animation/transform.py", "eop/combinations.py", "helpers.py", "animation/__init__.py", "mobject/mobject.py", "eop/independence.py", "old_projects/brachistochrone/drawing_images.py", "mobject/image_mobject.py", "camera/camera.py", "animation/simple_animations.py", "mobject/region.py", "old_projects/brachistochrone/curves.py", "camera/__init__.py", "mobject/point_cloud_mobject.py", "old_projects/brachistochrone/cycloid.py", "mobject/vectorized_mobject.py", "active_projects/fourier.py", "old_projects/borsuk.py", "old_projects/bell.py", "active_projects/basel.py", "extract_scene.py", "example_scenes.py", "animation/continual_animation.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "All-Hands-AI", "repo_name": "OpenHands", "base_commit": "cf439fa89cf45a5462336a10c3dfee4ab4c0ace8", "iss_has_pr": 1, "iss_html_url": "https://github.com/All-Hands-AI/OpenHands/issues/7060", "iss_label": "bug\nopenhands", "title": "[Bug]: Obsolete attribute in a unit test file", "body": "### Is there an existing issue for the same bug?\n\n- [x] I have checked the existing issues.\n\n### Describe the bug and reproduction steps\n\nopenhands-agent,\n\nThe file test_long_term_memory.py uses an attribute 'micro_agent_name' which is obsolete and has been removed from AgentConfig.\n\nPlease remove it from the tests too.\n\nYou ONLY need to work with test_long_term_memory.py, no other files, I took care of everything else.\n\n### OpenHands Installation\n\nOther\n\n### OpenHands Version\n\n_No response_\n\n### Operating System\n\nNone\n\n### Logs, Errors, Screenshots, and Additional Context\n\n_No response_", "pr_html_url": "https://github.com/All-Hands-AI/OpenHands/pull/7061", "file_loc": {"base_commit": "cf439fa89cf45a5462336a10c3dfee4ab4c0ace8", "files": [{"path": "tests/unit/test_long_term_memory.py", "status": "modified", "Loc": {"(None, 'mock_agent_config', 24)": {"mod": [26]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": [], "test": ["tests/unit/test_long_term_memory.py"], "config": [], "asset": []}}, {"organization": "All-Hands-AI", "repo_name": "OpenHands", "base_commit": "6e3b554317de7bc5d96ef81b4097287e05c0c4d0", "iss_has_pr": 1, "iss_html_url": "https://github.com/All-Hands-AI/OpenHands/issues/226", "iss_label": "enhancement\nbackend", "title": "Redesign docker sandbox", "body": "**What problem or use case are you trying to solve?**\r\nWe're using `exec_run` to run commands in the sandbox. This isn't stateful, and doesn't handle CLI interactions via stdin very well.\r\n\r\nThings we struggle with today:\r\n* We don't keep track of cd commands\r\n* The agent can't interact with stdin (e.g. it runs apt-get install without -y, it wants to type y to get through)\r\n * this is more important if we e.g. 
ask the agent to develop an interactive CLI that it needs to test\r\n* [Can't use apt-get install in sandbox](https://github.com/OpenDevin/OpenDevin/issues/216) (due to permissions)\r\n* [kill doesn't work](https://github.com/OpenDevin/OpenDevin/issues/179)\r\n\r\n**Describe the UX of the solution you'd like**\r\nSomething closer to @xingyaoww 's original implementation: https://github.com/xingyaoww/OpenDevin/blob/8815aa95ba770110e9d6a4839fb7f9cef01ef4d7/opendevin/sandbox/docker.py\r\n\r\n**Do you have thoughts on the technical implementation?**\r\nCan we start the container, then connect an ssh or pty session?\r\n\r\n**Describe alternatives you've considered**\r\n* Hacking around `exec` \ud83d\udc4e \r\n", "pr_html_url": "https://github.com/All-Hands-AI/OpenHands/pull/847", "file_loc": {"base_commit": "6e3b554317de7bc5d96ef81b4097287e05c0c4d0", "files": [{"path": "opendevin/sandbox/Dockerfile", "status": "modified", "Loc": {"(None, None, None)": {"add": [16, 17]}}}, {"path": "opendevin/sandbox/Makefile", "status": "modified", "Loc": {"(None, None, None)": {"mod": [4]}}}, {"path": "opendevin/sandbox/sandbox.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [6], "mod": [11]}, "('DockerInteractive', '__init__', 93)": {"add": [134, 136]}, "('DockerInteractive', None, 88)": {"add": [148]}, "('DockerInteractive', 'restart_docker_container', 255)": {"add": [273], "mod": [270]}, "('DockerInteractive', 'setup_devin_user', 139)": {"mod": [141, 142, 143, 144, 145]}, "('DockerInteractive', 'get_exec_cmd', 149)": {"mod": [151]}, "('DockerInteractive', 'execute', 161)": {"mod": [162, 163, 164, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181]}}}, {"path": "poetry.lock", "status": "modified", "Loc": {"(None, None, None)": {"add": [3365, 3598, 3911, 3916, 3921, 3926, 3931, 3949], "mod": [5877]}}}, {"path": "pyproject.toml", "status": "modified", "Loc": {"(None, None, None)": {"add": [25]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["opendevin/sandbox/sandbox.py"], "doc": [], "test": [], "config": ["pyproject.toml", "opendevin/sandbox/Makefile", "opendevin/sandbox/Dockerfile", "poetry.lock"], "asset": []}}, {"organization": "All-Hands-AI", "repo_name": "OpenHands", "base_commit": "07f0d1ccb347d1c67a189d53c7147916d05cd528", "iss_has_pr": 1, "iss_html_url": "https://github.com/All-Hands-AI/OpenHands/issues/4783", "iss_label": "bug\nfix-me", "title": "[Bug]: Tool call metadata should NOT be None when function calling is enabled", "body": "### Is there an existing issue for the same bug?\n\n- [X] I have checked the existing issues.\n\n### Describe the bug and reproduction steps\n\n1. Manually run command in the client terminal (e.g., `pwd`)\r\n2. 
Error is thrown\n\n### OpenHands Installation\n\nDocker command in README\n\n### OpenHands Version\n\nmain\n\n### Operating System\n\nNone\n\n### Logs, Errors, Screenshots, and Additional Context\n\n<img width=\"395\" alt=\"image\" src=\"https://github.com/user-attachments/assets/9afa3669-863f-4d16-97ff-9e5f21fffd3e\">\r\n", "pr_html_url": "https://github.com/All-Hands-AI/OpenHands/pull/4955", "file_loc": {"base_commit": "07f0d1ccb347d1c67a189d53c7147916d05cd528", "files": [{"path": "openhands/agenthub/codeact_agent/codeact_agent.py", "status": "modified", "Loc": {"('CodeActAgent', 'get_action_message', 112)": {"add": [186], "mod": [151, 156]}, "('CodeActAgent', 'get_observation_message', 189)": {"mod": [222, 223, 224]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["openhands/agenthub/codeact_agent/codeact_agent.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "All-Hands-AI", "repo_name": "OpenHands", "base_commit": "123968f887a5eb101b549472805e4b9e4ac7bce0", "iss_has_pr": 1, "iss_html_url": "https://github.com/All-Hands-AI/OpenHands/issues/1686", "iss_label": "bug\nseverity:low", "title": "[Bug]: Error creating controller", "body": "### Is there an existing issue for the same bug?\n\n- [X] I have checked the troubleshooting document at https://opendevin.github.io/OpenDevin/modules/usage/troubleshooting\n- [X] I have checked the existing issues.\n\n### Describe the bug\n\nI followed the quickstart guide and was able to open the UI, but I keep getting \"Error creating controller\". I checked the troubleshooting doc and verified that Docker is running using `docker ps`. I also checked existing issues and saw people saying that modifying the config.toml file with a line saying `SANDBOX_TYPE=\"exec\"` might fix it. However with the (new?) installation method through Docker, there are no files to modify as the image is already made. Another thing I thought it might be is that I'm on Windows and WSL might not have the right permissions set? I'm not sure how to troubleshoot that. 
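One concrete check for this failure mode (the log below shows `stat: cannot statx '/var/run/docker.sock'`): verify the socket is actually mounted before building the Docker client. A sketch only, not OpenDevin's actual startup code; the double-slash hint is a common workaround for MSYS/Git Bash path rewriting, not something taken from this report:

```python
# Pre-flight check before docker.from_env(): if the Docker socket was not
# mounted into the container, fail with a pointed message instead of the
# generic "Error while fetching server API version".
import os

import docker  # pip install docker

SOCK = "/var/run/docker.sock"

def get_docker_client() -> docker.DockerClient:
    if not os.path.exists(SOCK):
        raise RuntimeError(
            f"{SOCK} is missing inside the container; re-run with "
            "-v /var/run/docker.sock:/var/run/docker.sock "
            "(on Git Bash, //var/run/docker.sock avoids path mangling)"
        )
    return docker.from_env()

if __name__ == "__main__":
    print(get_docker_client().version()["ApiVersion"])
```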
\n\n### Current Version\n\n```bash\nDocker Desktop 4.29.0 (145265)\n```\n\n\n### Installation and Configuration\n\n```bash\nAlyssa@LAPTOP-U1RNRHQR MINGW64 ~\r\n$ cd C:/Users/Alyssa/Documents/opendevintesting\r\n\r\nAlyssa@LAPTOP-U1RNRHQR MINGW64 ~/Documents/opendevintesting\r\n$ docker run \\\r\n --pull=always \\\r\n -e SANDBOX_USER_ID=$(id -u) \\\r\n -e WORKSPACE_MOUNT_PATH=\"C:\\Users\\Alyssa\\Documents\\opendevintesting\" \\\r\n -v \"C:\\Users\\Alyssa\\Documents\\opendevintesting:/opt/workspace_base\" \\\r\n -v /var/run/docker.sock:/var/run/docker.sock \\\r\n -p 3000:3000 \\\r\n --add-host host.docker.internal:host-gateway \\\r\n ghcr.io/opendevin/opendevin:0.5\r\n0.5: Pulling from opendevin/opendevin\r\nDigest: sha256:322c5ddcc40f0ac3b6727f63dda9fab87fea3cc1e90a1359f7229529a2c89684\r\nStatus: Image is up to date for ghcr.io/opendevin/opendevin:0.5\r\nuseradd warning: enduser's uid 197611 outside of the UID_MIN 499 and UID_MAX 60000 range.\r\nstat: cannot statx '/var/run/docker.sock': No such file or directory\r\nDocker socket group id:\r\nUsage: usermod [options] LOGIN\r\n\r\nOptions:\r\n -a, --append append the user to the supplemental GROUPS\r\n mentioned by the -G option without removing\r\n the user from other groups\r\n -b, --badname allow bad names\r\n -c, --comment COMMENT new value of the GECOS field\r\n -d, --home HOME_DIR new home directory for the user account\r\n -e, --expiredate EXPIRE_DATE set account expiration date to EXPIRE_DATE\r\n -f, --inactive INACTIVE set password inactive after expiration\r\n to INACTIVE\r\n -g, --gid GROUP force use GROUP as new primary group\r\n -G, --groups GROUPS new list of supplementary GROUPS\r\n -h, --help display this help message and exit\r\n -l, --login NEW_LOGIN new value of the login name\r\n -L, --lock lock the user account\r\n -m, --move-home move contents of the home directory to the\r\n new location (use only with -d)\r\n -o, --non-unique allow using duplicate (non-unique) UID\r\n -p, --password PASSWORD use encrypted password for the new password\r\n -P, --prefix PREFIX_DIR prefix directory where are located the /etc/* files\r\n -r, --remove remove the user from only the supplemental GROUPS\r\n mentioned by the -G option without removing\r\n the user from other groups\r\n -R, --root CHROOT_DIR directory to chroot into\r\n -s, --shell SHELL new login shell for the user account\r\n -u, --uid UID new UID for the user account\r\n -U, --unlock unlock the user account\r\n -v, --add-subuids FIRST-LAST add range of subordinate uids\r\n -V, --del-subuids FIRST-LAST remove range of subordinate uids\r\n -w, --add-subgids FIRST-LAST add range of subordinate gids\r\n -W, --del-subgids FIRST-LAST remove range of subordinate gids\r\n -Z, --selinux-user SEUSER new SELinux user mapping for the user account\r\n\r\nINFO: Started server process [27]\r\nINFO: Waiting for application startup.\r\nINFO: Application startup complete.\r\nINFO: Uvicorn running on http://0.0.0.0:3000 (Press CTRL+C to quit)\r\nINFO: 172.17.0.1:54844 - \"GET / HTTP/1.1\" 307 Temporary Redirect\r\nINFO: ('172.17.0.1', 54856) - \"WebSocket /ws?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzaWQiOiIzOWZhODFmYS02YjhiLTQzMzYtODBjZi0zNzU0NjQ3ZTg0MDAifQ.5XamgoC0qQvuxmY_WKRufEKkSBrWNHJcvsB8NR_RycE\" [accepted]\r\nINFO: connection open\r\n06:56:44 - opendevin:INFO: agent.py:125 - Creating agent CodeActAgent using LLM gpt-3.5-turbo\r\n06:56:44 - opendevin:INFO: llm.py:78 - Initializing LLM with model: gpt-3.5-turbo\r\n06:56:44 - opendevin:INFO: ssh_box.py:68 - SSHBox is running as 
opendevin user with USER_ID=197611 in the sandbox\r\n06:56:44 - opendevin:ERROR: ssh_box.py:75 - Error creating controller. Please check Docker is running and visit `https://opendevin.github.io/OpenDevin/modules/usage/troubleshooting` for more debugging information.\r\n06:56:44 - opendevin:ERROR: agent.py:138 - Error creating controller: Error while fetching server API version: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))\r\nTraceback (most recent call last):\r\n File \"/app/.venv/lib/python3.12/site-packages/urllib3/connectionpool.py\", line 793, in urlopen\r\n response = self._make_request(\r\n ^^^^^^^^^^^^^^^^^^^\r\n File \"/app/.venv/lib/python3.12/site-packages/urllib3/connectionpool.py\", line 496, in _make_request\r\n conn.request(\r\n File \"/app/.venv/lib/python3.12/site-packages/urllib3/connection.py\", line 400, in request\r\n self.endheaders()\r\n File \"/usr/local/lib/python3.12/http/client.py\", line 1331, in endheaders\r\n self._send_output(message_body, encode_chunked=encode_chunked)\r\n File \"/usr/local/lib/python3.12/http/client.py\", line 1091, in _send_output\r\n self.send(msg)\r\n File \"/usr/local/lib/python3.12/http/client.py\", line 1035, in send\r\n self.connect()\r\n File \"/app/.venv/lib/python3.12/site-packages/docker/transport/unixconn.py\", line 27, in connect\r\n sock.connect(self.unix_socket)\r\nFileNotFoundError: [Errno 2] No such file or directory\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/app/.venv/lib/python3.12/site-packages/requests/adapters.py\", line 486, in send\r\n resp = conn.urlopen(\r\n ^^^^^^^^^^^^^\r\n File \"/app/.venv/lib/python3.12/site-packages/urllib3/connectionpool.py\", line 847, in urlopen\r\n retries = retries.increment(\r\n ^^^^^^^^^^^^^^^^^^\r\n File \"/app/.venv/lib/python3.12/site-packages/urllib3/util/retry.py\", line 470, in increment\r\n raise reraise(type(error), error, _stacktrace)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/app/.venv/lib/python3.12/site-packages/urllib3/util/util.py\", line 38, in reraise\r\n raise value.with_traceback(tb)\r\n File \"/app/.venv/lib/python3.12/site-packages/urllib3/connectionpool.py\", line 793, in urlopen\r\n response = self._make_request(\r\n ^^^^^^^^^^^^^^^^^^^\r\n File \"/app/.venv/lib/python3.12/site-packages/urllib3/connectionpool.py\", line 496, in _make_request\r\n conn.request(\r\n File \"/app/.venv/lib/python3.12/site-packages/urllib3/connection.py\", line 400, in request\r\n self.endheaders()\r\n File \"/usr/local/lib/python3.12/http/client.py\", line 1331, in endheaders\r\n self._send_output(message_body, encode_chunked=encode_chunked)\r\n File \"/usr/local/lib/python3.12/http/client.py\", line 1091, in _send_output\r\n self.send(msg)\r\n File \"/usr/local/lib/python3.12/http/client.py\", line 1035, in send\r\n self.connect()\r\n File \"/app/.venv/lib/python3.12/site-packages/docker/transport/unixconn.py\", line 27, in connect\r\n sock.connect(self.unix_socket)\r\nurllib3.exceptions.ProtocolError: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/app/.venv/lib/python3.12/site-packages/docker/api/client.py\", line 213, in _retrieve_server_version\r\n return self.version(api_version=False)[\"ApiVersion\"]\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File 
\"/app/.venv/lib/python3.12/site-packages/docker/api/daemon.py\", line 181, in version\r\n return self._result(self._get(url), json=True)\r\n ^^^^^^^^^^^^^^\r\n File \"/app/.venv/lib/python3.12/site-packages/docker/utils/decorators.py\", line 44, in inner\r\n return f(self, *args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/app/.venv/lib/python3.12/site-packages/docker/api/client.py\", line 236, in _get\r\n return self.get(url, **self._set_request_timeout(kwargs))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/app/.venv/lib/python3.12/site-packages/requests/sessions.py\", line 602, in get\r\n return self.request(\"GET\", url, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/app/.venv/lib/python3.12/site-packages/requests/sessions.py\", line 589, in request\r\n resp = self.send(prep, **send_kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/app/.venv/lib/python3.12/site-packages/requests/sessions.py\", line 703, in send\r\n r = adapter.send(request, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/app/.venv/lib/python3.12/site-packages/requests/adapters.py\", line 501, in send\r\n raise ConnectionError(err, request=request)\r\nrequests.exceptions.ConnectionError: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/app/opendevin/server/agent/agent.py\", line 130, in create_controller\r\n self.controller = AgentController(\r\n ^^^^^^^^^^^^^^^^\r\n File \"/app/opendevin/controller/agent_controller.py\", line 82, in __init__\r\n self.action_manager = ActionManager(self.id)\r\n ^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/app/opendevin/controller/action_manager.py\", line 39, in __init__\r\n self.sandbox = DockerSSHBox(\r\n ^^^^^^^^^^^^^\r\n File \"/app/opendevin/runtime/docker/ssh_box.py\", line 79, in __init__\r\n raise ex\r\n File \"/app/opendevin/runtime/docker/ssh_box.py\", line 73, in __init__\r\n self.docker_client = docker.from_env()\r\n ^^^^^^^^^^^^^^^^^\r\n File \"/app/.venv/lib/python3.12/site-packages/docker/client.py\", line 94, in from_env\r\n return cls(\r\n ^^^^\r\n File \"/app/.venv/lib/python3.12/site-packages/docker/client.py\", line 45, in __init__\r\n self.api = APIClient(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/app/.venv/lib/python3.12/site-packages/docker/api/client.py\", line 197, in __init__\r\n self._version = self._retrieve_server_version()\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/app/.venv/lib/python3.12/site-packages/docker/api/client.py\", line 220, in _retrieve_server_version\r\n raise DockerException(\r\ndocker.errors.DockerException: Error while fetching server API version: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))\r\n06:56:44 - opendevin:INFO: agent_controller.py:201 - Setting agent state from AgentState.LOADING to AgentState.INIT\r\nStarting loop_recv for sid: 39fa81fa-6b8b-4336-80cf-3754647e8400\r\nINFO: 172.17.0.1:54844 - \"GET /api/refresh-files HTTP/1.1\" 200 OK\r\nINFO: 172.17.0.1:54872 - \"GET /api/litellm-models HTTP/1.1\" 200 OK\r\nINFO: 172.17.0.1:54886 - \"GET /api/messages/total HTTP/1.1\" 200 OK\r\nINFO: 172.17.0.1:54886 - \"GET /api/agents HTTP/1.1\" 200 OK\r\nINFO: 172.17.0.1:54886 - \"DELETE /api/messages HTTP/1.1\" 200 OK\r\n06:57:16 - opendevin:INFO: agent.py:125 - Creating agent CodeActAgent using LLM gpt-4-turbo\r\n06:57:16 - opendevin:INFO: llm.py:78 - Initializing LLM with model: 
gpt-4-turbo\r\n06:57:16 - opendevin:WARNING: stream.py:30 - Subscriber subscribed multiple times: agent_controller\r\n06:57:16 - opendevin:INFO: ssh_box.py:68 - SSHBox is running as opendevin user with USER_ID=197611 in the sandbox\r\n06:57:16 - opendevin:ERROR: ssh_box.py:75 - Error creating controller. Please check Docker is running and visit `https://opendevin.github.io/OpenDevin/modules/usage/troubleshooting` for more debugging information.\r\n06:57:16 - opendevin:ERROR: agent.py:138 - Error creating controller: Error while fetching server API version: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))\r\nTraceback (most recent call last):\r\n File \"/app/.venv/lib/python3.12/site-packages/urllib3/connectionpool.py\", line 793, in urlopen\r\n response = self._make_request(\r\n ^^^^^^^^^^^^^^^^^^^\r\n File \"/app/.venv/lib/python3.12/site-packages/urllib3/connectionpool.py\", line 496, in _make_request\r\n conn.request(\r\n File \"/app/.venv/lib/python3.12/site-packages/urllib3/connection.py\", line 400, in request\r\n self.endheaders()\r\n File \"/usr/local/lib/python3.12/http/client.py\", line 1331, in endheaders\r\n self._send_output(message_body, encode_chunked=encode_chunked)\r\n File \"/usr/local/lib/python3.12/http/client.py\", line 1091, in _send_output\r\n self.send(msg)\r\n File \"/usr/local/lib/python3.12/http/client.py\", line 1035, in send\r\n self.connect()\r\n File \"/app/.venv/lib/python3.12/site-packages/docker/transport/unixconn.py\", line 27, in connect\r\n sock.connect(self.unix_socket)\r\nFileNotFoundError: [Errno 2] No such file or directory\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/app/.venv/lib/python3.12/site-packages/requests/adapters.py\", line 486, in send\r\n resp = conn.urlopen(\r\n ^^^^^^^^^^^^^\r\n File \"/app/.venv/lib/python3.12/site-packages/urllib3/connectionpool.py\", line 847, in urlopen\r\n retries = retries.increment(\r\n ^^^^^^^^^^^^^^^^^^\r\n File \"/app/.venv/lib/python3.12/site-packages/urllib3/util/retry.py\", line 470, in increment\r\n raise reraise(type(error), error, _stacktrace)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/app/.venv/lib/python3.12/site-packages/urllib3/util/util.py\", line 38, in reraise\r\n raise value.with_traceback(tb)\r\n File \"/app/.venv/lib/python3.12/site-packages/urllib3/connectionpool.py\", line 793, in urlopen\r\n response = self._make_request(\r\n ^^^^^^^^^^^^^^^^^^^\r\n File \"/app/.venv/lib/python3.12/site-packages/urllib3/connectionpool.py\", line 496, in _make_request\r\n conn.request(\r\n File \"/app/.venv/lib/python3.12/site-packages/urllib3/connection.py\", line 400, in request\r\n self.endheaders()\r\n File \"/usr/local/lib/python3.12/http/client.py\", line 1331, in endheaders\r\n self._send_output(message_body, encode_chunked=encode_chunked)\r\n File \"/usr/local/lib/python3.12/http/client.py\", line 1091, in _send_output\r\n self.send(msg)\r\n File \"/usr/local/lib/python3.12/http/client.py\", line 1035, in send\r\n self.connect()\r\n File \"/app/.venv/lib/python3.12/site-packages/docker/transport/unixconn.py\", line 27, in connect\r\n sock.connect(self.unix_socket)\r\nurllib3.exceptions.ProtocolError: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/app/.venv/lib/python3.12/site-packages/docker/api/client.py\", line 213, in 
_retrieve_server_version\r\n return self.version(api_version=False)[\"ApiVersion\"]\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/app/.venv/lib/python3.12/site-packages/docker/api/daemon.py\", line 181, in version\r\n return self._result(self._get(url), json=True)\r\n ^^^^^^^^^^^^^^\r\n File \"/app/.venv/lib/python3.12/site-packages/docker/utils/decorators.py\", line 44, in inner\r\n return f(self, *args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/app/.venv/lib/python3.12/site-packages/docker/api/client.py\", line 236, in _get\r\n return self.get(url, **self._set_request_timeout(kwargs))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/app/.venv/lib/python3.12/site-packages/requests/sessions.py\", line 602, in get\r\n return self.request(\"GET\", url, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/app/.venv/lib/python3.12/site-packages/requests/sessions.py\", line 589, in request\r\n resp = self.send(prep, **send_kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/app/.venv/lib/python3.12/site-packages/requests/sessions.py\", line 703, in send\r\n r = adapter.send(request, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/app/.venv/lib/python3.12/site-packages/requests/adapters.py\", line 501, in send\r\n raise ConnectionError(err, request=request)\r\nrequests.exceptions.ConnectionError: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/app/opendevin/server/agent/agent.py\", line 130, in create_controller\r\n self.controller = AgentController(\r\n ^^^^^^^^^^^^^^^^\r\n File \"/app/opendevin/controller/agent_controller.py\", line 82, in __init__\r\n self.action_manager = ActionManager(self.id)\r\n ^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/app/opendevin/controller/action_manager.py\", line 39, in __init__\r\n self.sandbox = DockerSSHBox(\r\n ^^^^^^^^^^^^^\r\n File \"/app/opendevin/runtime/docker/ssh_box.py\", line 79, in __init__\r\n raise ex\r\n File \"/app/opendevin/runtime/docker/ssh_box.py\", line 73, in __init__\r\n self.docker_client = docker.from_env()\r\n ^^^^^^^^^^^^^^^^^\r\n File \"/app/.venv/lib/python3.12/site-packages/docker/client.py\", line 94, in from_env\r\n return cls(\r\n ^^^^\r\n File \"/app/.venv/lib/python3.12/site-packages/docker/client.py\", line 45, in __init__\r\n self.api = APIClient(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/app/.venv/lib/python3.12/site-packages/docker/api/client.py\", line 197, in __init__\r\n self._version = self._retrieve_server_version()\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/app/.venv/lib/python3.12/site-packages/docker/api/client.py\", line 220, in _retrieve_server_version\r\n raise DockerException(\r\ndocker.errors.DockerException: Error while fetching server API version: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))\r\n06:57:16 - opendevin:INFO: agent_controller.py:201 - Setting agent state from AgentState.INIT to AgentState.INIT\n```\n\n\n### Model and Agent\n\n- Model: gpt-4-turbo\r\n- Agent: CodeActAgent\n\n### Reproduction Steps\n\n_No response_\n\n### Logs, Errors, Screenshots, and Additional Context\n\n_No response_", "pr_html_url": "https://github.com/All-Hands-AI/OpenHands/pull/1788", "file_loc": {"base_commit": "123968f887a5eb101b549472805e4b9e4ac7bce0", "files": [{"path": "containers/app/Dockerfile", "status": "modified", "Loc": {"(None, None, None)": {"add": [47]}}}]}, 
"own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": [], "test": [], "config": ["containers/app/Dockerfile"], "asset": []}}, {"organization": "All-Hands-AI", "repo_name": "OpenHands", "base_commit": "32ee6a5a646454a9dc2dae43275313e2d6f77073", "iss_has_pr": 1, "iss_html_url": "https://github.com/All-Hands-AI/OpenHands/issues/6440", "iss_label": "bug", "title": "[Bug]: KeyError: 'ExposedPorts'", "body": "### Is there an existing issue for the same bug?\n\n- [x] I have checked the existing issues.\n\n### Describe the bug and reproduction steps\n\n```\n23:07:30 - openhands:ERROR: session.py:128 - Error creating agent_session: 'ExposedPorts'\nTraceback (most recent call last):\n File \"/workspaces/OpenHands/openhands/server/session/session.py\", line 115, in initialize_agent\n await self.agent_session.start(\n File \"/workspaces/OpenHands/openhands/server/session/agent_session.py\", line 98, in start\n await self._create_runtime(\n File \"/workspaces/OpenHands/openhands/server/session/agent_session.py\", line 212, in _create_runtime\n await self.runtime.connect()\n File \"/workspaces/OpenHands/openhands/runtime/impl/docker/docker_runtime.py\", line 120, in connect\n await call_sync_from_async(self._attach_to_container)\n File \"/workspaces/OpenHands/openhands/utils/async_utils.py\", line 18, in call_sync_from_async\n result = await coro\n ^^^^^^^^^^\n File \"/usr/local/python/3.12.1/lib/python3.12/concurrent/futures/thread.py\", line 58, in run\n result = self.fn(*self.args, **self.kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/workspaces/OpenHands/openhands/utils/async_utils.py\", line 17, in <lambda>\n coro = loop.run_in_executor(None, lambda: fn(*args, **kwargs))\n ^^^^^^^^^^^^^^^^^^^\n File \"/workspaces/OpenHands/openhands/runtime/impl/docker/docker_runtime.py\", line 321, in _attach_to_container\n for exposed_port in config['ExposedPorts'].keys():\n ~~~~~~^^^^^^^^^^^^^^^^\nKeyError: 'ExposedPorts'\n```\n\n### OpenHands Installation\n\nDocker command in README\n\n### OpenHands Version\n\nmain\n\n### Operating System\n\nNone\n\n### Logs, Errors, Screenshots, and Additional Context\n\n_No response_", "pr_html_url": "https://github.com/All-Hands-AI/OpenHands/pull/6460", "file_loc": {"base_commit": "32ee6a5a646454a9dc2dae43275313e2d6f77073", "files": [{"path": "openhands/core/config/sandbox_config.py", "status": "modified", "Loc": {"('SandboxConfig', None, 6)": {"mod": [75]}}}, {"path": "openhands/runtime/impl/docker/docker_runtime.py", "status": "modified", "Loc": {"('DockerRuntime', '_attach_to_container', 318)": {"mod": [330, 331, 332, 333]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["openhands/runtime/impl/docker/docker_runtime.py", "openhands/core/config/sandbox_config.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "All-Hands-AI", "repo_name": "OpenHands", "base_commit": "f9088766e826e208195345a7fcde4920a87df3dd", "iss_has_pr": 1, "iss_html_url": "https://github.com/All-Hands-AI/OpenHands/issues/3527", "iss_label": "bug", "title": "[Bug]: openhands-ai Python package requires agenthub", "body": "### Is there an existing issue for the same bug?\r\n\r\n- [X] I have checked the troubleshooting document at 
https://docs.all-hands.dev/modules/usage/troubleshooting\r\n- [X] I have checked the existing issues.\r\n\r\n### Describe the bug\r\n\r\nWhen attempting to use the openhands-ai package from PyPI, I encounter an issue where `agenthub` cannot be imported. I believe this is because `agenthub` is imported, but it does not exist as part of the package on PyPI.\r\n\r\n### Current OpenHands version\r\n\r\n```bash\r\nopenhands-ai 0.8.3\r\n```\r\n\r\n\r\n### Installation and Configuration\r\nI ran `poetry install openhands-ai`, then installed missing dependencies, then attempted to use it. Specifically, it is failing on the import of `openhands.core.main`.\r\n```bash\r\n\r\n\r\nfrom openhands.controller.state.state import State\r\nfrom openhands.core.config import AppConfig, SandboxConfig\r\nfrom openhands.core.main import run_controller\r\nfrom openhands.runtime import get_runtime_cls\r\n```\r\n\r\n\r\n### Model and Agent\r\n\r\n_No response_\r\n\r\n### Operating System\r\n\r\nMacOS\r\n\r\n### Reproduction Steps\r\n\r\n1. Clone https://github.com/mattbarlow-sg/openhands-test\r\n2. Run `poetry install`\r\n3. Run `poetry shell`\r\n4. Run `openhands-package`\r\n\r\n### Logs, Errors, Screenshots, and Additional Context\r\n\r\n```\r\nERROR:root: File \"/Users/matt.barlow/Library/Caches/pypoetry/virtualenvs/openhands-package-8SiZbAsB-py3.12/bin/openhands-package\", line 3, in <module>\r\n from openhands_package.cli import ai_tools\r\n File \"/Users/matt.barlow/Engineering/openhands-package/openhands_package/__init__.py\", line 1, in <module>\r\n from .cli import main\r\n File \"/Users/matt.barlow/Engineering/openhands-package/openhands_package/cli.py\", line 7, in <module>\r\n from openhands.core.main import run_controller\r\n File \"/Users/matt.barlow/Library/Caches/pypoetry/virtualenvs/openhands-package-8SiZbAsB-py3.12/lib/python3.12/site-packages/openhands/core/main.py\", line 7, in <module>\r\n import agenthub # noqa F401 (we import this to get the agents registered)\r\n ^^^^^^^^^^^^^^^\r\n\r\nERROR:root:<class 'ModuleNotFoundError'>: No module named 'agenthub'\r\n```", "pr_html_url": "https://github.com/All-Hands-AI/OpenHands/pull/3548", "file_loc": {"base_commit": "f9088766e826e208195345a7fcde4920a87df3dd", "files": [{"path": "openhands/runtime/utils/runtime_build.py", "status": "modified", "Loc": {"(None, '_create_project_source_dist', 34)": {"mod": [62]}}}, {"path": "pyproject.toml", "status": "modified", "Loc": {"(None, None, None)": {"add": [9, 45], "mod": [2, 69, 70, 71, 72, 85, 86, 87, 88]}}}, {"path": "tests/unit/test_runtime_build.py", "status": "modified", "Loc": {"(None, '_check_source_code_in_dir', 28)": {"add": [38], "mod": [54]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["openhands/runtime/utils/runtime_build.py"], "doc": [], "test": ["tests/unit/test_runtime_build.py"], "config": ["pyproject.toml"], "asset": []}}, {"organization": "All-Hands-AI", "repo_name": "OpenHands", "base_commit": "356caf0960df558be438f8c3e357e808c0619238", "iss_has_pr": 1, "iss_html_url": "https://github.com/All-Hands-AI/OpenHands/issues/1514", "iss_label": "enhancement\nseverity:low", "title": "Micro-agent: typo checker", "body": "**What problem or use case are you trying to solve?**\r\n\r\nMicro-agents are small agents that specialize in one field. You don't have to write code to define a new micro-agent!
Take a look at existing micro-agents: https://github.com/OpenDevin/OpenDevin/tree/main/agenthub/micro\r\n\r\nWe could add a new micro-agent that scans file(s) at the given path (or maybe the current workspace?) and **just fixes the typos** in-place. Motivation: typos are everywhere. Most project owners welcome PRs that fix typos, but few of them are happy to see their docs and/or docstrings get completely rewritten & polished by LLMs.\r\n\r\n**Do you have thoughts on the technical implementation?**\r\n\r\nWe should think about how we want to prompt the LLM to fix the typos. A naive approach is to let the LLM review each document and return a new document with the typos fixed. This might waste a lot of output tokens. An alternative is to instruct the LLM to return (typo, fix) pairs, and then use `sed` or `awk` to fix them in-place. This might need some experimentation. Both approaches could cause false positives.\r\n\r\n", "pr_html_url": "https://github.com/All-Hands-AI/OpenHands/pull/1613", "file_loc": {"base_commit": "356caf0960df558be438f8c3e357e808c0619238", "files": [{"path": "agenthub/micro/agent.py", "status": "modified", "Loc": {"(None, 'parse_response', 16)": {"mod": [17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["agenthub/micro/agent.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "All-Hands-AI", "repo_name": "OpenHands", "base_commit": "97e938d5450728128ccbf896ecbc5963ac223012", "iss_has_pr": 1, "iss_html_url": "https://github.com/All-Hands-AI/OpenHands/issues/6382", "iss_label": "bug\nawaiting release", "title": "[Bug]: The sandbox container is being recreated when rejoining an existing conversation (all changes are lost)", "body": "### Is there an existing issue for the same bug?\n\n- [x] I have checked the existing issues.\n\n### Expected result\n\n- When joining an existing conversation, OH must start the same container (already created for this conversation), instead of creating a new one from scratch.\n- Each conversation must have its own exclusive container.\n- `keep_runtime_alive = 1` should also be the default IMO.\n- Restarting OH must also keep the sandbox containers.\n- The sandbox containers can be destroyed once the conversation/session is deleted.\n\n### Describe the bug and reproduction steps\n\nWith `keep_runtime_alive = 0`:\nThe sandbox container is being recreated when rejoining an existing conversation and all changes are lost.\n\nWith `keep_runtime_alive = 1`:\nThe container is not destroyed, but the same sandbox container is shared for all conversations, which is also incorrect.\n\n### OpenHands Installation\n\nDocker command in README\n\n### OpenHands Version\n\nmain (2025-01-21)\n\n### Operating System\n\nWSL on Windows\n\n### Test case\n\n- Start a conversation\n- Use `docker ps` to get the container ID\n- Restart OH and resume the conversation\n- Use `docker ps` to get the container ID and confirm it's the same", "pr_html_url": "https://github.com/All-Hands-AI/OpenHands/pull/6402", "file_loc": {"base_commit": "b468150f2abf0f4c8bcf05072f808dd8a086e9c6", "files": [{"path": "openhands/runtime/impl/docker/docker_runtime.py", "status": "modified", "Loc": {"('DockerRuntime', '__init__', 57)": {"mod": [69]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2",
"iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["openhands/runtime/impl/docker/docker_runtime.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "All-Hands-AI", "repo_name": "OpenHands", "base_commit": "5f61885e44cf1841fe9ec82befd38cf45b13869b", "iss_has_pr": 1, "iss_html_url": "https://github.com/All-Hands-AI/OpenHands/issues/2866", "iss_label": "bug", "title": "[Bug]: azure open ai config", "body": "### Is there an existing issue for the same bug?\n\n- [X] I have checked the troubleshooting document at https://opendevin.github.io/OpenDevin/modules/usage/troubleshooting\n- [X] I have checked the existing issues.\n\n### Describe the bug\n\nBased on the documentation I ran the azure open ai config, I managed to open the ui but i got :\r\nAgent encountered an error.\r\n\r\nWould it be possible to give an example of azure openai configuration,\r\nthis one is not so clear : https://docs.all-hands.dev/modules/usage/llms/azureLLMs#azure-openai-configs\r\n\n\n### Current OpenDevin version\n\n```bash\nghcr.io/opendevin/opendevin\n```\n\n\n### Installation and Configuration\n\n```bash\nI ran this command in the terminal:\r\n\r\nWORKSPACE_BASE=$(pwd)/workspace\r\ndocker run -it \\\r\n --pull=always \\\r\n -e SANDBOX_USER_ID=$(id -u) \\\r\n -e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \\\r\n -e LLM_BASE_URL=\"https://xxx.openai.azure.com/\" \\ \r\n -v $WORKSPACE_BASE:/opt/workspace_base \\\r\n -v /var/run/docker.sock:/var/run/docker.sock \\\r\n -p 3000:3000 \\\r\n --add-host host.docker.internal:host-gateway \\\r\n --name opendevin-app-$(date +%Y%m%d%H%M%S) \\\r\n ghcr.io/opendevin/opendevin\n```\n\n\n### Model and Agent\n\n_No response_\n\n### Operating System\n\nWSL\n\n### Reproduction Steps\n\n_No response_\n\n### Logs, Errors, Screenshots, and Additional Context\n\n File \"/app/.venv/lib/python3.12/site-packages/litellm/utils.py\", line 7496, in exception_type\r\n raise APIConnectionError(\r\nlitellm.exceptions.APIConnectionError: litellm.APIConnectionError: AzureException APIConnectionError - 'NoneType' object has no attribute 'split'", "pr_html_url": "https://github.com/All-Hands-AI/OpenHands/pull/2894", "file_loc": {"base_commit": "5f61885e44cf1841fe9ec82befd38cf45b13869b", "files": [{"path": "docs/modules/usage/llms/azureLLMs.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [17], "mod": [15, 35, 36]}}}, {"path": "docs/modules/usage/llms/localLLMs.md", "status": "modified", "Loc": {"(None, None, None)": {"mod": [43]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": ["docs/modules/usage/llms/localLLMs.md", "docs/modules/usage/llms/azureLLMs.md"], "test": [], "config": [], "asset": []}}, {"organization": "All-Hands-AI", "repo_name": "OpenHands", "base_commit": "3661893161826c2a36bacdb3b08d12c805134bee", "iss_has_pr": 1, "iss_html_url": "https://github.com/All-Hands-AI/OpenHands/issues/4142", "iss_label": "documentation\nenhancement\nfix-me", "title": "Documentation: Create a \"Usage Methods -> GUI Mode\" page", "body": "**What problem or use case are you trying to solve?**\r\n\r\nCurrently we have pages about different usage methods, CLI and headless, and soon to by github actions (#4113).\r\n\r\nHowever, we don't have a page describing GUI mode, other than the Getting Started page. 
We can start out by copying the information from the \"Getting Started\" page and then add more information about how to interact with the GUI.", "pr_html_url": "https://github.com/All-Hands-AI/OpenHands/pull/4156", "file_loc": {"base_commit": "3661893161826c2a36bacdb3b08d12c805134bee", "files": [{"path": "docs/sidebars.ts", "status": "modified", "Loc": {"(None, None, None)": {"add": [24]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["docs/sidebars.ts"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "All-Hands-AI", "repo_name": "OpenHands", "base_commit": "bfa1de4a6b18d3b8493b94f6e54e360012957fdc", "iss_has_pr": 1, "iss_html_url": "https://github.com/All-Hands-AI/OpenHands/issues/2714", "iss_label": "bug\ngood first issue", "title": "[Bug]: The long filename will stretch the workspace panel", "body": "### Is there an existing issue for the same bug?\n\n- [X] I have checked the troubleshooting document at https://opendevin.github.io/OpenDevin/modules/usage/troubleshooting\n- [X] I have checked the existing issues.\n\n### Describe the bug\n\nThe issue manifests as follows:\r\n<img width=\"1513\" alt=\"image\" src=\"https://github.com/OpenDevin/OpenDevin/assets/16201837/3468e1cc-352a-4483-a883-d6a37a11157e\">\r\n\r\nWe can limit the maximum display length, or allow the panel to freely adjust its width and scroll along the x and y axes.\n\n### Current OpenDevin version\n\n```bash\n0.7.0\n```\n\n\n### Installation and Configuration\n\n```bash\nDefault configuration.\n```\n\n\n### Model and Agent\n\n_No response_\n\n### Operating System\n\n_No response_\n\n### Reproduction Steps\n\n_No response_\n\n### Logs, Errors, Screenshots, and Additional Context\n\n_No response_", "pr_html_url": "https://github.com/All-Hands-AI/OpenHands/pull/2731", "file_loc": {"base_commit": "bfa1de4a6b18d3b8493b94f6e54e360012957fdc", "files": [{"path": "frontend/src/components/file-explorer/FileExplorer.tsx", "status": "modified", "Loc": {"(None, None, None)": {"mod": [219, 222]}}}, {"path": "frontend/src/components/file-explorer/TreeNode.tsx", "status": "modified", "Loc": {"(None, None, None)": {"mod": [23, 24, 25]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["frontend/src/components/file-explorer/FileExplorer.tsx", "frontend/src/components/file-explorer/TreeNode.tsx"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "All-Hands-AI", "repo_name": "OpenHands", "base_commit": "93d2e4a338adcaa8acaa602adad14364abca821f", "iss_has_pr": 1, "iss_html_url": "https://github.com/All-Hands-AI/OpenHands/issues/3903", "iss_label": "bug", "title": "[Bug]: LocalBox has been removed from 0.9.0", "body": "### Is there an existing issue for the same bug?\r\n\r\n- [X] I have checked the troubleshooting document at https://docs.all-hands.dev/modules/usage/troubleshooting\r\n- [X] I have checked the existing issues.\r\n\r\n### Describe the bug\r\n\r\nHey team,\r\n\r\nWe built our setup based on the local sandbox in OpenShift with restricted permissions. We did it after this discussion https://github.com/All-Hands-AI/OpenHands/discussions/2675 \r\n\r\nBut we found there is no local sandbox in v.
0.9.0+ and it breaks our setup :(\r\n\r\nIs there a replacement for it or would it be possible to revert these changes?\r\n\r\nMany thanks!\r\n\r\n### Current OpenHands version\r\n\r\n```bash\r\n0.9.0+\r\n```\r\n\r\n\r\n### Installation and Configuration\r\n\r\nWe've written our own Dockerfile based on yours:\r\n\r\n```bash\r\n\r\n\r\nFROM ghcr.io/opendevin/opendevin:0.7\r\nRUN chmod 777 -R /app\r\nENTRYPOINT []\r\nUSER root\r\n\r\n# install basic packages\r\nRUN apt-get update && apt-get install -y \\\r\n curl \\\r\n wget \\\r\n git \\\r\n vim \\\r\n nano \\\r\n unzip \\\r\n zip \\\r\n python3 \\\r\n python3-pip \\\r\n python3-venv \\\r\n python3-dev \\\r\n build-essential \\\r\n openssh-server \\\r\n sudo \\\r\n gcc \\\r\n jq \\\r\n g++ \\\r\n make \\\r\n iproute2 \\\r\n && rm -rf /var/lib/apt/lists/*\r\n\r\nRUN mkdir -p -m0755 /var/run/sshd\r\n\r\n# symlink python3 to python\r\nRUN ln -s /usr/bin/python3 /usr/bin/python\r\n\r\n# ==== OpenDevin Runtime Client ====\r\nRUN mkdir -p /opendevin && mkdir -p /opendevin/logs && chmod 777 /opendevin/logs\r\nRUN wget \"https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-$(uname)-$(uname -m).sh\"\r\nRUN bash Miniforge3-$(uname)-$(uname -m).sh -b -p /opendevin/miniforge3\r\nRUN chmod -R g+w /opendevin/miniforge3\r\nRUN bash -c \". /opendevin/miniforge3/etc/profile.d/conda.sh && conda config --set changeps1 False && conda config --append channels conda-forge\"\r\nRUN echo \"\" > /opendevin/bash.bashrc\r\n\r\n# - agentskills dependencies\r\nRUN /opendevin/miniforge3/bin/pip install --upgrade pip\r\nRUN /opendevin/miniforge3/bin/pip install jupyterlab notebook jupyter_kernel_gateway flake8\r\nRUN /opendevin/miniforge3/bin/pip install python-docx PyPDF2 python-pptx pylatexenc openai\r\nRUN chmod 777 -R /opendevin\r\nRUN mkdir -p /opt/workspace_base && chmod -R 777 /opt/workspace_base\r\nRUN sed \"s/config.sandbox_type/\\'local\\'/g\" -i /app/opendevin/runtime/runtime.py && sed '24,27{/.*/d}' -i /app/opendevin/runtime/plugins/mixin.py && mkdir /opendevin/plugins/ && cp -av /app/opendevin/runtime/plugins/jupyter /opendevin/plugins/ && cp -av /app/opendevin/runtime/plugins/agent_skills /opendevin/plugins/\r\nRUN export PATH=/opendevin/miniforge3/bin:/app/.venv/bin:/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\r\nRUN echo $PATH\r\nRUN cd /app && playwright install\r\nCMD [\"uvicorn\", \"opendevin.server.listen:app\", \"--host\", \"0.0.0.0\", \"--port\", \"3000\"]\r\n\r\n\r\n```\r\n\r\nWe combined opendevin and sandbox into the same container, changed paths and permissions.\r\n\r\nThis image works without root/docker etc., so we were able to start it under the restrictedv2 OpenShift SCC\r\n```\r\n\r\n\r\n### Model and Agent\r\n\r\n_No response_\r\n\r\n### Operating System\r\n\r\n_No response_\r\n\r\n### Reproduction Steps\r\n\r\n_No response_\r\n\r\n### Logs, Errors, Screenshots, and Additional Context\r\n\r\n_No response_", "pr_html_url": "https://github.com/All-Hands-AI/OpenHands/pull/5284", "file_loc": {"base_commit": "93d2e4a338adcaa8acaa602adad14364abca821f", "files": [{"path": ".github/workflows/ghcr-build.yml", "status": "modified", "Loc": {"(None, None, None)": {"mod": [236]}}}, {"path": "openhands/runtime/README.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [111, 114]}}}, {"path": "openhands/runtime/__init__.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [5]}, "(None, 'get_runtime_cls', 11)": {"add": [23]}}}, {"path": "openhands/runtime/action_execution_server.py",
"status": "modified", "Loc": {"('ActionExecutor', None, 83)": {"add": [165]}, "(None, None, None)": {"mod": [69, 70, 71]}}}, {"path": "openhands/runtime/plugins/jupyter/__init__.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0]}, "('JupyterPlugin', 'initialize', 22)": {"mod": [25, 26, 27, 33, 34, 35, 36, 37]}}}, {"path": "openhands/runtime/plugins/vscode/__init__.py", "status": "modified", "Loc": {"('VSCodePlugin', None, 18)": {"add": [21]}}}, {"path": "openhands/runtime/utils/bash.py", "status": "modified", "Loc": {"('BashSession', 'initialize', 184)": {"add": [189], "mod": [187]}}}, {"path": "openhands/runtime/utils/command.py", "status": "modified", "Loc": {"(None, 'get_action_execution_server_startup_command', 14)": {"add": [19], "mod": [35, 48, 50]}}}, {"path": "openhands/runtime/utils/runtime_init.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0]}, "(None, 'init_user_and_working_directory', 6)": {"add": [33]}}}, {"path": "poetry.lock", "status": "modified", "Loc": {"(None, None, None)": {"add": [3372, 3869, 7776, 10128], "mod": [1, 231, 338, 1144, 1170, 1373, 1384, 1492, 1528, 1779, 3308, 3342, 3406, 3638, 3661, 4805, 5389, 6244, 6376, 6508, 6654, 6697, 7661, 8818, 9316, 9973, 10558]}}}, {"path": "pyproject.toml", "status": "modified", "Loc": {"(None, None, None)": {"add": [71]}}}, {"path": "tests/runtime/conftest.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [15], "mod": [6, 11, 279]}, "(None, 'get_runtime_classes', 130)": {"add": [133]}, "(None, '_get_sandbox_folder', 41)": {"mod": [41, 42, 43, 44, 45]}, "(None, '_load_runtime', 208)": {"mod": [219, 272]}}}, {"path": "tests/runtime/test_bash.py", "status": "modified", "Loc": {"(None, 'test_cmd_run', 211)": {"add": [227], "mod": [212, 214]}, "(None, 'test_git_operation', 444)": {"add": [466, 484], "mod": [444, 448, 449, 456, 458, 459]}, "(None, None, None)": {"mod": [10, 17]}, "(None, 'test_bash_command_env', 33)": {"mod": [34]}, "(None, 'test_bash_server', 45)": {"mod": [46, 67, 76]}, "(None, 'test_multiline_commands', 91)": {"mod": [92]}, "(None, 'test_multiple_multiline_commands', 112)": {"mod": [126]}, "(None, 'test_complex_commands', 157)": {"mod": [160]}, "(None, 'test_no_ps2_in_output', 171)": {"mod": [173]}, "(None, 'test_multiline_command_loop', 184)": {"mod": [198]}, "(None, 'test_run_as_user_correct_home_dir', 248)": {"mod": [249, 253]}, "(None, 'test_multi_cmd_run_in_single_line', 261)": {"mod": [262, 266]}, "(None, 'test_stateful_cmd', 272)": {"mod": [273, 283]}, "(None, 'test_failed_cmd', 288)": {"mod": [289]}, "(None, 'test_copy_single_file', 303)": {"mod": [304, 306]}, "(None, 'test_copy_directory_recursively', 333)": {"mod": [334, 336]}, "(None, 'test_copy_to_non_existent_directory', 362)": {"mod": [363, 365]}, "(None, 'test_overwrite_existing_file', 378)": {"mod": [379, 381]}, "(None, 'test_copy_non_existent_file', 406)": {"mod": [407, 409]}, "(None, 'test_copy_from_directory', 422)": {"mod": [423, 424]}, "(None, 'test_python_version', 502)": {"mod": [503]}, "(None, 'test_pwd_property', 516)": {"mod": [517]}, "(None, 'test_basic_command', 530)": {"mod": [531]}, "(None, 'test_interactive_command', 558)": {"mod": [559]}, "(None, 'test_long_output', 594)": {"mod": [595]}, "(None, 'test_long_output_exceed_history_limit', 608)": {"mod": [609]}, "(None, 'test_long_output_from_nested_directories', 624)": {"mod": [625]}, "(None, 'test_command_backslash', 649)": {"mod": [650]}, "(None, 'test_command_output_continuation', 676)": {"mod": [677]}, "(None, 
'test_long_running_command_follow_by_execute', 714)": {"mod": [717]}, "(None, 'test_empty_command_errors', 757)": {"mod": [758]}, "(None, 'test_python_interactive_input', 770)": {"mod": [771]}, "(None, 'test_python_interactive_input_without_set_input', 798)": {"mod": [801]}, "(None, 'test_stress_long_output_with_soft_and_hard_timeout', 837)": {"mod": [840]}, "(None, 'test_bash_remove_prefix', 927)": {"mod": [928]}}}, {"path": "tests/runtime/test_browsergym_envs.py", "status": "modified", "Loc": {"(None, 'test_browsergym_eval_env', 31)": {"mod": [32]}}}, {"path": "tests/runtime/test_browsing.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [20]}, "(None, 'test_simple_browse', 23)": {"mod": [24, 27, 28, 29]}}}, {"path": "tests/runtime/test_edit.py", "status": "modified", "Loc": {"(None, 'test_edit_from_scratch', 30)": {"mod": [31]}, "(None, 'test_edit', 70)": {"mod": [71]}, "(None, 'test_edit_long_file', 129)": {"mod": [130]}}}, {"path": "tests/runtime/test_env_vars.py", "status": "modified", "Loc": {"(None, 'test_env_vars_os_environ', 16)": {"mod": [18]}, "(None, 'test_env_vars_runtime_operations', 35)": {"mod": [36]}, "(None, 'test_env_vars_added_by_config', 70)": {"mod": [71]}, "(None, 'test_docker_runtime_env_vars_persist_after_restart', 86)": {"mod": [89]}}}, {"path": "tests/runtime/test_images.py", "status": "modified", "Loc": {"(None, 'test_bash_python_version', 14)": {"mod": [21]}, "(None, 'test_nodejs_22_version', 48)": {"mod": [55]}, "(None, 'test_go_version', 69)": {"mod": [76]}}}, {"path": "tests/runtime/test_ipython.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1]}, "(None, 'test_simple_cmd_ipython_and_fileop', 32)": {"mod": [33]}, "(None, 'test_ipython_multi_user', 104)": {"mod": [105]}, "(None, 'test_ipython_simple', 176)": {"mod": [177]}, "(None, 'test_ipython_package_install', 199)": {"mod": [201]}, "(None, 'test_ipython_file_editor_permissions_as_openhands', 234)": {"mod": [236]}, "(None, 'test_file_read_and_edit_via_oh_aci', 315)": {"mod": [316]}}}, {"path": "tests/runtime/test_microagent.py", "status": "modified", "Loc": {"(None, 'test_load_microagents_with_trailing_slashes', 75)": {"mod": [81]}, "(None, 'test_load_microagents_with_selected_repo', 115)": {"mod": [122]}, "(None, 'test_load_microagents_with_missing_files', 158)": {"mod": [177]}}}, {"path": "tests/runtime/test_replay.py", "status": "modified", "Loc": {"(None, 'test_simple_replay', 29)": {"mod": [34, 36]}, "(None, 'test_simple_gui_replay', 51)": {"mod": [62]}, "(None, 'test_replay_wrong_initial_state', 81)": {"mod": [90, 92]}, "(None, 'test_replay_basic_interactions', 115)": {"mod": [123]}}}, {"path": "tests/runtime/test_stress_docker_runtime.py", "status": "modified", "Loc": {"(None, 'test_stress_docker_runtime', 9)": {"mod": [10]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["openhands/runtime/__init__.py", "openhands/runtime/plugins/jupyter/__init__.py", "openhands/runtime/plugins/vscode/__init__.py", "openhands/runtime/utils/bash.py", "openhands/runtime/utils/runtime_init.py", "tests/runtime/conftest.py", "openhands/runtime/action_execution_server.py", "openhands/runtime/utils/command.py"], "doc": ["openhands/runtime/README.md"], "test": ["tests/runtime/test_stress_docker_runtime.py", "tests/runtime/test_browsing.py", "tests/runtime/test_replay.py", "tests/runtime/test_microagent.py", 
"tests/runtime/test_bash.py", "tests/runtime/test_browsergym_envs.py", "tests/runtime/test_edit.py", "tests/runtime/test_images.py", "tests/runtime/test_ipython.py", "tests/runtime/test_env_vars.py"], "config": ["pyproject.toml", ".github/workflows/ghcr-build.yml", "poetry.lock"], "asset": []}}, {"organization": "All-Hands-AI", "repo_name": "OpenHands", "base_commit": "0403b460f10207075b7472f5127bfdd4ab1a66f8", "iss_has_pr": 1, "iss_html_url": "https://github.com/All-Hands-AI/OpenHands/issues/272", "iss_label": "enhancement", "title": "Add latest tag for docker image", "body": "**What problem or use case are you trying to solve?**\r\nProposed [here](https://github.com/OpenDevin/OpenDevin/pull/263#issuecomment-2023918115). Better to add `latest` tag for image. Then user do not need to pull image at specific version. We also do not need to always change the tags in [code](https://github.com/OpenDevin/OpenDevin/blob/a9102382f6a56765eea34fdac0f04ca0f2305651/opendevin/sandbox/sandbox.py#L17).\r\n\r\n**Describe the UX of the solution you'd like**\r\n\r\n**Do you have thoughts on the technical implementation?**\r\nNeed the pipeline builder to set it.\r\n\r\n**Describe alternatives you've considered**\r\n\r\n**Additional context**\r\n", "pr_html_url": "https://github.com/All-Hands-AI/OpenHands/pull/290", "file_loc": {"base_commit": "2def49e79409108eacb4e797f7fdc2422cc5bd19", "files": [{"path": "README.md", "status": "modified", "Loc": {"(None, None, 32)": {"mod": [32]}}}, {"path": "evaluation/SWE-bench/scripts/run_docker_interactive.sh", "status": "modified", "Loc": {"(None, None, 3)": {"mod": [3]}}}, {"path": "opendevin/README.md", "status": "modified", "Loc": {"(None, None, 30)": {"mod": [30]}}}, {"path": "opendevin/sandbox/sandbox.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [21]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["opendevin/sandbox/sandbox.py"], "doc": ["README.md", "opendevin/README.md", "evaluation/SWE-bench/scripts/run_docker_interactive.sh"], "test": [], "config": [], "asset": []}}, {"organization": "All-Hands-AI", "repo_name": "OpenHands", "base_commit": "221a4e83f1e438950591d183b0a6e7c5e15de6be", "iss_has_pr": 1, "iss_html_url": "https://github.com/All-Hands-AI/OpenHands/issues/2308", "iss_label": "enhancement", "title": "[Feature]: Confirmation Mode for Agent", "body": "**What problem or use case are you trying to solve?**\r\n\r\nContext: https://opendevin.slack.com/archives/C06P5NCGSFP/p1717733829670139\r\n\r\nIf the agent is NOT operating inside a sandbox or if a user care a lot about not letting the agent mess around with their environment, we should better let user confirm action (command/code) before the agent execute them.\r\n\r\n**Describe the UX of the solution you'd like**\r\n\r\nAdd a confirmation mode for OpenDevin: A checkbox on the frontend; when enabled (checked), the frontend will prompt for the user's approval for **every** action the agent wants to execute.\r\n\r\n**Do you have thoughts on the technical implementation?**\r\n\r\nWhen confirmation mode is on, we probably need to add a check in the agent controller for every \"executable\" action -- the action can only be sent off for execution when it receives user confirmation from the front end.\r\n\r\n**Describe alternatives you've considered**\r\n\r\n**Additional context**\r\n", "pr_html_url": 
"https://github.com/All-Hands-AI/OpenHands/pull/2774", "file_loc": {"base_commit": "456690818c94a266935888f1e56e0afa2c4d5219", "files": [{"path": "frontend/package-lock.json", "status": "modified", "Loc": {"(None, None, 20)": {"mod": [20]}, "(None, None, 8256)": {"mod": [8256, 8257, 8258]}}}, {"path": "frontend/package.json", "status": "modified", "Loc": {"(None, None, 19)": {"mod": [19]}}}, {"path": "frontend/src/components/AgentControlBar.tsx", "status": "modified", "Loc": {"(None, None, 19)": {"add": [19]}, "(None, None, 27)": {"add": [27]}, "(None, None, 29)": {"add": [29]}, "(None, None, 104)": {"mod": [104, 105]}, "(None, None, 107)": {"mod": [107, 108, 109, 110, 111, 112]}, "(None, None, 114)": {"mod": [114]}, "(None, None, 116)": {"mod": [116]}, "(None, None, 118)": {"mod": [118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139]}}}, {"path": "frontend/src/components/AgentStatusBar.tsx", "status": "modified", "Loc": {"(None, None, 60)": {"add": [60]}}}, {"path": "frontend/src/components/chat/Chat.test.tsx", "status": "modified", "Loc": {"(None, None, 3)": {"add": [3]}, "(None, None, 2)": {"mod": [2]}, "(None, None, 16)": {"mod": [16]}}}, {"path": "frontend/src/components/chat/Chat.tsx", "status": "modified", "Loc": {"(None, None, 2)": {"add": [2]}, "(None, None, 5)": {"add": [5]}, "(None, None, 8)": {"mod": [8]}, "(None, None, 12)": {"mod": [12]}}}, {"path": "frontend/src/components/chat/ChatInterface.tsx", "status": "modified", "Loc": {"(None, None, 126)": {"mod": [126]}, "(None, None, 172)": {"mod": [172]}}}, {"path": "frontend/src/components/chat/ChatMessage.test.tsx", "status": "modified", "Loc": {"(None, None, 32)": {"add": [32]}, "(None, None, 13)": {"mod": [13]}, "(None, None, 20)": {"mod": [20]}}}, {"path": "frontend/src/components/chat/ChatMessage.tsx", "status": "modified", "Loc": {"(None, None, 5)": {"add": [5]}, "(None, None, 8)": {"add": [8]}, "(None, None, 11)": {"add": [11]}, "(None, None, 60)": {"add": [60]}, "(None, None, 14)": {"mod": [14]}}}, {"path": "frontend/src/components/modals/settings/SettingsForm.test.tsx", "status": "modified", "Loc": {"(None, None, 11)": {"add": [11]}, "(None, None, 22)": {"add": [22]}, "(None, None, 30)": {"add": [30]}, "(None, None, 42)": {"add": [42]}, "(None, None, 47)": {"add": [47]}, "(None, None, 55)": {"add": [55]}, "(None, None, 74)": {"add": [74]}, "(None, None, 82)": {"add": [82]}, "(None, None, 87)": {"add": [87]}, "(None, None, 91)": {"add": [91]}}}, {"path": "frontend/src/components/modals/settings/SettingsForm.tsx", "status": "modified", "Loc": {"(None, None, 19)": {"add": [19]}, "(None, None, 30)": {"add": [30]}, "(None, None, 88)": {"add": [88]}, "(None, None, 1)": {"mod": [1]}}}, {"path": "frontend/src/components/modals/settings/SettingsModal.test.tsx", "status": "modified", "Loc": {"(None, None, 29)": {"add": [29]}, "(None, None, 35)": {"add": [35]}, "(None, None, 109)": {"add": [109]}, "(None, None, 199)": {"mod": [199]}}}, {"path": "frontend/src/components/modals/settings/SettingsModal.tsx", "status": "modified", "Loc": {"(None, None, 91)": {"add": [91]}, "(None, None, 172)": {"add": [172]}, "(None, None, 51)": {"mod": [51]}}}, {"path": "frontend/src/i18n/translation.json", "status": "modified", "Loc": {"(None, None, 569)": {"add": [569]}, "(None, None, 588)": {"add": [588]}, "(None, None, 683)": {"add": [683]}}}, {"path": "frontend/src/services/actions.ts", "status": "modified", "Loc": {"(None, None, None)": {"add": [69], "mod": [49, 55]}}}, {"path": 
"frontend/src/services/session.test.ts", "status": "modified", "Loc": {"(None, None, None)": {"add": [22]}}}, {"path": "frontend/src/services/settings.test.ts", "status": "modified", "Loc": {"(None, None, None)": {"add": [31, 48, 59], "mod": [23]}}}, {"path": "frontend/src/services/settings.ts", "status": "modified", "Loc": {"(None, None, None)": {"add": [7, 9, 14, 53, 59, 93], "mod": [72, 95, 96, 98]}}}, {"path": "frontend/src/types/AgentState.tsx", "status": "modified", "Loc": {"(None, None, 10)": {"add": [10]}}}, {"path": "opendevin/controller/agent_controller.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [18, 23]}, "('AgentController', None, 46)": {"add": [51]}, "('AgentController', '__init__', 57)": {"add": [62, 94]}, "('AgentController', 'on_event', 151)": {"add": [172, 174]}, "('AgentController', 'set_agent_state_to', 188)": {"add": [207]}, "('AgentController', '_step', 247)": {"add": [348, 351]}, "('AgentController', 'set_initial_state', 364)": {"mod": [365, 370]}}}, {"path": "opendevin/controller/state/state.py", "status": "modified", "Loc": {"('State', None, 38)": {"add": [41]}}}, {"path": "opendevin/core/config.py", "status": "modified", "Loc": {"('AppConfig', None, 178)": {"add": [225]}}}, {"path": "opendevin/core/schema/agent.py", "status": "modified", "Loc": {"('AgentState', None, 4)": {"add": [39]}}}, {"path": "opendevin/core/schema/config.py", "status": "modified", "Loc": {"('ConfigType', None, 4)": {"add": [22]}}}, {"path": "opendevin/core/schema/observation.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [46]}}}, {"path": "opendevin/events/action/__init__.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [34], "mod": [1]}}}, {"path": "opendevin/events/action/action.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [1, 6]}}}, {"path": "opendevin/events/action/commands.py", "status": "modified", "Loc": {"('CmdRunAction', None, 10)": {"add": [14]}, "('IPythonRunCellAction', None, 29)": {"add": [33]}, "(None, None, None)": {"mod": [6]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["frontend/src/components/modals/settings/SettingsForm.tsx", "frontend/package-lock.json", "frontend/src/components/chat/Chat.test.tsx", "opendevin/core/config.py", "frontend/package.json", "frontend/src/components/AgentControlBar.tsx", "opendevin/controller/state/state.py", "frontend/src/components/modals/settings/SettingsModal.test.tsx", "opendevin/events/action/action.py", "opendevin/events/action/commands.py", "frontend/src/i18n/translation.json", "frontend/src/components/chat/ChatMessage.test.tsx", "frontend/src/components/chat/ChatMessage.tsx", "frontend/src/services/settings.ts", "frontend/src/services/settings.test.ts", "frontend/src/components/chat/Chat.tsx", "frontend/src/components/chat/ChatInterface.tsx", "frontend/src/services/session.test.ts", "frontend/src/components/AgentStatusBar.tsx", "opendevin/events/action/__init__.py", "frontend/src/components/modals/settings/SettingsModal.tsx", "opendevin/core/schema/config.py", "opendevin/core/schema/agent.py", "frontend/src/types/AgentState.tsx", "frontend/src/components/modals/settings/SettingsForm.test.tsx", "frontend/src/services/actions.ts", "opendevin/core/schema/observation.py", "opendevin/controller/agent_controller.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "scrapy", 
"repo_name": "scrapy", "base_commit": "c8f3d07e86dd41074971b5423fb932c2eda6db1e", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/3370", "iss_label": "", "title": "AttributeError from contract errback", "body": "When running a contract with a URL that returns non-200 response, I get the following:\r\n```\r\n2018-08-09 14:40:23 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.bureauxlocaux.com/annonce/a-louer-bureaux-a-louer-a-nantes--1289-358662> (referer: None)\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.6/site-packages/twisted/internet/defer.py\", line 653, in _runCallbacks\r\n current.result = callback(current.result, *args, **kw)\r\n File \"/usr/local/lib/python3.6/site-packages/scrapy/contracts/__init__.py\", line 89, in eb_wrapper\r\n results.addError(case, exc_info)\r\n File \"/usr/local/lib/python3.6/unittest/runner.py\", line 67, in addError\r\n super(TextTestResult, self).addError(test, err)\r\n File \"/usr/local/lib/python3.6/unittest/result.py\", line 17, in inner\r\n return method(self, *args, **kw)\r\n File \"/usr/local/lib/python3.6/unittest/result.py\", line 115, in addError\r\n self.errors.append((test, self._exc_info_to_string(err, test)))\r\n File \"/usr/local/lib/python3.6/unittest/result.py\", line 186, in _exc_info_to_string\r\n exctype, value, tb, limit=length, capture_locals=self.tb_locals)\r\n File \"/usr/local/lib/python3.6/traceback.py\", line 470, in __init__\r\n exc_value.__cause__.__traceback__,\r\nAttributeError: 'getset_descriptor' object has no attribute '__traceback__'\r\n```\r\n\r\nHere is how `exc_info` looks like:\r\n```\r\n(HttpError('Ignoring non-200 response',), <class 'scrapy.spidermiddlewares.httperror.HttpError'>, <traceback object at 0x7f4bdca1d948>)\r\n```\r\n", "pr_html_url": "https://github.com/scrapy/scrapy/pull/3371", "file_loc": {"base_commit": "c8f3d07e86dd41074971b5423fb932c2eda6db1e", "files": [{"path": "scrapy/contracts/__init__.py", "status": "modified", "Loc": {"('ContractsManager', 'eb_wrapper', 85)": {"mod": [87]}}}, {"path": "tests/test_contracts.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [2, 4]}, "('ContractsManagerTest', 'test_scrapes', 163)": {"add": [187]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["scrapy/contracts/__init__.py"], "doc": [], "test": ["tests/test_contracts.py"], "config": [], "asset": []}}, {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "b337c986ca1188f4b26d30c9ae4bb7ff457ed505", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/5811", "iss_label": "bug\ngood first issue", "title": "`BaseSettings.setdefault` does nothing", "body": "### Description\r\n\r\nCalling `setdefault` method of class `BaseSettings` does nothing.\r\n\r\n### Steps to Reproduce\r\n\r\n ```python\r\n from scrapy.settings import BaseSettings\r\n settings = BaseSettings()\r\n stored = settings.setdefault('key', 'value')\r\n print(stored) # prints None\r\n print(settings.copy_to_dict()) # prints empty dictionary\r\n ```\r\n\r\n**Expected behavior:**\r\n`settings.setdefault(key, default)` must work as described in `MutableMapping` interface: set `default` to `settings[key]` and return `default` if `key` is not present, otherwise return `settings[key]`.\r\n\r\n**Actual behavior:**\r\n`settings.setdefault(key, default)` does nothing 
regardless of whether it holds `key` or not.\r\n\r\n**Reproduces how often:** 100%\r\n\r\n### Versions\r\n\r\nScrapy : 2.7.1\r\nlxml : 4.8.0.0\r\nlibxml2 : 2.9.12\r\ncssselect : 1.1.0\r\nparsel : 1.6.0\r\nw3lib : 1.22.0\r\nTwisted : 22.4.0\r\nPython : 3.10.7 (tags/v3.10.7:6cc6b13, Sep 5 2022, 14:08:36) [MSC v.1933 64 bit (AMD64)]\r\npyOpenSSL : 22.0.0 (OpenSSL 3.0.3 3 May 2022)\r\ncryptography : 37.0.2\r\nPlatform : Windows-10-10.0.19044-SP0\r\n\r\n\r\n### Additional context\r\n\r\n`BaseSettings` explicitly inherits from `MutableMapping` and does not redefine the `setdefault` method. Thus, it uses the base implementation:\r\n```python\r\ndef setdefault(self, key, default=None):\r\n 'D.setdefault(k[,d]) -> D.get(k,d), also set D[k]=d if k not in D'\r\n try:\r\n return self[key]\r\n except KeyError:\r\n self[key] = default\r\n return default\r\n```\r\nThe base implementation refers to `self[key]`, which is in fact `self.__getitem__(key)`. `BaseSettings` has its own `__getitem__` implementation:\r\n```python\r\ndef __getitem__(self, opt_name):\r\n if opt_name not in self:\r\n return None\r\n return self.attributes[opt_name].value\r\n```\r\nAnd here is the root of the problem: when the passed `key` is not present, `__getitem__` returns `None`, and `setdefault` follows suit.\r\n\r\n**Solution**\r\nImplement a custom `setdefault` method. An example with a matching signature:\r\n```python\r\ndef setdefault(self, opt_name, default=None):\r\n if opt_name not in self:\r\n self.set(opt_name, default)\r\n return default\r\n return self.attributes[opt_name].value\r\n```\r\nA `priority='project'` argument can be added, although this changes the signature.\r\n\r\nAnother way is to inherit from `Mapping` instead of `MutableMapping` if this method and the other base methods are redundant.\r\n\r\n**Current workaround**\r\nConvert the `BaseSettings` object to a dictionary and only then use `setdefault`.", "pr_html_url": "https://github.com/scrapy/scrapy/pull/5821", "file_loc": {"base_commit": "b337c986ca1188f4b26d30c9ae4bb7ff457ed505", "files": [{"path": "scrapy/settings/__init__.py", "status": "modified", "Loc": {"('BaseSettings', None, 56)": {"add": [295]}}}, {"path": "tests/test_settings/__init__.py", "status": "modified", "Loc": {"('BaseSettingsTest', None, 64)": {"add": [67]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["scrapy/settings/__init__.py", "tests/test_settings/__init__.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "794ab19660d369f273abdd5b93721c209f6e4eab", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/4528", "iss_label": "enhancement", "title": "Fail or warn if from_crawler() returns None", "body": "## Summary\r\n\r\nGenerate a warning or error if from_crawler() for a middleware/extension/etc. returns None\r\n\r\n## Motivation\r\n\r\nI created a custom extension and connected signals in the from_crawler() classmethod, but neglected to return the new extension instance.
Scrapy still reported the extension under \"Enabled extensions\", but none of the signals worked, since the instance was immediately garbage collected and its signals were silently disconnected.\r\n\r\nThis was of course an error on my part, but it would have saved me a lot of debugging if I had gotten a warning that from_crawler() was returning None, or if the extension were removed from the \"Enabled extensions\" list.\r\n\r\nWould it be appropriate for utils.misc.create_instance() to raise an error or generate a warning if it's about to return None? Or should MiddlewareManager treat create_instance() returning None the same as create_instance() raising NotConfigured?", "pr_html_url": "https://github.com/scrapy/scrapy/pull/4532", "file_loc": {"base_commit": "794ab19660d369f273abdd5b93721c209f6e4eab", "files": [{"path": "scrapy/utils/misc.py", "status": "modified", "Loc": {"(None, 'create_instance', 128)": {"add": [139], "mod": [146, 148, 150]}}}, {"path": "tests/test_utils_misc/__init__.py", "status": "modified", "Loc": {"('UtilsMiscTestCase', None, 13)": {"add": [133]}, "('UtilsMiscTestCase', 'test_create_instance', 80)": {"mod": [117, 118, 126]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["scrapy/utils/misc.py", "tests/test_utils_misc/__init__.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "c9d7386a32aeb4bc7fe9654d194651eee1ede56c", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/908", "iss_label": "", "title": "Current caching of DjangoItem's instance property is buggy", "body": "The current implementation of `DjangoItem`'s `instance` should be rewritten in order to reflect further modifications of the item to the underlying Django model instance. \nThese are the steps to reproduce the issue:\n\ncreate a django item:\n`item = MyDjangoItem()`\n\nset a field value:\n`item['foo'] = 1`\n\nsave the item:\n`model = item.save(commit=False)`\nitem.instance is now cached and further modifications to the item are not reflected... \n\nthis returns \"1\"... and it's ok, because it has been previously assigned to the instance\n`print model.foo` \n\nnow... set a new item field\n`item['bar'] = 2`\n\nthis prints None (because the underlying cache has not been updated!!)\n`print item.instance.bar` \n\nIn conclusion... 
the cache should be purged each time the item is updated!\n\nCurrently I overrode the DjangoItem in this way to avoid the issue:\n\n```\nfrom scrapy.contrib.djangoitem import DjangoItem as BaseDjangoItem\n\nclass DjangoItem(BaseDjangoItem):\n def __setitem__(self, key, value):\n self._instance = None\n return super(DjangoItem, self).__setitem__(key, value)\n\n def __delitem__(self, key):\n self._instance = None\n super(DjangoItem, self).__delitem__(key)\n```\n", "pr_html_url": "https://github.com/scrapy/scrapy/pull/1065", "file_loc": {"base_commit": "c9d7386a32aeb4bc7fe9654d194651eee1ede56c", "files": [{"path": "scrapy/contrib/djangoitem.py", "status": "modified", "Loc": {"('DjangoItem', '__init__', 30)": {"add": [33], "mod": [31]}}}, {"path": "tests/test_djangoitem/__init__.py", "status": "modified", "Loc": {"('DjangoItemTest', 'test_default_field_values', 100)": {"add": [103]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["tests/test_djangoitem/__init__.py", "scrapy/contrib/djangoitem.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "776129a9513e2b6ab6f7e8cda1dd3de66cbbff44", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/2452", "iss_label": "bug", "title": "Image Background converting to green.", "body": "Hi, \r\n\r\nThe problem is that I'm downloading images with the crawler, but they have transparency. So the image looks like this.\r\n\r\n![a2062c7f64b9a136c16f1a3d8491b70902986fc4](https://cloud.githubusercontent.com/assets/6665723/21229545/97a44c8c-c2ea-11e6-9f96-5e60106bd340.jpg)\r\n\r\nBut it should look like this.\r\n\r\n![wd-bvbz0120jch-12tb-my-cloud-ex2-ultra-gigabit-ethernet-kisisel-bulut-depolama](https://cloud.githubusercontent.com/assets/6665723/21229505/7bb3b8d2-c2ea-11e6-94fc-921adb2f21ee.png)\r\n\r\n\r\n", "pr_html_url": "https://github.com/scrapy/scrapy/pull/2675", "file_loc": {"base_commit": "776129a9513e2b6ab6f7e8cda1dd3de66cbbff44", "files": [{"path": "scrapy/pipelines/images.py", "status": "modified", "Loc": {"('ImagesPipeline', 'convert_image', 130)": {"add": [134]}}}, {"path": "tests/test_pipeline_images.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [98]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["scrapy/pipelines/images.py"], "doc": [], "test": ["tests/test_pipeline_images.py"], "config": [], "asset": []}}, {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "d874c4d90bcf96c7e5b507babaa2a45a233da506", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/4124", "iss_label": "bug\ndocs", "title": "Fix wrong fact in JOBDIR documentation about requests needing to be pickle-serializable", "body": "The documentation about using `JOBDIR` says that requests need to be serializable with `pickle`.\r\n\r\nBut, thanks to feedback from @kmike, now I know that their callback and errback methods do not need to be `pickle`-serializable as long as they are spider methods.\r\n\r\nThe documentation should be clear about this.\r\n\r\nRelated to #4125.", "pr_html_url": "https://github.com/scrapy/scrapy/pull/4139", "file_loc": {"base_commit": "d874c4d90bcf96c7e5b507babaa2a45a233da506", "files": [{"path": 
"docs/topics/jobs.rst", "status": "modified", "Loc": {"(None, None, None)": {"mod": [74, 75, 77, 78, 80, 82, 83, 84, 85, 87, 88, 90, 92, 93, 94, 95, 97, 98, 104]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": ["docs/topics/jobs.rst"], "test": [], "config": [], "asset": []}}, {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "62a4ede5e995f83abd5a90f7dd6ac242f2f3870d", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/4250", "iss_label": "enhancement", "title": "Batch deliveries for long running crawlers", "body": "## Summary\r\n\r\nAdd a new setting `FEED_STORAGE_BATCH` that will deliver a file whenever `item_scraped_count` reaches a multiple of that number.\r\n\r\n## Motivation\r\n\r\nFor long running jobs (say we are consuming inputs from a working queue) we may want partial results instead of waiting for a long batch to finish.\r\n\r\n## Describe alternatives you've considered\r\n\r\nOf course we can stop and restart a spider every now and then.\r\nHowever, a simpler approach is to have it running as long as required, but delivering partial results.\r\n", "pr_html_url": "https://github.com/scrapy/scrapy/pull/4434", "file_loc": {"base_commit": "62a4ede5e995f83abd5a90f7dd6ac242f2f3870d", "files": [{"path": "docs/topics/feed-exports.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [243, 294, 448]}}}, {"path": "scrapy/extensions/feedexport.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [8]}, "('_FeedSlot', '__init__', 209)": {"add": [216], "mod": [214]}, "('FeedExporter', '__init__', 242)": {"add": [272]}, "('FeedExporter', None, 232)": {"add": [324, 325, 345], "mod": [371]}, "('FeedExporter', 'item_scraped', 325)": {"add": [329]}, "('_FeedSlot', None, 208)": {"mod": [209]}, "('FeedExporter', 'open_spider', 276)": {"mod": [278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291]}, "('FeedExporter', 'close_spider', 293)": {"mod": [296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321]}, "('FeedExporter', '_get_uri_params', 371)": {"mod": [375, 376]}}}, {"path": "scrapy/settings/default_settings.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [149]}}}, {"path": "scrapy/utils/conf.py", "status": "modified", "Loc": {"(None, 'feed_complete_default_values_from_settings', 116)": {"add": [117]}}}, {"path": "tests/test_feedexport.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [8, 26, 29]}, "('FeedExportTest', None, 505)": {"add": [511], "mod": [505, 518, 519, 520, 521, 562, 563, 564, 565, 566, 567, 568, 570, 571, 572, 574, 575, 577, 578, 579, 580, 581, 582, 583, 585, 586, 588, 589, 656, 657, 658, 659, 660, 661, 662, 663, 664, 665, 666, 696, 697, 698, 699, 700, 701, 702, 703]}, "('FeedExportTest', 'test_multiple_feeds_failing_logs_blocking_feed_storage', 1147)": {"add": [1165]}, "('FeedExportTest', 'test_export_no_items_not_store_empty', 720)": {"mod": [728]}, "('FeedExportTest', 'test_export_no_items_store_empty', 731)": {"mod": [748]}}}, {"path": "tests/test_utils_conf.py", "status": "modified", "Loc": {"('FeedExportConfigTestCase', 'test_feed_complete_default_values_from_settings_empty', 144)": {"add": [151, 159]}, "('FeedExportConfigTestCase', 'test_feed_complete_default_values_from_settings_non_empty', 162)": {"add": [171, 
179]}}}, {"path": "tox.ini", "status": "modified", "Loc": {"(None, None, None)": {"add": [14]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["scrapy/utils/conf.py", "scrapy/extensions/feedexport.py", "scrapy/settings/default_settings.py"], "doc": ["docs/topics/feed-exports.rst"], "test": ["tests/test_utils_conf.py", "tests/test_feedexport.py"], "config": ["tox.ini"], "asset": []}}, {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "016c7e92d1d2893e7a8ce61c7f2e76818e71d019", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/5161", "iss_label": "", "title": "Feeds Enhancement: Item Filters", "body": "<!--\r\n\r\nThanks for taking an interest in Scrapy!\r\n\r\nIf you have a question that starts with \"How to...\", please see the Scrapy Community page: https://scrapy.org/community/.\r\nThe GitHub issue tracker's purpose is to deal with bug reports and feature requests for the project itself.\r\n\r\nKeep in mind that by filing an issue, you are expected to comply with Scrapy's Code of Conduct, including treating everyone with respect: https://github.com/scrapy/scrapy/blob/master/CODE_OF_CONDUCT.md\r\n\r\nThe following is a suggested template to structure your pull request, you can find more guidelines at https://doc.scrapy.org/en/latest/contributing.html#writing-patches and https://doc.scrapy.org/en/latest/contributing.html#submitting-patches\r\n\r\n-->\r\n\r\n## Summary\r\n\r\nCurrently there are no convenient ways to filter items before they can be exported. An ```ItemChecker``` class can be used to filter items while also providing flexibility to the user. \r\n\r\n## Motivation/Proposal\r\n\r\nScrapy currently doesn't have any convenient APIs to customize conditions for item exports. An ```ItemChecker``` class can be used by the user to define constraints for acceptable items for particular feeds.\r\n\r\nThe ```ItemChecker``` class can have 3 main public methods ```accepts```, ```accepts_class``` and ```accepts_fields```. Scrapy will mainly use ```accepts``` method to decide if an item is acceptable, ```accepts_class``` and ```accepts_fields``` will have certain default behaviors which can be overriden by the user should they want to customize them.\r\n```python\r\nclass ItemChecker:\r\n \"\"\"\r\n This will be used by FeedExporter to decide if an item should be allowed\r\n to be exported to a particular feed.\r\n :param feed_options: FEEDS dictionary passed from FeedExporter\r\n :type feed_options: dict\r\n \"\"\"\r\n accepted_items = [] # list of Items user wants to accept\r\n\r\n def __init__(self, feed_options):\r\n # populate accepted_items with item_classes values from feed_options if present\r\n\r\n def accepts(self, item):\r\n \"\"\"\r\n Main method to be used by FeedExporter to check if the item is acceptable according\r\n to defined constraints. This method uses accepts_class and accept_fields method\r\n to decide if the item is acceptable.\r\n :param item: scraped item which user wants to check if is acceptable\r\n :type item: scrapy supported items (dictionaries, Item objects, dataclass objects, and attrs objects)\r\n :return: `True` if accepted, `False` otherwise\r\n :rtype: bool\r\n \"\"\"\r\n\r\n def accepts_class(self, item):\r\n \"\"\"\r\n Method to check if the item is an instance of a class declared in accepted_items\r\n list. 
Can be overridden by the user if they want to allow certain item classes.\r\n Default behaviour: if accepted_items is empty then all items will be\r\n accepted, else only items present in accepted_items will be accepted.\r\n :param item: scraped item\r\n :type item: scrapy supported items (dictionaries, Item objects, dataclass objects, and attrs objects)\r\n :return: `True` if item in accepted_items, `False` otherwise\r\n :rtype: bool\r\n \"\"\"\r\n\r\n def accepts_fields(self, fields):\r\n \"\"\"\r\n Method to check if certain fields of the item pass the filtering\r\n criteria. Users can override this method to add their own custom\r\n filters.\r\n Default behaviour: accepts all fields.\r\n :param fields: all the fields of the scraped item\r\n :type fields: dict\r\n :return: `True` if all the fields pass the filtering criteria, else `False`\r\n :rtype: bool\r\n \"\"\"\r\n```\r\nSuch custom filters can be declared in ```settings.py```. For convenience, Items can also be declared here without needing to create a custom ```ItemChecker``` class.\r\n```python\r\nfrom myproject.filterfile import MyFilter1\r\nfrom myproject.items import MyItem1, MyItem2\r\n\r\nFEEDS = {\r\n 'items1.json': {\r\n 'format': 'json',\r\n 'item_filter': MyFilter1,\r\n },\r\n 'items2.xml': {\r\n 'format': 'xml',\r\n 'item_classes': (MyItem1, MyItem2),\r\n },\r\n}\r\n```\r\n\r\n\r\n## Describe alternatives you've considered\r\n\r\nThis feature builds upon #4576. \r\n\r\n## Additional context\r\n\r\nThis feature proposal is part of a GSoC project (see #4963). This issue has been created to get inputs from the Scrapy community to refine the proposed feature.\r\n", "pr_html_url": "https://github.com/scrapy/scrapy/pull/5178", "file_loc": {"base_commit": "016c7e92d1d2893e7a8ce61c7f2e76818e71d019", "files": [{"path": "docs/topics/feed-exports.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [270, 313, 322, 328, 349]}}}, {"path": "scrapy/extensions/feedexport.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [48]}, "('_FeedSlot', '__init__', 218)": {"add": [227]}, "('FeedExporter', '__init__', 253)": {"add": [257, 271, 277]}, "('FeedExporter', '_start_new_batch', 342)": {"add": [370]}, "('FeedExporter', 'item_scraped', 376)": {"add": [378]}, "('FeedExporter', '_get_uri_params', 478)": {"add": [488]}, "('_FeedSlot', None, 217)": {"mod": [218]}}}, {"path": "tests/test_feedexport.py", "status": "modified", "Loc": {"('FeedExportTestBase', None, 556)": {"add": [563]}, "('FeedExportTest', None, 632)": {"add": [931]}, "('FeedExportTest', 'test_export_multiple_item_classes', 889)": {"mod": [891, 892, 893, 897]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["scrapy/extensions/feedexport.py"], "doc": ["docs/topics/feed-exports.rst"], "test": ["tests/test_feedexport.py"], "config": [], "asset": []}}, {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "1445ebd2294cd3d1d8886649fec969bfe78979ad", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/5655", "iss_label": "", "title": "Add twine check to CI", "body": "It'd be nice to do https://twine.readthedocs.io/en/stable/#twine-check on CI, to ensure our changes don't break pypi rendering.", "pr_html_url": "https://github.com/scrapy/scrapy/pull/5656", "file_loc": {"base_commit": "1445ebd2294cd3d1d8886649fec969bfe78979ad", "files": [{"path": 
".github/workflows/checks.yml", "status": "modified", "Loc": {"(None, None, None)": {"add": [27]}}}, {"path": "tox.ini", "status": "modified", "Loc": {"(None, None, None)": {"add": [73]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": [], "test": [], "config": ["tox.ini", ".github/workflows/checks.yml"], "asset": []}}, {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "3a4d57b3f52633d77291e51f31353fd317034d8c", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/1223", "iss_label": "", "title": "FeedExporters export empty dicts when FEED_EXPORT_FIELDS setting is not set", "body": "Reported by several users using S3 exporter and JsonItemExporter.\n\nIt looks like the docs for [FEED_EXPORT_FIELDS](http://doc.scrapy.org/en/master/topics/feed-exports.html?#std:setting-FEED_EXPORT_FIELDS) do not match current behaviour:\n\n> When omitted, Scrapy uses fields defined in Item subclasses a spider is yielding. If raw dicts are used as items Scrapy tries to infer field names from the exported data - currently it uses field names from the first item.\n\n```\n if self.fields_to_export is None:\n if include_empty and not isinstance(item, dict):\n field_iter = six.iterkeys(item.fields)\n else:\n field_iter = six.iterkeys(item)\n```\n\nhttps://github.com/scrapy/scrapy/blob/master/scrapy/exporters/__init__.py#L59\n\nThis following line fetching settings for FEED_EXPORT_FIELDS returns an empty list `[]` when setting is absent, and not `None` as one would expect (a bug in `settings.getlist()` IMO)\n\n```\nself.export_fields = settings.getlist('FEED_EXPORT_FIELDS')\n```\n\nhttps://github.com/scrapy/scrapy/blob/master/scrapy/extensions/feedexport.py#L154\n", "pr_html_url": "https://github.com/scrapy/scrapy/pull/1224", "file_loc": {"base_commit": "3a4d57b3f52633d77291e51f31353fd317034d8c", "files": [{"path": "docs/topics/feed-exports.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [238], "mod": [244, 245, 246]}}}, {"path": "scrapy/extensions/feedexport.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [24]}, "('FeedExporter', '__init__', 142)": {"mod": [155]}}}, {"path": "tests/test_feedexport.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [3]}, "('FeedExportTest', None, 120)": {"mod": [183, 197, 233, 247]}, "('FeedExportTest', 'test_export_csv_items', 183)": {"mod": [194]}, "('FeedExportTest', 'test_export_csv_multiple_item_classes', 197)": {"mod": [210, 212, 218, 220, 229, 230]}, "('FeedExportTest', 'test_export_csv_dicts', 233)": {"mod": [235, 240, 244]}, "('FeedExportTest', 'test_export_csv_feed_export_fields', 247)": {"mod": [263, 264, 272, 273]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["scrapy/extensions/feedexport.py"], "doc": ["docs/topics/feed-exports.rst"], "test": ["tests/test_feedexport.py"], "config": [], "asset": []}}, {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "e1f66620ec7341c55f3eb7f44088224b5f68c1ad", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/5855", "iss_label": "good first issue\nCI", "title": "test_batch_path_differ sometimes fails", "body": "See 
https://github.com/scrapy/scrapy/pull/5847#issuecomment-1471778039.", "pr_html_url": "https://github.com/scrapy/scrapy/pull/5898", "file_loc": {"base_commit": "e1f66620ec7341c55f3eb7f44088224b5f68c1ad", "files": [{"path": "tests/test_feedexport.py", "status": "modified", "Loc": {"('BatchDeliveriesTest', 'test_batch_path_differ', 2542)": {"mod": [2545, 2555]}, "('BatchDeliveriesTest', 'test_s3_export', 2587)": {"mod": [2618]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": [], "test": ["tests/test_feedexport.py"], "config": [], "asset": []}}, {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "0f4b70f5821b4db2882ad4f01d340f62bbb01bf7", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/10", "iss_label": "enhancement", "title": "Add support for FTP downloads", "body": "We should add support for following FTP links like:\n ftp://www.example.com/somedir/somefile.xml\n\nI suppose Requests will only use the URL attribute (and perhaps some data in meta, if it's needed). \n\nAs for Responses, they will contain the file contents in the body, as one would expect.\nThere should be a flag to enable/disable passive FTP, perhaps even per spider.\n", "pr_html_url": "https://github.com/scrapy/scrapy/pull/329", "file_loc": {"base_commit": "0f4b70f5821b4db2882ad4f01d340f62bbb01bf7", "files": [{"path": "scrapy/settings/default_settings.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [58]}}}, {"path": "scrapy/tests/test_downloader_handlers.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [11, 18]}, "('S3TestCase', 'test_request_signing6', 309)": {"add": [328]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["scrapy/settings/default_settings.py"], "doc": [], "test": ["scrapy/tests/test_downloader_handlers.py"], "config": [], "asset": []}}, {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "c7654f7cb1081f0937f84c1b2ed272318c9c2c6c", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/2435", "iss_label": "enhancement", "title": "Exposing downloader stats to custom scheduler", "body": "In order to get maximum fetching performance, the queue has to be carefully metered. In order to do this, the custom scheduler needs to know:\r\n- the type of the key in the downloader (ip or hostname),\r\n- count of requests to a specific hostname/ip in the queue,\r\n- delay/concurrency parameters of the hostname/ip,\r\n- list of all hostname/ips in the queue.\r\n\r\nThe current Scheduler API is designed for storage and resume-from-disk purposes, so I think it's time to re-think it, taking fetching efficiency into account. 
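To illustrate the kind of information such a scheduler needs, here is a sketch that reads the downloader's internals. `crawler.engine.downloader.slots` and the per-slot attributes are undocumented implementation details rather than a public API, so treat the attribute names as assumptions that may change between releases:

```python
def downloader_slot_stats(crawler):
    """Snapshot per-slot downloader state (relies on undocumented internals)."""
    downloader = crawler.engine.downloader  # scrapy.core.downloader.Downloader
    return {
        key: {
            "active": len(slot.active),        # requests currently in flight
            "queued": len(slot.queue),         # requests waiting inside the downloader
            "concurrency": slot.concurrency,   # per-slot concurrency limit
            "delay": slot.delay,               # per-slot download delay
        }
        for key, slot in downloader.slots.items()
    }
```

A metering scheduler could poll something like this to decide which queue to pull from next, which is exactly the information this issue asks to expose officially.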
The most common problem with inefficient fetching is a queue filled with a single domain combined with a polite crawling requirement.", "pr_html_url": "https://github.com/scrapy/scrapy/pull/3393", "file_loc": {"base_commit": "c7654f7cb1081f0937f84c1b2ed272318c9c2c6c", "files": [{"path": ".gitignore", "status": "modified", "Loc": {"(None, None, None)": {"add": [14]}}}, {"path": "docs/topics/signals.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [281]}}}, {"path": "scrapy/core/downloader/__init__.py", "status": "modified", "Loc": {"('Downloader', '_enqueue_request', 123)": {"add": [131]}}}, {"path": "scrapy/signals.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [15]}}}, {"path": "tests/test_engine.py", "status": "modified", "Loc": {"('CrawlerRun', '__init__', 101)": {"add": [105]}, "('CrawlerRun', 'run', 111)": {"add": [126]}, "('CrawlerRun', None, 98)": {"add": [157]}, "('EngineTest', '_assert_scheduled_requests', 202)": {"add": [214]}, "('EngineTest', '_assert_downloaded_responses', 219)": {"add": [221]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["scrapy/core/downloader/__init__.py", "scrapy/signals.py"], "doc": ["docs/topics/signals.rst"], "test": ["tests/test_engine.py"], "config": [".gitignore"], "asset": []}}, {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "794ab19660d369f273abdd5b93721c209f6e4eab", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/4556", "iss_label": "enhancement\ngood first issue\ndocs", "title": "Cover chompjs in the documentation", "body": "We cover js2xml in the documentation. However, the library can be rather slow. For use cases where https://github.com/Nykakin/chompjs may be used instead, its use should be encouraged.", "pr_html_url": "https://github.com/scrapy/scrapy/pull/4562", "file_loc": {"base_commit": "794ab19660d369f273abdd5b93721c209f6e4eab", "files": [{"path": "docs/topics/dynamic-content.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [186, 243]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": ["docs/topics/dynamic-content.rst"], "test": [], "config": [], "asset": []}}, {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "6e49c379a8ecfe92c99a37b6bb6d7e440df56bd9", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/3171", "iss_label": "enhancement\ndiscuss", "title": "Log with error level instead of debug when reaching max retry times", "body": "There\u2019s no easy, non-hackish way to log an error when reaching max retry times with the standard scrapy `RetryMiddleware`. 
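Until the default changes, one workaround is a small subclass that adds the error-level log. A sketch, assuming the private `_retry()` hook and the `retry_times`/`max_retry_times` meta keys keep their current semantics:

```python
import logging

from scrapy.downloadermiddlewares.retry import RetryMiddleware

logger = logging.getLogger(__name__)

class ErrorLoggingRetryMiddleware(RetryMiddleware):
    def _retry(self, request, reason, spider):
        retries = request.meta.get('retry_times', 0) + 1
        max_retries = request.meta.get('max_retry_times', self.max_retry_times)
        if retries > max_retries:
            # the stock middleware logs the give-up message at DEBUG; surface it as an error
            logger.error("Gave up retrying %(request)s (failed %(retries)d times): %(reason)s",
                         {'request': request, 'retries': retries, 'reason': reason},
                         extra={'spider': spider})
        return super()._retry(request, reason, spider)
```

It would be registered in `DOWNLOADER_MIDDLEWARES` in place of the stock `RetryMiddleware`.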
It\u2019s useful for me to be able to see right away if a page I tried to crawl has not been downloaded.\r\n\r\nI think it\u2019s sensible to change this line to log to error level instead:\r\n\r\nhttps://github.com/scrapy/scrapy/blob/6cc6bbb5fc5c102271829a554772effb0444023c/scrapy/downloadermiddlewares/retry.py#L89", "pr_html_url": "https://github.com/scrapy/scrapy/pull/3566", "file_loc": {"base_commit": "6e49c379a8ecfe92c99a37b6bb6d7e440df56bd9", "files": [{"path": "scrapy/downloadermiddlewares/retry.py", "status": "modified", "Loc": {"('RetryMiddleware', '_retry', 61)": {"mod": [87]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["scrapy/downloadermiddlewares/retry.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "7ae32ea38d9b78402528ac3dffc8e1c5f1cf86b7", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/5774", "iss_label": "enhancement\nCI", "title": "Deprecate direct invocation of `setup.py` ", "body": "I was reading this article: [Why you shouldn't invoke setup.py directly](https://blog.ganssle.io/articles/2021/10/setup-py-deprecated.html) that explains why this is not a good practice anymore. I thought about discussing this. Should we change to the new approach?\r\n\r\nWe have only a few places where we directly invoke `setup.py`, so it should be an easy task to replace it.\r\n\r\nhttps://github.com/scrapy/scrapy/search?q=setup.py", "pr_html_url": "https://github.com/scrapy/scrapy/pull/5776", "file_loc": {"base_commit": "7ae32ea38d9b78402528ac3dffc8e1c5f1cf86b7", "files": [{"path": ".github/workflows/publish.yml", "status": "modified", "Loc": {"(None, None, None)": {"mod": [27, 28]}}}, {"path": "tox.ini", "status": "modified", "Loc": {"(None, None, None)": {"add": [77], "mod": [79]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": [], "test": [], "config": ["tox.ini", ".github/workflows/publish.yml"], "asset": []}}, {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "c57512fa669e6f6b1b766a7639206a380f0d10ce", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/50", "iss_label": "enhancement\ndiscuss\npatch available", "title": "Offsite middleware ignoring port", "body": "In my spider I have the following:\n\n``` python\nclass MySpider(BaseSpider):\n allowed_domains = ['192.169.0.15:8080']\n```\n\nand in the parse method I do something like:\n\n``` python\n yield Request('http://192.169.0.15:8080/mypage.html', self.my_callback_function)\n```\n\nthe result when I run the code is that Scrapy reports:\n\n DEBUG: Filtered offsite request to '192.168.0.15': <GET http://192.168.0.15:8080/mypage.html>\n\nWhich is wrong - it seems to be ignoring the port. If I change the allowed_domains to:\n\n``` python\n allowed_domains = ['192.169.0.15:8080', '192.16.0.15']\n```\n\nThen it works as you would expect it to. No big deal, I can work around it, but I think it is a bug. 
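To illustrate the expected semantics: `urlparse` keeps the port in `netloc`, so a port-aware check treats `'192.169.0.15:8080'` as its own host. This is a simplified sketch of the behavior the reporter expects, not OffsiteMiddleware's actual implementation:

```python
from urllib.parse import urlparse

allowed_domains = {'192.169.0.15:8080'}

def is_offsite(url):
    parts = urlparse(url)
    # netloc includes the port ('192.169.0.15:8080'); hostname drops it
    return parts.netloc not in allowed_domains and parts.hostname not in allowed_domains

print(is_offsite('http://192.169.0.15:8080/mypage.html'))  # False: host and port match
```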
The problem is located in the should_follow method of the OffsiteMiddleware class in contrib/spidermiddleware/offsite.py\n", "pr_html_url": "https://github.com/scrapy/scrapy/pull/4413", "file_loc": {"base_commit": "c57512fa669e6f6b1b766a7639206a380f0d10ce", "files": [{"path": "scrapy/spidermiddlewares/offsite.py", "status": "modified", "Loc": {"('URLWarning', None, 71)": {"add": [72]}, "('OffsiteMiddleware', 'get_host_regex', 51)": {"mod": [56, 58, 62]}}}, {"path": "tests/test_spidermiddleware_offsite.py", "status": "modified", "Loc": {"('TestOffsiteMiddleware5', 'test_get_host_regex', 77)": {"add": [82]}, "(None, None, None)": {"mod": [7]}, "('TestOffsiteMiddleware', 'test_process_spider_output', 22)": {"mod": [29]}, "('TestOffsiteMiddleware3', None, 56)": {"mod": [58, 59]}, "('TestOffsiteMiddleware4', None, 62)": {"mod": [64, 65, 66]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["scrapy/spidermiddlewares/offsite.py"], "doc": [], "test": ["tests/test_spidermiddleware_offsite.py"], "config": [], "asset": []}}, {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "845c64b89df765ff5c015632c082b6472e61b7d3", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/306", "iss_label": "", "title": "--output-format raise on invalid format", "body": "_Using version 0.16.4_\n\nCurrently, if an invalid format is passed to the `-t` or `--output-format` options, the spider will proceed with its crawling operation, but no output will be saved or produced. This can be frustrating on large scrape runs: a user who passed a mistyped format and assumed the run was saving output may only find out later that the scraped data was merely logged to stdout.\n\nShould we make the output format option raise or fail+exit if an invalid or unknown format is passed?\n", "pr_html_url": "https://github.com/scrapy/scrapy/pull/307", "file_loc": {"base_commit": "845c64b89df765ff5c015632c082b6472e61b7d3", "files": [{"path": "scrapy/commands/crawl.py", "status": "modified", "Loc": {"('Command', 'process_options', 24)": {"add": [34]}}}, {"path": "scrapy/commands/runspider.py", "status": "modified", "Loc": {"('Command', 'process_options', 46)": {"add": [56]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["scrapy/commands/crawl.py", "scrapy/commands/runspider.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "5f194202114fd38530c78299d51b6966b4802f59", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/5621", "iss_label": "", "title": "Support for Twisted 22.8.0", "body": "Twisted 22.8.0 was released recently, and it says:\r\n\r\n> Twisted now works with Cryptography versions 37 and above, and as a result, its minimum TLS protocol version has been upgraded to TLSv1.2.\r\n\r\nConsequently, tests on some envs, including 3.8, now fail because they install older cryptography.", "pr_html_url": "https://github.com/scrapy/scrapy/pull/5632", "file_loc": {"base_commit": "5f194202114fd38530c78299d51b6966b4802f59", "files": [{"path": "setup.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [23, 27, 29]}}}, {"path": 
"tests/test_crawler.py", "status": "modified", "Loc": {"('CrawlerProcessSubprocess', 'test_reactor_default_twisted_reactor_select', 328)": {"mod": [330]}}}, {"path": "tox.ini", "status": "modified", "Loc": {"(None, None, None)": {"mod": [76, 82, 84]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["setup.py"], "doc": [], "test": ["tests/test_crawler.py"], "config": ["tox.ini"], "asset": []}}, {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "67ab8d4650c1e9212c9508803c7b5265e166cbaa", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/6433", "iss_label": "", "title": "core.engine/Signal handler polluting log", "body": "### Description\r\n\r\nThe `OffsiteMiddleware` logs a single message for each domain filtered. Great!\r\nBut then the `core.engine` logs a message for every single url filtered by the OffsiteMiddleware.\r\n(LOG_LEVEL: DEBUG)\r\n\r\nThe websites I am scraping have like 10 external links to twitter/youtube/etc in each page. For hundreds pages scrapped, the only thing I can see in the logs is `Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request`. \r\n\r\nI don't know if this is intended behavior. If so, it is obviously not a bug.\r\nBut nonetheless, it is very different behavior compared to previous 1.x Scrapy versions. (I don't know when it has changed and I couldn't find anything in the release notes about that.)\r\n\r\nIf not a bug, maybe we could discuss the possibility of changing this behavior so we can have logs less polluted when debugging.\r\n\r\n### Steps to Reproduce\r\n\r\n#### Just run the following spider.\r\n(url taken from another issue).\r\n\r\n```python\r\nimport scrapy\r\n\r\nclass TestSpider(scrapy.spiders.CrawlSpider):\r\n name = 'test'\r\n allowed_domains = ['capybala.com']\r\n start_urls = ['https://capybala.com/']\r\n custom_settings = {\r\n 'REQUEST_FINGERPRINTER_IMPLEMENTATION': '2.7',\r\n 'TWISTED_REACTOR': 'twisted.internet.asyncioreactor.AsyncioSelectorReactor',\r\n 'LOG_LEVEL': 'DEBUG'\r\n }\r\n\r\n rules = (scrapy.spiders.Rule(scrapy.linkextractors.LinkExtractor(), callback='parse', follow=True),)\r\n \r\n def parse(self, response):\r\n print('noop')\r\n```\r\n\r\n#### Output: \r\n```txt\r\n2024-07-08 16:34:43 [scrapy.utils.log] INFO: Scrapy 2.11.2 started (bot: scrapybot)\r\n2024-07-08 16:34:43 [scrapy.utils.log] INFO: Versions: lxml 5.2.2.0, libxml2 2.12.6, cssselect 1.2.0, parsel 1.9.1, w3lib 2.2.1, Twisted 24.3.0, Python 3.12.4 (main, Jul 3 2024, 16:55:58) [GCC 11.2.0], pyOpenSSL 24.1.0 (OpenSSL 3.2.2 4 Jun 2024), cryptography 42.0.8, Platform Linux-5.15.145-x86_64-AMD_Ryzen_9_5980HX_with_Radeon_Graphics-with-glibc2.33\r\n2024-07-08 16:34:43 [scrapy.addons] INFO: Enabled addons:\r\n[]\r\n2024-07-08 16:34:43 [asyncio] DEBUG: Using selector: EpollSelector\r\n2024-07-08 16:34:43 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.asyncioreactor.AsyncioSelectorReactor\r\n2024-07-08 16:34:43 [scrapy.utils.log] DEBUG: Using asyncio event loop: asyncio.unix_events._UnixSelectorEventLoop\r\n2024-07-08 16:34:43 [scrapy.extensions.telnet] INFO: Telnet Password: d2c4cce2938fba32\r\n2024-07-08 16:34:43 [scrapy.middleware] INFO: Enabled extensions:\r\n['scrapy.extensions.corestats.CoreStats',\r\n 'scrapy.extensions.telnet.TelnetConsole',\r\n 'scrapy.extensions.memusage.MemoryUsage',\r\n 
'scrapy.extensions.logstats.LogStats']\r\n2024-07-08 16:34:43 [scrapy.crawler] INFO: Overridden settings:\r\n{'REQUEST_FINGERPRINTER_IMPLEMENTATION': '2.7',\r\n 'SPIDER_LOADER_WARN_ONLY': True,\r\n 'TWISTED_REACTOR': 'twisted.internet.asyncioreactor.AsyncioSelectorReactor'}\r\n2024-07-08 16:34:43 [scrapy.middleware] INFO: Enabled downloader middlewares:\r\n['scrapy.downloadermiddlewares.offsite.OffsiteMiddleware',\r\n 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',\r\n 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',\r\n 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',\r\n 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',\r\n 'scrapy.downloadermiddlewares.retry.RetryMiddleware',\r\n 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',\r\n 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',\r\n 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',\r\n 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',\r\n 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',\r\n 'scrapy.downloadermiddlewares.stats.DownloaderStats']\r\n2024-07-08 16:34:43 [scrapy.middleware] INFO: Enabled spider middlewares:\r\n['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',\r\n 'scrapy.spidermiddlewares.referer.RefererMiddleware',\r\n 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',\r\n 'scrapy.spidermiddlewares.depth.DepthMiddleware']\r\n2024-07-08 16:34:43 [scrapy.middleware] INFO: Enabled item pipelines:\r\n[]\r\n2024-07-08 16:34:43 [scrapy.core.engine] INFO: Spider opened\r\n2024-07-08 16:34:43 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)\r\n2024-07-08 16:34:43 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023\r\n2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://capybala.com/> (referer: None)\r\n2024-07-08 16:34:44 [scrapy.downloadermiddlewares.offsite] DEBUG: Filtered offsite request to 'bokuran.com': <GET https://bokuran.com/>\r\n2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET https://bokuran.com/> before it reached the scheduler.\r\n2024-07-08 16:34:44 [scrapy.downloadermiddlewares.offsite] DEBUG: Filtered offsite request to 'webooker.info': <GET http://webooker.info/2013/10/ebook1-release/>\r\n2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET http://webooker.info/2013/10/ebook1-release/> before it reached the scheduler.\r\n2024-07-08 16:34:44 [scrapy.downloadermiddlewares.offsite] DEBUG: Filtered offsite request to 'ebook-1.com': <GET https://ebook-1.com/>\r\n2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET https://ebook-1.com/> before it reached the scheduler.\r\n2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://capybala.com/> (referer: https://capybala.com/)\r\n2024-07-08 16:34:44 [scrapy.downloadermiddlewares.offsite] DEBUG: Filtered offsite request to 'chrome.google.com': <GET https://chrome.google.com/webstore/detail/find-ebook-edition/jhhpocdmfelpmobcnmjfppdpnbepkono>\r\n2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request 
<GET https://chrome.google.com/webstore/detail/find-ebook-edition/jhhpocdmfelpmobcnmjfppdpnbepkono> before it reached the scheduler.\r\n2024-07-08 16:34:44 [scrapy.downloadermiddlewares.offsite] DEBUG: Filtered offsite request to 'twitter.com': <GET https://twitter.com/orangain>\r\n2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET https://twitter.com/orangain> before it reached the scheduler.\r\n2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET https://twitter.com/webooker_log> before it reached the scheduler.\r\n2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET http://webooker.info/> before it reached the scheduler.\r\n2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://capybala.com/find-kindle-edition/> (referer: https://capybala.com/)\r\n2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://capybala.com/bokuran/> (referer: https://capybala.com/)\r\n2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://capybala.com/ebook-1/> (referer: https://capybala.com/)\r\n2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://capybala.com/dendrogram/> (referer: https://capybala.com/)\r\nnoop\r\n2024-07-08 16:34:44 [scrapy.dupefilters] DEBUG: Filtered duplicate request: <GET https://capybala.com/> - no more duplicates will be shown (see DUPEFILTER_DEBUG to show all duplicates)\r\n2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET https://bokuran.com/> before it reached the scheduler.\r\n2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET http://webooker.info/2013/10/ebook1-release/> before it reached the scheduler.\r\n2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET https://ebook-1.com/> before it reached the scheduler.\r\n2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET https://chrome.google.com/webstore/detail/find-ebook-edition/jhhpocdmfelpmobcnmjfppdpnbepkono> before it reached the scheduler.\r\n2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET https://twitter.com/orangain> before it reached the scheduler.\r\n2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET https://twitter.com/webooker_log> before it reached the scheduler.\r\n2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET http://webooker.info/> before it reached the scheduler.\r\nnoop\r\n2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET 
https://chrome.google.com/webstore/detail/find-ebook-edition/jhhpocdmfelpmobcnmjfppdpnbepkono> before it reached the scheduler.\r\nnoop\r\n2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET https://bokuran.com/> before it reached the scheduler.\r\nnoop\r\nnoop\r\n2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET http://webooker.info/2013/10/ebook1-release/> before it reached the scheduler.\r\n2024-07-08 16:34:45 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://tree.capybala.com/> (referer: https://capybala.com/)\r\nnoop\r\n2024-07-08 16:34:45 [scrapy.core.engine] INFO: Closing spider (finished)\r\n2024-07-08 16:34:45 [scrapy.statscollectors] INFO: Dumping Scrapy stats:\r\n{'downloader/request_bytes': 1735,\r\n 'downloader/request_count': 7,\r\n 'downloader/request_method_count/GET': 7,\r\n 'downloader/response_bytes': 17486,\r\n 'downloader/response_count': 7,\r\n 'downloader/response_status_count/200': 7,\r\n 'dupefilter/filtered': 16,\r\n 'elapsed_time_seconds': 1.950522,\r\n 'finish_reason': 'finished',\r\n 'finish_time': datetime.datetime(2024, 7, 8, 19, 34, 45, 376469, tzinfo=datetime.timezone.utc),\r\n 'httpcompression/response_bytes': 29892,\r\n 'httpcompression/response_count': 7,\r\n 'log_count/DEBUG': 33,\r\n 'log_count/INFO': 10,\r\n 'memusage/max': 70103040,\r\n 'memusage/startup': 70103040,\r\n 'offsite/domains': 5,\r\n 'offsite/filtered': 17,\r\n 'request_depth_max': 2,\r\n 'response_received_count': 7,\r\n 'scheduler/dequeued': 7,\r\n 'scheduler/dequeued/memory': 7,\r\n 'scheduler/enqueued': 7,\r\n 'scheduler/enqueued/memory': 7,\r\n 'start_time': datetime.datetime(2024, 7, 8, 19, 34, 43, 425947, tzinfo=datetime.timezone.utc)}\r\n2024-07-08 16:34:45 [scrapy.core.engine] INFO: Spider closed (finished)\r\n\r\n```\r\n\r\n**Expected behavior:**\r\n\r\nI was not expecting to see so many `[scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET [...]> before it reached the scheduler.` messages. I believe just the messages given by the OffsiteMiddleware are enough.\r\n\r\n**Actual behavior:**\r\n\r\nThere are **a lot** of \"dropped request\" messages.\r\nFurthermore the same message is replicated several times if the same url is found more than one time. (e.g. 
https://twitter.com/orangain or https://twitter.com/webooker_log in the previous log)\r\n\r\n**Reproduces how often:** always\r\n\r\n### Versions\r\n\r\n$ scrapy version --verbose\r\nScrapy : 2.11.2\r\nlxml : 5.2.2.0\r\nlibxml2 : 2.12.6\r\ncssselect : 1.2.0\r\nparsel : 1.9.1\r\nw3lib : 2.2.1\r\nTwisted : 24.3.0\r\nPython : 3.12.4 (main, Jul 3 2024, 16:55:58) [GCC 11.2.0]\r\npyOpenSSL : 24.1.0 (OpenSSL 3.2.2 4 Jun 2024)\r\ncryptography : 42.0.8\r\nPlatform : Linux-5.15.145-x86_64-AMD_Ryzen_9_5980HX_with_Radeon_Graphics-with-glibc2.33\r\n\r\n### Additional context\r\n\r\nI believe this has nothing to do with the `CrawlSpider`, but that is what I am using.", "pr_html_url": "https://github.com/scrapy/scrapy/pull/6475", "file_loc": {"base_commit": "67ab8d4650c1e9212c9508803c7b5265e166cbaa", "files": [{"path": "scrapy/core/engine.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [42]}, "('ExecutionEngine', '_schedule_request', 319)": {"mod": [328, 329, 330, 331]}}}, {"path": "tests/test_engine.py", "status": "modified", "Loc": {"(None, 'test_request_scheduled_signal', 474)": {"mod": [502]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["scrapy/core/engine.py"], "doc": [], "test": ["tests/test_engine.py"], "config": [], "asset": []}}, {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "d60b4edd11436e61284615ec7ce89f8ac7e46d9a", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/5857", "iss_label": "bug\ngood first issue\nhttps", "title": "TLS logging broken with new cryptography", "body": "https://github.com/pyca/cryptography/pull/8391 dropped `SSL_get_server_tmp_key()` so we need to disable the code that uses it if it's not available.", "pr_html_url": "https://github.com/scrapy/scrapy/pull/5858", "file_loc": {"base_commit": "d60b4edd11436e61284615ec7ce89f8ac7e46d9a", "files": [{"path": "scrapy/utils/ssl.py", "status": "modified", "Loc": {"(None, 'get_temp_key_info', 21)": {"add": [22]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["scrapy/utils/ssl.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "5fac2d7b90da8f06597df8536bbadd6cadef5d7e", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/509", "iss_label": "", "title": "Ubuntu repositories", "body": "The current repositories setup requires maintaining an apt repository per codename (precise, quantal, raring, saucy, trusty,...) and it bugs us on every new ubuntu release.\n\nThe truth is that we build debian packages on a Precise host and upload the same package to all repositories. There is a legacy reason for using multiple repos per codename: we started publishing and building debian packages in Lucid (Python 2.6), and when Precise arrived we had to build for Python 2.7. Lucid packages were published for Karmic, Maverick and Natty, while Precise for the others. There was also the ubuntu switch to Upstart that affected Scrapyd packaging at that time.\n\nThere are two ideas floating around:\n1. Unify repositories and install instructions to:\n \n ```\n deb http://archive.scrapy.org/ubuntu scrapy main\n ```\n2. 
Move repositories to an Ubuntu PPA managed by the Scrapy team.\n\noption (1) is simple and will work as long as Python 2.7 is available in Ubuntu.\n\noption (2) has the advantage that a new debian package is built per codename, and we don't rely on ScrapingHub infrastructure to build and distribute debs. \n\nI intentionally left out the discussion about renaming `scrapy-VERSION` to `scrapy`, but it may be related if we want to publish oldstable/stable/trunk versions under the same name but in different repository _components_. \n", "pr_html_url": "https://github.com/scrapy/scrapy/pull/549", "file_loc": {"base_commit": "7f30a671c3ae545417b627e314688058699b3ffa", "files": [{"path": "docs/topics/ubuntu.rst", "status": "modified", "Loc": {"(None, None, 14)": {"mod": [14, 15, 16]}, "(None, None, 18)": {"mod": [18]}, "(None, None, 20)": {"mod": [20, 21]}, "(None, None, 23)": {"mod": [23]}, "(None, None, 25)": {"mod": [25]}, "(None, None, 27)": {"mod": [27]}, "(None, None, 29)": {"mod": [29]}, "(None, None, 31)": {"mod": [31]}, "(None, None, 33)": {"mod": [33, 35, 37, 39, 40, 41, 43, 44, 46]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": ["docs/topics/ubuntu.rst"], "test": [], "config": [], "asset": []}}, {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "0ee04e1e91f42d7fdd69f20b00a06e7856cdc919", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/4336", "iss_label": "enhancement\ndocs", "title": "Needs change on \"Example of shell session\" in Scrapy 1.8.0 docs", "body": "### Description\r\n\r\nI was learning how to use the Scrapy shell but got an error similar to the one in issue #3314, and found the solution in that issue as well. But, when I looked back into the Docs (1.8.0), the example still uses (') instead of (\"). 
I think it is better to change it, especially for future learners like me.\r\n\r\n**Expected behavior:**\r\n\r\n*Example of shell session*\r\n...\r\nscrapy shell \"https://scrapy.org\" --nolog\r\n\r\n**Actual behavior:**\r\n\r\n*Example of shell session*\r\n...\r\nscrapy shell 'https://scrapy.org' --nolog\r\n", "pr_html_url": "https://github.com/scrapy/scrapy/pull/4450", "file_loc": {"base_commit": "0ee04e1e91f42d7fdd69f20b00a06e7856cdc919", "files": [{"path": "docs/topics/shell.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [158]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": ["docs/topics/shell.rst"], "test": [], "config": [], "asset": []}}, {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "d43a35735a062a4260b002cfbcd3236c77ef9399", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/951", "iss_label": "bug", "title": "Extraction of gzipped sitemap fails in Scrapy 0.24.4", "body": "Retrieving a gzipped sitemap XML (tested on amazon.de) fails.\n\nReproduce with:\nmodify the gunzip method in /utils/gz.py to write the incoming data to a file.\n\ngunzip the file on the command line.\n\nThe unzipped file contains garbled content.\n\ngunzip that file with garbled content a second time and you get the correct content.\n\n-> I suspect that the content coming from the target server is already gzip compressed and scrapy has a bug that causes the gzip decompression to not work properly, resulting in a double-compressed file arriving at the /utils/gz.py gunzip method \n", "pr_html_url": "https://github.com/scrapy/scrapy/pull/2065", "file_loc": {"base_commit": "d43a35735a062a4260b002cfbcd3236c77ef9399", "files": [{"path": "scrapy/utils/gz.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [53]}, "(None, 'is_gzipped', 55)": {"mod": [58]}}}, {"path": "tests/test_downloadermiddleware_httpcompression.py", "status": "modified", "Loc": {"('HttpCompressionTest', None, 22)": {"add": [147]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["scrapy/utils/gz.py"], "doc": [], "test": ["tests/test_downloadermiddleware_httpcompression.py"], "config": [], "asset": []}}, {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "451f1474689a18d6a54630915c42172626624ef7", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/2145", "iss_label": "bug\nin progress", "title": "Disabling RedirectMiddleware results in HttpCompressionMiddleware errors", "body": "I wanted not to redirect `303` responses, but instead retry them.\nFrom the docs, I thought I could achieve it through two settings:\n\n```\nREDIRECT_ENABLED = False\nRETRY_HTTP_CODES = [301, 302, 307, 308, 500, 502, 503, 504, 408]\n```\n\nIt ended up giving me errors on `HttpCompressionMiddleware`:\n\n```\nTraceback (most recent call last):\n File \"twisted/internet/defer.py\", line 1128, in _inlineCallbacks\n result = g.send(result)\n File \"scrapy/core/downloader/middleware.py\", line 53, in process_response\n spider=spider)\n File \"scrapy/downloadermiddlewares/httpcompression.py\", line 38, in process_response\n response = response.replace(**kwargs)\n File \"scrapy/http/response/text.py\", line 50, in replace\n return 
Response.replace(self, *args, **kwargs)\n File \"scrapy/http/response/__init__.py\", line 77, in replace\n return cls(*args, **kwargs)\nTypeError: __init__() got an unexpected keyword argument 'encoding'\n```", "pr_html_url": "https://github.com/scrapy/scrapy/pull/2393", "file_loc": {"base_commit": "451f1474689a18d6a54630915c42172626624ef7", "files": [{"path": "scrapy/downloadermiddlewares/httpcompression.py", "status": "modified", "Loc": {"('HttpCompressionMiddleware', 'process_response', 31)": {"mod": [41]}}}, {"path": "tests/test_downloadermiddleware_httpcompression.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [9]}, "('HttpCompressionTest', None, 25)": {"add": [154]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["scrapy/downloadermiddlewares/httpcompression.py"], "doc": [], "test": ["tests/test_downloadermiddleware_httpcompression.py"], "config": [], "asset": []}}, {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "4626e90df8ba4a945bb9cd6be47a915788e76f23", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/3871", "iss_label": "cleanup", "title": "Deprecate hacky code from get_project_settings()", "body": "[Reported](https://github.com/scrapy/scrapy/pull/3859#issuecomment-510838622) by @nyov:\r\n\r\n> @kmike, would you or someone perhaps also find time to correctly deprecate this (or just rip it > out)?: https://github.com/scrapy/scrapy/blob/9c90d9515a50ede29415b8b5d6ba11229f333b49/scrapy/utils/project.py#L70-L79\r\n> Or is it still needed.", "pr_html_url": "https://github.com/scrapy/scrapy/pull/3910", "file_loc": {"base_commit": "9c514b976ffdf069b81c4b7728a7e8e531710680", "files": [{"path": "scrapy/utils/project.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [10]}, "(None, 'get_project_settings', 60)": {"add": [72]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["scrapy/utils/project.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "c340e72988fc6ec615b7b9851c3d28c16c26a839", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/4802", "iss_label": "bug\nupstream issue", "title": "CachingHostnameResolver does not work with reactor.resolve()", "body": "### Description\r\n\r\nHi. Thank you for maintaining this awesome software :)\r\n\r\nI am working on a project using scrapy that implements a custom downloader class ([link](https://github.com/michael-lazar/mozz-archiver/blob/master/mozz_archiver/downloaders.py)).\r\n\r\nI want to resolve IPv6 addresses, and I found the section in the documentation about the ``DNS_RESOLVER`` setting that was added in #4227. 
I tried enabling the new ``DNS_RESOLVER = \"scrapy.resolver.CachingHostnameResolver\"`` and was immediately greeted with this exception.\r\n\r\n```\r\nUnhandled Error\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.8/site-packages/scrapy/commands/crawl.py\", line 27, in run\r\n self.crawler_process.start()\r\n File \"/usr/local/lib/python3.8/site-packages/scrapy/crawler.py\", line 327, in start\r\n reactor.run(installSignalHandlers=False) # blocking call\r\n File \"/usr/local/lib/python3.8/site-packages/twisted/internet/base.py\", line 1283, in run\r\n self.mainLoop()\r\n File \"/usr/local/lib/python3.8/site-packages/twisted/internet/base.py\", line 1292, in mainLoop\r\n self.runUntilCurrent()\r\n--- <exception caught here> ---\r\n File \"/usr/local/lib/python3.8/site-packages/twisted/internet/base.py\", line 913, in runUntilCurrent\r\n call.func(*call.args, **call.kw)\r\n File \"/usr/local/lib/python3.8/site-packages/twisted/internet/tcp.py\", line 449, in resolveAddress\r\n d = self.reactor.resolve(self.addr[0])\r\n File \"/usr/local/lib/python3.8/site-packages/twisted/internet/base.py\", line 638, in resolve\r\n return self.resolver.getHostByName(name, timeout)\r\n File \"/usr/local/lib/python3.8/site-packages/twisted/internet/_resolver.py\", line 277, in getHostByName\r\n self._nameResolver.resolveHostName(FirstOneWins(result), name, 0,\r\n File \"/usr/local/lib/python3.8/site-packages/scrapy/resolver.py\", line 80, in resolveHostName\r\n class CachingResolutionReceiver(resolutionReceiver):\r\nbuiltins.TypeError: __init__() takes 2 positional arguments but 4 were given\r\n```\r\n\r\n### Steps to Reproduce\r\n\r\nThis is also reproducible using the bundled FTP downloader\r\n\r\n1. ``scrapy startproject scrapy_test``\r\n2. ``scrapy genspider example mozz.us``\r\n3. Add ``DNS_RESOLVER = \"scrapy.resolver.CachingHostnameResolver\"`` to the settings file\r\n4. Change the spider start_url to ``ftp://mozz.us``\r\n5. ``scrapy crawl scrapy_test``\r\n\r\n### Versions\r\n\r\n```\r\nScrapy : 2.3.0\r\nlxml : 4.5.2.0\r\nlibxml2 : 2.9.10\r\ncssselect : 1.1.0\r\nparsel : 1.6.0\r\nw3lib : 1.22.0\r\nTwisted : 20.3.0\r\nPython : 3.8.5 (default, Jul 21 2020, 10:48:26) - [Clang 11.0.3 (clang-1103.0.32.62)]\r\npyOpenSSL : 19.1.0 (OpenSSL 1.1.1g 21 Apr 2020)\r\ncryptography : 3.0\r\nPlatform : macOS-10.15.6-x86_64-i386-64bit\r\n```\r\n\r\n### Additional context\r\n\r\nThis was a tricky one to debug because everything works as expected with the HTTP Agent downloader. This issue only appears when you implement a downloader that depends on calling ``reactor.resolve()`` directly without using ``twisted.internet.endpoints.HostnameEndpoint``.\r\n\r\nI discovered that in the twisted [IHostnameResolver](https://twistedmatrix.com/documents/current/api/twisted.internet.interfaces.IHostnameResolver.html) interface, the ``resolutionReceiver`` method argument is expected to be an *instance* of a resolution receiver class, and not a *type* of a resolution receiver class. 
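To make that contract concrete, here is a small receiver of my own (not Scrapy or Twisted code): the first argument to ``resolveHostName()`` is an object that provides ``IResolutionReceiver``, so a resolver must call through it, never subclass it:

```python
from zope.interface import implementer
from twisted.internet.interfaces import IResolutionReceiver

@implementer(IResolutionReceiver)
class PrintingReceiver:
    """One instance per resolution; the resolver calls these hooks."""

    def resolutionBegan(self, resolutionInProgress):
        print("resolution started")

    def addressResolved(self, address):
        print("got", address)

    def resolutionComplete(self):
        print("done")

# nameResolver.resolveHostName(PrintingReceiver(), "mozz.us")
#                              ^ an instance, exactly like twisted's own
#                                FirstOneWins(result) call shown below.
```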
So I believe the scrapy code below is incorrect:\r\n\r\nhttps://github.com/scrapy/scrapy/blob/5e997587d9b13344a0afa9bb4cf781829a66ce23/scrapy/resolver.py#L76-L80\r\n\r\nThe subclass here only works with the Scrapy Agent because the ``HostnameEndpoint`` does this weird thing where it defines a class with only static methods, so it can pass the class itself instead of instantiating it.\r\n\r\nhttps://github.com/twisted/twisted/blob/22f949f7ce187513f0c218b73186c8a73baa00b4/src/twisted/internet/endpoints.py#L942-L958\r\n\r\n```python\r\n @provider(IResolutionReceiver)\r\n class EndpointReceiver:\r\n @staticmethod\r\n def resolutionBegan(resolutionInProgress):\r\n pass\r\n\r\n @staticmethod\r\n def addressResolved(address):\r\n addresses.append(address)\r\n\r\n @staticmethod\r\n def resolutionComplete():\r\n d.callback(addresses)\r\n\r\n self._nameResolver.resolveHostName(\r\n EndpointReceiver, self._hostText, portNumber=self._port\r\n )\r\n```\r\n\r\nHowever, there are other places in the twisted reactor where twisted does pass an object instance directly to this method.\r\n\r\nhttps://github.com/twisted/twisted/blob/7e3ce790ca9f004ab386f9ecbba8f505d66cd3bd/src/twisted/internet/_resolver.py#L307\r\n\r\n```python\r\n result = Deferred()\r\n self._nameResolver.resolveHostName(FirstOneWins(result), name, 0, [IPv4Address])\r\n return result\r\n```\r\n\r\n\r\n", "pr_html_url": "https://github.com/scrapy/scrapy/pull/4803", "file_loc": {"base_commit": "c340e72988fc6ec615b7b9851c3d28c16c26a839", "files": [{"path": "scrapy/resolver.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [52], "mod": [3]}, "('CachingHostnameResolver', 'resolveHostName', 76)": {"add": [105], "mod": [97, 100, 104]}, "('CachingHostnameResolver', None, 54)": {"mod": [76, 77, 79, 80, 82, 83, 84, 85, 87, 88, 89, 91, 92, 93, 94]}}}, {"path": "tests/CrawlerProcess/alternative_name_resolver.py", "status": "removed", "Loc": {}}, {"path": "tests/CrawlerProcess/default_name_resolver.py", "status": "modified", "Loc": {"('IPv6Spider', None, 5)": {"add": [5]}, "(None, None, None)": {"mod": [10, 11, 12]}}}, {"path": "tests/test_crawler.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [24]}, "('CrawlerProcessSubprocess', None, 292)": {"add": [328], "mod": [324, 325, 326]}, "('ScriptRunnerMixin', None, 282)": {"mod": [283]}, "('ScriptRunnerMixin', 'run_script', 283)": {"mod": [285]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["scrapy/resolver.py", "tests/CrawlerProcess/alternative_name_resolver.py", "tests/CrawlerProcess/default_name_resolver.py"], "doc": [], "test": ["tests/test_crawler.py"], "config": [], "asset": []}}, {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "eb49b459c18fc78709267803582376692519e224", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/2076", "iss_label": "install", "title": "Ubuntu official repository - installation failed (not able to find python-support)", "body": "I used the [manual](http://doc.scrapy.org/en/latest/topics/ubuntu.html) to install scrapy on ubuntu 16.04, but it failed because it was not able to install python-support (>= 0.90.0). 
Other sources report that this package is not part of the new ubuntu xenial anymore.\n\nQuick&Dirty-Workaround:\n\n```\nwget http://launchpadlibrarian.net/109052632/python-support_1.0.15_all.deb\nsudo dpkg -i python-support_1.0.15_all.deb\nsudo apt-get update && sudo apt-get install scrapy\n```\n- http://askubuntu.com/questions/766169/why-no-more-python-support-in-16-04\n- https://launchpad.net/ubuntu/xenial/amd64/python-support/1.0.15\n", "pr_html_url": "https://github.com/scrapy/scrapy/pull/2267", "file_loc": {"base_commit": "eb49b459c18fc78709267803582376692519e224", "files": [{"path": "docs/intro/install.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [39, 51, 176], "mod": [10, 12, 14, 16, 17, 18, 20, 21, 23, 24, 26, 27, 29, 31, 33, 34, 36, 37, 41, 42, 44, 45, 47, 49, 92, 93, 98, 99, 100, 102, 103, 104, 108, 110, 112, 114, 115, 117, 118, 120, 122, 179, 182, 187]}}}, {"path": "docs/topics/ubuntu.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [13]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": ["docs/intro/install.rst", "docs/topics/ubuntu.rst"], "test": [], "config": [], "asset": []}}, {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "b0eaf114e5ebe1c5f38a56ed23fcd0515f34d048", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/1403", "iss_label": "bug", "title": "Exception in LxmLinkExtractor.extract_links 'charmap' codec can't encode character ", "body": "```\nStacktrace (most recent call last):\n\n File \"scrapy/utils/defer.py\", line 102, in iter_errback\n yield next(it)\n File \"scrapy/spidermiddlewares/offsite.py\", line 28, in process_spider_output\n for x in result:\n File \"scrapy/spidermiddlewares/referer.py\", line 22, in <genexpr>\n return (_set_referer(r) for r in result or ())\n File \"scrapy/spidermiddlewares/offsite.py\", line 28, in process_spider_output\n for x in result:\n File \"scrapy/spidermiddlewares/urllength.py\", line 37, in <genexpr>\n return (r for r in result or () if _filter(r))\n File \"scrapy/spidermiddlewares/depth.py\", line 54, in <genexpr>\n return (r for r in result or () if _filter(r))\n File \"scrapy/spiders/crawl.py\", line 69, in _parse_response\n for requests_or_item in iterate_spider_output(cb_res):\n File \"ex_link_crawl/spiders/external_link_spider.py\", line 45, in parse_obj\n for link in LxmlLinkExtractor(allow=(), deny=self.allowed_domains).extract_links(response):\n File \"scrapy/linkextractors/lxmlhtml.py\", line 108, in extract_links\n links = self._extract_links(doc, response.url, response.encoding, base_url)\n File \"scrapy/linkextractors/__init__.py\", line 103, in _extract_links\n return self.link_extractor._extract_links(*args, **kwargs)\n File \"scrapy/linkextractors/lxmlhtml.py\", line 57, in _extract_links\n url = url.encode(response_encoding)\n File \"encodings/cp1252.py\", line 12, in encode\n return codecs.charmap_encode(input,errors,encoding_table)\n\n```\n\nMy use of extractor is following:\n\n```\ndef parse_obj(self, response):\n if not isinstance(response, HtmlResponse):\n return\n for link in LxmlLinkExtractor(allow=(), deny=self.allowed_domains).extract_links(response):\n if not link.nofollow:\n yield LinkCrawlItem(domain=link.url)\n```\n", "pr_html_url": "https://github.com/scrapy/scrapy/pull/4321", "file_loc": {"base_commit": 
"b0eaf114e5ebe1c5f38a56ed23fcd0515f34d048", "files": [{"path": "scrapy/linkextractors/lxmlhtml.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [8, 12]}, "('LxmlParserLinkExtractor', '_extract_links', 54)": {"mod": [69]}}}, {"path": "tests/test_linkextractors.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [5]}, "('LinkExtractorTestCase', None, 17)": {"mod": [19]}, "('LinkExtractorTestCase', 'test_extract_all_links', 31)": {"mod": [33, 34, 35, 36]}, "('LinkExtractorTestCase', 'test_restrict_xpaths_with_html_entities', 212)": {"mod": [217]}, "('LinkExtractorTestCase', 'test_attrs', 311)": {"mod": [313, 314, 315, 316]}, "('LxmlLinkExtractorTestCase', None, 469)": {"mod": [509]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["scrapy/linkextractors/lxmlhtml.py"], "doc": [], "test": ["tests/test_linkextractors.py"], "config": [], "asset": []}}, {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "69398fa148603a1cf0c84fbe2fd5d59daf9caa0c", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/1487", "iss_label": "enhancement\ngood first issue", "title": "Set `scrapy shell name.tld` default scheme to http", "body": "I propose default scheme for sites be set to http:// when using scrapy shell. Like how browsers work.\n\n`scrapy shell yahoo.com` fails but should work.\n\nissue label = trivial\n", "pr_html_url": "https://github.com/scrapy/scrapy/pull/1498", "file_loc": {"base_commit": "69398fa148603a1cf0c84fbe2fd5d59daf9caa0c", "files": [{"path": "scrapy/commands/shell.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [11]}, "('Command', 'run', 42)": {"add": [43]}}}, {"path": "scrapy/utils/url.py", "status": "modified", "Loc": {"(None, 'escape_ajax', 86)": {"add": [112]}}}, {"path": "tests/test_utils_url.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [189], "mod": [7]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["scrapy/commands/shell.py", "scrapy/utils/url.py"], "doc": [], "test": ["tests/test_utils_url.py"], "config": [], "asset": []}}, {"organization": "psf", "repo_name": "requests", "base_commit": "5a41febce249e7b74eb37ba7914998ff08321c38", "iss_has_pr": 1, "iss_html_url": "https://github.com/psf/requests/issues/3633", "iss_label": "", "title": "HTTPS requests through proxies in proposed/3.0.0 aren't configured correctly", "body": "In current master:\n\n```\n>>> import requests\n>>> requests.__version__\n'2.11.1'\n>>> session = requests.Session()\n>>> r = session.get('https://www.jcline.org/', verify=True, proxies={'http': 'http://vagrant:vagrant@localhost:3128', 'https': 'http://vagrant:vagrant@localhost:3128'})\n>>> \n```\n\nIn current proposed/3.0.0:\n\n```\n>>> import requests\n>>> requests.__version__\n'3.0.0'\n>>> session = requests.Session()\n>>> r = session.get('https://www.jcline.org/', verify=True, proxies={'http': 'http://vagrant:vagrant@localhost:3128', 'https': 'http://vagrant:vagrant@localhost:3128'})\nrequests/packages/urllib3/connectionpool.py:838: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/latest/security.html\n InsecureRequestWarning)\n>>> \n```\n\nThis is a problem I introduced in https://github.com/kennethreitz/requests/pull/3109 :disappointed:. What happens right now is if a request is _not_ through a proxy and it's HTTPS, the urllib3 pool manager's `connection_pool_kw` are updated before requesting a new connection using [requests.adapters.HTTPAdapter._update_poolmanager_ssl_kw](https://github.com/kennethreitz/requests/blob/proposed/3.0.0/requests/adapters.py#L204). If it _is_ through a proxy, the keywords aren't updated and the request is made with the default settings for urllib3.\n\nTo me, the most appealing way to fix this is to add a keyword argument, `connection_kwargs` or something, to all the `urllib3.poolmanager.PoolManager.connection_from_*` methods that is either merged into `connection_pool_kw` or overrides them. That way `urllib3` can handle getting the connection pool with the new kwargs in a thread-safe manner. Currently, `requests` has to manage updating the keys and getting the new connection pool with a lock. It seems like that would be better in `urllib3`.\n\nThe other option is to patch up what's currently in `HTTPAdapter` so it handles updating the proxy manager or plain pool manager based on whether proxies are in use.\n\nWhat do people think?\n", "pr_html_url": "https://github.com/psf/requests/pull/4173", "file_loc": {"base_commit": "f3cdbcb86d9535f054f56d937e29293cebc3c55d", "files": [{"path": "requests/adapters.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [55], "mod": [13, 14, 15, 16]}, "('HTTPAdapter', '__init__', 114)": {"mod": [129]}, "('HTTPAdapter', '__setstate__', 137)": {"mod": [142]}, "('HTTPAdapter', None, 85)": {"mod": [207, 208, 209, 210, 211, 213, 214, 215, 216, 217, 218, 219, 220, 221, 223, 225, 226, 227, 229, 230, 232, 233, 234, 236, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 249, 250, 251, 252, 253, 254, 255, 257, 258, 259, 260, 261, 262, 263, 264]}, "('HTTPAdapter', 'get_connection', 303)": {"mod": [312, 313, 314, 316, 318, 319, 320, 321, 322, 323, 324, 325, 326]}}}, {"path": "tests/test_requests.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [33]}, "('TestPreparingURLs', 'test_parameters_for_nonstandard_schemes', 2760)": {"add": [2767]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["requests/adapters.py"], "doc": [], "test": ["tests/test_requests.py"], "config": [], "asset": []}}, {"organization": "psf", "repo_name": "requests", "base_commit": "4e89ba707714e3b58a46c2ed9e220cff8b7f1e6a", "iss_has_pr": 1, "iss_html_url": "https://github.com/psf/requests/issues/2872", "iss_label": "", "title": "Post request hangs in certain cases when body is a StringIO", "body": "This is related to a report for the [Dropbox Python SDK](https://github.com/dropbox/dropbox-sdk-python/issues/27).\n\nThe following hangs:\n\n```\nfrom StringIO import StringIO\ns = StringIO()\ns.write('hello') # This is seeked to the end\nrequests.post('http://www.google.com', data=s) # Hangs: A success would be a 405 error\n```\n\nAfter a cursory look, it looks like the request isn't fully formed so the server doesn't attempt to send a response which leaves the client hanging.\n\nIf we call `s.seek(0)`, this works. 
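For reference, the rewind looks like this under Python 3 (``BytesIO`` in place of the report's ``StringIO``; httpbin.org is just a placeholder endpoint). The point is that requests measures the body from the current stream position, so a buffer left at its end looks like a zero-length body:

```python
from io import BytesIO
import requests

body = BytesIO()
body.write(b"hello")   # position is now 5, the end of the buffer
body.seek(0)           # rewind so all five bytes are actually sent
resp = requests.post("http://httpbin.org/post", data=body)
print(resp.status_code)
```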
A bit more counterintuitively, this also works:\n\n```\nrequests.post('http://www.google.com', data=StringIO())\n```\n", "pr_html_url": "https://github.com/psf/requests/pull/2873", "file_loc": {"base_commit": "4e89ba707714e3b58a46c2ed9e220cff8b7f1e6a", "files": [{"path": "requests/utils.py", "status": "modified", "Loc": {"(None, 'super_len', 50)": {"add": [50], "mod": [52, 54, 55, 57, 63, 78, 80, 81, 82]}}}, {"path": "test_requests.py", "status": "modified", "Loc": {"('UtilsTestCase', None, 1330)": {"add": [1353]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["requests/utils.py"], "doc": [], "test": ["test_requests.py"], "config": [], "asset": []}}, {"organization": "psf", "repo_name": "requests", "base_commit": "56ecdebcc507c71f2386d3bf2ea14db2d27cc834", "iss_has_pr": 1, "iss_html_url": "https://github.com/psf/requests/issues/2756", "iss_label": "Bug\nContributor Friendly", "title": "Json supersedes data in prepare_body", "body": "When not a stream, json supersedes data in prepare_body:\nhttps://github.com/kennethreitz/requests/blob/f5dacf84468ab7e0631cc61a3f1431a32e3e143c/requests/models.py#L446\n\nThis conflicts with the docstring, which indicates that json is only used when data is not specified:\nhttps://github.com/kennethreitz/requests/blob/f5dacf84468ab7e0631cc61a3f1431a32e3e143c/requests/models.py#L195\n", "pr_html_url": "https://github.com/psf/requests/pull/2763", "file_loc": {"base_commit": "56ecdebcc507c71f2386d3bf2ea14db2d27cc834", "files": [{"path": "requests/models.py", "status": "modified", "Loc": {"('PreparedRequest', 'prepare_body', 406)": {"mod": [417, 446]}}}, {"path": "test_requests.py", "status": "modified", "Loc": {"('RequestsTestCase', None, 59)": {"add": [1064]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["requests/models.py"], "doc": [], "test": ["test_requests.py"], "config": [], "asset": []}}, {"organization": "psf", "repo_name": "requests", "base_commit": "1c52d15d9772e459add567cbdc9d38a284a8d939", "iss_has_pr": 1, "iss_html_url": "https://github.com/psf/requests/issues/1882", "iss_label": "", "title": "ResourceWarning in python 3.2+", "body": "Requests issues a ResourceWarning in python 3.2+ as sockets are not explicitly closed before garbage collection occurs. 
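The usual mitigation (a sketch, not an official fix) is to scope connections to a ``Session`` and close it explicitly, instead of relying on garbage collection of the implicit pool behind ``requests.get()``:

```python
import requests

def make_request():
    # Closing the Session tears down its connection pool deterministically,
    # instead of leaving open sockets for the garbage collector.
    with requests.Session() as session:
        return session.get('http://google.com')

resp = make_request()
print(resp.status_code)
```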
While ResourceWarnings are not displayed by default, it can be a distraction to some developers when working with warnings enabled.\n\nFile: test.py\n\n``` python\nimport requests\n\ndef make_request():\n resp = requests.get('http://google.com')\n resp.close() # this appears to have no effect, even though the function exists\n\nmake_request()\n```\n\n```\n$ python -Wall test.py \ntest.py:7: ResourceWarning: unclosed <socket.socket object, fd=4, family=2, type=1, proto=6>\n make_request()\ntest.py:7: ResourceWarning: unclosed <socket.socket object, fd=3, family=2, type=1, proto=6>\n make_request()\n```\n\nIt would be great if there was a way to prevent the ResourceWarning from occurring, without issuing a `Connection:close` header.\n", "pr_html_url": "https://github.com/psf/requests/pull/2326", "file_loc": {"base_commit": "1c52d15d9772e459add567cbdc9d38a284a8d939", "files": [{"path": "requests/api.py", "status": "modified", "Loc": {"(None, 'request', 17)": {"mod": [49]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["requests/api.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "psf", "repo_name": "requests", "base_commit": "be62645dd56580dd7576032b348cf79d880851d8", "iss_has_pr": 1, "iss_html_url": "https://github.com/psf/requests/issues/1208", "iss_label": "", "title": "Not possible to specify max_retries in v1.X?", "body": "In older versions of requests (pre v1.0), I was able to do:\n\n```\nrequests.get('http://nonexistentdomainfoobar.com', config={\"max_retries\":10})\n```\n\nas far as I can tell, this isn't possible in v.1.0+. `HTTPAdapter.max_retries` uses `DEFAULT_RETRIES` and there's no way to change this.\n\nWould it be possible to restore this feature? 
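For context, the route later releases settled on is a transport adapter mounted on a ``Session``; a sketch:

```python
import requests
from requests.adapters import HTTPAdapter

# Rough equivalent of the old config={"max_retries": 10}.
session = requests.Session()
adapter = HTTPAdapter(max_retries=10)
session.mount('http://', adapter)
session.mount('https://', adapter)
# session.get('http://nonexistentdomainfoobar.com')  # now retried 10 times
```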
If not, perhaps a note in the FAQ informing users that this isn't possible and they'll have to write a loop themselves?\n", "pr_html_url": "https://github.com/psf/requests/pull/1219", "file_loc": {"base_commit": "be62645dd56580dd7576032b348cf79d880851d8", "files": [{"path": "AUTHORS.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [123]}}}, {"path": "requests/adapters.py", "status": "modified", "Loc": {"('HTTPAdapter', '__init__', 47)": {"mod": [48]}, "('HTTPAdapter', None, 45)": {"mod": [169]}, "('HTTPAdapter', 'send', 169)": {"mod": [191]}}}, {"path": "requests/api.py", "status": "modified", "Loc": {"(None, 'request', 17)": {"add": [29]}}}, {"path": "requests/sessions.py", "status": "modified", "Loc": {"('SessionRedirectMixin', 'resolve_redirects', 82)": {"add": [151], "mod": [83]}, "('Session', 'request', 232)": {"add": [239, 306]}, "('Session', 'send', 389)": {"add": [401], "mod": [422]}}}, {"path": "test_requests.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [356]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["requests/sessions.py", "requests/adapters.py", "requests/api.py"], "doc": ["AUTHORS.rst"], "test": ["test_requests.py"], "config": [], "asset": []}}, {"organization": "psf", "repo_name": "requests", "base_commit": "1642996798416efaca754e4678506502e4c4c1f3", "iss_has_pr": 1, "iss_html_url": "https://github.com/psf/requests/issues/1228", "iss_label": "", "title": "Problem with missing cookies after redirect", "body": "I sent this by e-mail - no response. I think this might be of interest to others:\n\n> I have a problem when connecting to a site. Here's the scenario:\n> \n> 1) I enter a login page, which has a form\n> 2) I send (using Requests) a POST with the username, pw, etc.\n> (This POST includes the SESSIONID)\n> 3) The webpage with a 302,\n> 4) To which requests does automatically a GET to the new address\n> 5) In Firefox, this works, In Requests, I get redirected to the\n> login - page (with another 302).\n> \n> The only important difference I can detect is that in point 4),\n> Firefox repeats automatically the SESSION ID, which Requests does\n> not do. Can I enable this?\n\nI solved the problem by disabling automatic redirects, and creating\na new request manually, with the sessionid cookie. Now the process\nruns successfully. 
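A sketch of that manual workaround with the current API (the login URL and form fields are invented for illustration):

```python
import requests

session = requests.Session()
resp = session.post(
    'https://example.com/login',              # hypothetical login endpoint
    data={'username': 'u', 'password': 'p'},  # hypothetical form fields
    allow_redirects=False,                    # take over step 4 ourselves
)
if resp.status_code == 302:
    # Repeat the session cookie on the follow-up GET, as Firefox does.
    session.get(resp.headers['Location'], cookies=resp.cookies)
```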
\n\nThis confirms the necessity of repeating the cookie in the \nrequest after the 302, but it defeats the 'neatness' of the auto\nredirects.\n\nCheers,\nJohn\n", "pr_html_url": "https://github.com/psf/requests/pull/1239", "file_loc": {"base_commit": "1642996798416efaca754e4678506502e4c4c1f3", "files": [{"path": "requests/sessions.py", "status": "modified", "Loc": {"('SessionRedirectMixin', 'resolve_redirects', 82)": {"mod": [93]}}}, {"path": "test_requests.py", "status": "modified", "Loc": {"('RequestsTestCase', None, 29)": {"add": [120]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["requests/sessions.py"], "doc": [], "test": ["test_requests.py"], "config": [], "asset": []}}, {"organization": "psf", "repo_name": "requests", "base_commit": "4683f169909857d663275346655975af7190fd62", "iss_has_pr": 1, "iss_html_url": "https://github.com/psf/requests/issues/1979", "iss_label": "", "title": "Authentication Handlers lost on redirect.", "body": "I'm trying to use the requests library when making a redirection with the Digest authentication method, but the response is 401. I mention that it works with basic authentication. I've captured the packets with wireshark, and noticed that the first HTTP request is without the Authorization header, the 401 Unauthorized answer is received, and after that the traffic continues as it should: the Authorization header is added, the 302 answer is received, and after that the HTTPS cipher exchange follows. I don't know why the requests.send method returns 401.\n", "pr_html_url": "https://github.com/psf/requests/pull/2253", "file_loc": {"base_commit": "a718a81d273503bd2ffae8e6cb036a8516eb426a", "files": [{"path": "requests/auth.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [19]}, "('HTTPDigestAuth', None, 60)": {"add": [152]}, "('HTTPDigestAuth', '__call__', 188)": {"add": [196]}, "('HTTPDigestAuth', 'handle_401', 153)": {"mod": [185]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["requests/auth.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "psf", "repo_name": "requests", "base_commit": "1c2022cf868cb503815f34901ad8e85cf524d01a", "iss_has_pr": 1, "iss_html_url": "https://github.com/psf/requests/issues/4239", "iss_label": "Feature Request\nContributor Friendly", "title": "Add header name to InvalidHeader exception message", "body": "requests.get('http://example.com', headers={'foo': 1})\r\nrequests.exceptions.InvalidHeader: Header value 1 must be of type str or bytes, not <class 'int'>\r\n\r\nIt would be good to add the name of the bad header to make it easier\r\nto track this down in large bodies of code. Something like:\r\n\r\nrequests.exceptions.InvalidHeader: Header foo value 1 must be of type str or bytes, not <class 'int'>\r\n\r\nThanks.\r\n\r\nSummary.\r\n\r\n## Expected Result\r\n\r\nWhat you expected.\r\n\r\n## Actual Result\r\n\r\nWhat happened instead.\r\n\r\n## Reproduction Steps\r\n\r\n```python\r\nimport requests\r\n\r\n```\r\n\r\n## System Information\r\n\r\n $ python -m requests.help\r\n\r\n```\r\n<paste here>\r\n```\r\n\r\nThis command is only available on Requests v2.16.4 and greater. 
Otherwise,\r\nplease provide some basic information about your system (Python version,\r\noperating system, &c).", "pr_html_url": "https://github.com/psf/requests/pull/4240", "file_loc": {"base_commit": "1c2022cf868cb503815f34901ad8e85cf524d01a", "files": [{"path": "HISTORY.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [9]}}}, {"path": "requests/utils.py", "status": "modified", "Loc": {"(None, 'check_header_validity', 854)": {"mod": [871, 872]}}}, {"path": "tests/test_requests.py", "status": "modified", "Loc": {"('TestRequests', 'test_header_value_not_str', 1395)": {"add": [1405, 1408, 1411], "mod": [1404, 1407, 1410]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["requests/utils.py"], "doc": ["HISTORY.rst"], "test": ["tests/test_requests.py"], "config": [], "asset": []}}, {"organization": "psf", "repo_name": "requests", "base_commit": "0192aac24123735b3eaf9b08df46429bb770c283", "iss_has_pr": 1, "iss_html_url": "https://github.com/psf/requests/issues/2876", "iss_label": "Needs BDFL Input\nPropose Close", "title": "Exception messages", "body": "As a user I would like it to be easy to generate simple helpful messages upon an exception. A common way this is done is to simply cast the exception to a string. However, with requests, the result is often something you don't want to show an end user. For example:\n\n``` python\n try:\n downloaded = requests.get(url)\n except (requests.Timeout) as err:\n print(str(err))\n```\n\nResults in the following message to the user:\n\n```\n HTTPSConnectionPool(host='cal.example.com', port=443): Max retries exceeded with url: /ken/ken.ics/00832974-ffb3-42ea-ba3e-84ba3c0a30f6.ics (Caused by ConnectTimeoutError(<requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7fd4644ef400>, 'Connection to cal.example.com timed out. (connect timeout=0.1)'))\n```\n\nThere is useful information in this message, but it is not easily accessible to users and is rather intimidating for end users. The information is probably available in the exception itself, but it is not clear how to get it. Also, it seems like accessing it would likely be different for each type of exception, which greatly increases the complexity of catching and reporting exceptions.\n\nWhat I would expect is something like:\n\n```\n Connection to cal.example.com timed out.\n```\n\nIt would be very helpful if there were an easy way to generate user-friendly error messages from requests exceptions. If there is such a way, I have not been able to find it. Thus, I suggest it be added to the otherwise excellent introduction to requests. 
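Lacking such an accessor, one workable pattern (my own sketch, not from the docs) is to catch the concrete exception classes and emit a short message of your own; the URL is the hypothetical one from the example above:

```python
import requests

url = 'https://cal.example.com/ken/ken.ics'   # hypothetical, as above
try:
    downloaded = requests.get(url, timeout=0.1)
except requests.Timeout:
    print('Connection to cal.example.com timed out.')
except requests.ConnectionError:
    print('Could not connect to cal.example.com.')
```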
If there is no such way, I would like to suggest that it be added.\n", "pr_html_url": "https://github.com/certbot/certbot/pull/4733", "file_loc": {"base_commit": "0192aac24123735b3eaf9b08df46429bb770c283", "files": [{"path": "requests/sessions.py", "status": "modified", "Loc": {"('Session', 'prepare_request', 417)": {"mod": [423]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["requests/sessions.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "psf", "repo_name": "requests", "base_commit": "e23bf10cf4ecc62f6c3dd6284043516fb833d9ce", "iss_has_pr": 1, "iss_html_url": "https://github.com/psf/requests/issues/2411", "iss_label": "", "title": "Requests 2.5.1 doesn't recognize unicode filenames for uploads", "body": "After merge of https://github.com/kennethreitz/requests/pull/2379, to allow filenames to be `int` types, unicode filenames are no longer recognized under Python 2. \n\nThis checks that the filename is a `builtin` `str`, which has different behaviour on Python 2 and Python 3:\n`requests/utils.py:118: if name and isinstance(name, builtin_str) and name[0] != '<' and name[-1] != '>':`\n\nIn `requests/compat.py`, `builtin_str` is defined as `str`, which is non-unicode `bytes` in Python 2 and unicode in Python 3. Perhaps the check should be against basestring, or is this change in behaviour intended?\n", "pr_html_url": "https://github.com/psf/requests/pull/2413", "file_loc": {"base_commit": "d2d576b6b1101e2871c82f63adf2c2b534c2dabc", "files": [{"path": "requests/compat.py", "status": "modified", "Loc": {}}, {"path": "requests/utils.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [28]}, "(None, 'guess_filename', 115)": {"mod": [118]}}}, {"path": "test_requests.py", "status": "modified", "Loc": {"('UtilsTestCase', None, 1223)": {"add": [1267]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["requests/utils.py", "requests/compat.py"], "doc": [], "test": ["test_requests.py"], "config": [], "asset": []}}, {"organization": "psf", "repo_name": "requests", "base_commit": "9473f15909fb3f2329247812e0d3c661421ceafc", "iss_has_pr": 1, "iss_html_url": "https://github.com/psf/requests/issues/1397", "iss_label": "Bug", "title": "bug report", "body": "Dear Kenneth Reitz,\n\nI use your Requests library, which is quite cool. 
I ran into some issues like httplib uncaught exceptions\nwhich (i think) should be handled by Requests.\n## Consider the following code:\n\nimport requests\n## r = requests.get('http://www.bilhetos.com')\n\nIt raises 'httplib.IncompleteRead' exception which is not handled properly in Requests.\n\nPlease consider urls below for testing:\nhttp://www.tusseymountaintitans.com\nhttp://www.abbottpanthers.com\nhttp://www.spanishmoms.com\nhttp://www.long-island-storage.com\nhttp://www.cupertinohelpwanted.com\nhttp://www.hoffmanestateshawks.com\nhttp://www.brothermartincrusaders.com\nhttp://www.1-800-printer.com\nhttp://www.impiretickets.com\nhttp://www.gdickinson.com\nhttp://www.forensicsline.com\nhttp://www.gardeningtime.com\nhttp://www.ecollegetennis.com\nhttp://www.milacasaints.com\nhttp://www.bartoninsuranceagency.com\nhttp://www.djnatural.com\nhttp://www.containers2000.com\nhttp://www.indiancreektimberwolves.com\nhttp://www.athenswarriors.com\nhttp://www.logansportcats.com\nhttp://www.osani.com\nhttp://www.xn--sammler-brse-djb.com\nhttp://www.800usahealth.com\nhttp://www.wealth-wise.com\nhttp://www.foothillmustangs.com\nhttp://www.manasquanbigblue.com\nhttp://www.bilhetos.com\nhttp://www.atlantahomesteam.com\nhttp://www.foxcitiessatellite.com\nhttp://www.chargersmail.com\nhttp://www.fighterplace.com\n\nBest regards,\nVladimir Goncharov\n", "pr_html_url": "https://github.com/psf/requests/pull/1498", "file_loc": {"base_commit": "9473f15909fb3f2329247812e0d3c661421ceafc", "files": [{"path": "requests/compat.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [92, 107]}}}, {"path": "requests/exceptions.py", "status": "modified", "Loc": {"('InvalidURL', None, 54)": {"add": [55]}}}, {"path": "requests/models.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [22, 29]}, "('Response', 'generate', 547)": {"mod": [550, 551]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["requests/exceptions.py", "requests/models.py", "requests/compat.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ansible", "repo_name": "ansible", "base_commit": "414aae70b160a9eaff55c4314d339305cb33c6e9", "iss_has_pr": 1, "iss_html_url": "https://github.com/ansible/ansible/issues/41299", "iss_label": "networking\nperformance\nmodule\nsupport:community\nbug\nmeraki\naffects_2.7\ncisco", "title": "Meraki_admin doesn\u2019t always use org_id and net_id", "body": "##### SUMMARY\r\n`org_id` and `net_id` can be provided to improve playbook execution performance since less lookups are needed. 
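Schematically, the requested behaviour inside the module looks like this (my own sketch; ``lookup_org_id_by_name`` stands in for the module's real lookup helper, which I am not quoting):

```python
def resolve_org_id(params, lookup_org_id_by_name):
    """Return the org id without an API call when the caller supplied it."""
    if params.get('org_id'):
        return params['org_id']           # trust the caller; no extra lookup
    return lookup_org_id_by_name(params['org_name'])  # hypothetical helper
```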
`org_id` and `net_id` should be used within the module, when possible, to avoid unnecessary API calls.\r\n\r\n##### ISSUE TYPE\r\n - Bug Report\r\n\r\n##### COMPONENT NAME\r\nmeraki_admin\r\n\r\n##### ANSIBLE VERSION\r\n<!--- Paste, BELOW THIS COMMENT, verbatim output from \"ansible --version\" between quotes below -->\r\n```\r\nansible 2.7.0.dev0 (meraki/meraki_device 387c37e255) last updated 2018/06/06 20:11:36 (GMT -500)\r\n config file = None\r\n configured module search path = ['/Users/kbreit/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']\r\n ansible python module location = /Users/kbreit/Documents/Programming/ansible/lib/ansible\r\n executable location = /Users/kbreit/Documents/Programming/ansible/bin/ansible\r\n python version = 3.5.4 (default, Feb 25 2018, 14:56:02) [GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.39.2)]\r\n```\r\n\r\n##### CONFIGURATION\r\n<!--- If using Ansible 2.4 or above, paste, BELOW THIS COMMENT, the results of \"ansible-config dump --only-changed\"\r\nOtherwise, mention any settings you have changed/added/removed in ansible.cfg\r\n(or using the ANSIBLE_* environment variables).-->", "pr_html_url": "https://github.com/ansible/ansible/pull/41518", "file_loc": {"base_commit": "414aae70b160a9eaff55c4314d339305cb33c6e9", "files": [{"path": "lib/ansible/modules/network/meraki/meraki_admin.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [71, 77, 83, 89, 97, 105], "mod": [87]}, "(None, 'get_admin_id', 174)": {"mod": [174, 179]}, "(None, 'main', 274)": {"mod": [349, 355, 360, 374]}}}, {"path": "test/integration/targets/meraki_admin/tasks/main.yml", "status": "modified", "Loc": {"(None, None, None)": {"add": [22]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["lib/ansible/modules/network/meraki/meraki_admin.py"], "doc": [], "test": [], "config": ["test/integration/targets/meraki_admin/tasks/main.yml"], "asset": []}}, {"organization": "ansible", "repo_name": "ansible", "base_commit": "707458cc8cc78f5162d6ee76d01fc112499313be", "iss_has_pr": 1, "iss_html_url": "https://github.com/ansible/ansible/issues/69678", "iss_label": "support:core\nbug\nhas_pr\nP3\naffects_2.10", "title": "constants.py: functions and constants deprecated, to be removed in 2.8 resp. 
2.10", "body": "##### SUMMARY\r\nlib/ansible/constants.py has its own deprecation mechanism:\r\nhttps://github.com/ansible/ansible/blob/devel/lib/ansible/constants.py#L32-L39\r\n\r\nThe following functions were supposed to be removed in 2.8:\r\n- `mk_boolean` https://github.com/ansible/ansible/blob/devel/lib/ansible/constants.py#L42\r\n- `get_config` https://github.com/ansible/ansible/blob/devel/lib/ansible/constants.py#L48\r\n\r\nThe following constant was supposed to be removed in 2.10:\r\n- `BECOME_METHODS` https://github.com/ansible/ansible/blob/devel/lib/ansible/constants.py#L89\r\n\r\n##### ISSUE TYPE\r\n- Bug Report\r\n\r\n##### COMPONENT NAME\r\nlib/ansible/constants.py\r\n\r\n##### ANSIBLE VERSION\r\n```paste below\r\ndevel\r\n```\r\n", "pr_html_url": "https://github.com/ansible/ansible/pull/70466", "file_loc": {"base_commit": "707458cc8cc78f5162d6ee76d01fc112499313be", "files": [{"path": "lib/ansible/constants.py", "status": "modified", "Loc": {"(None, 'mk_boolean', 42)": {"mod": [42, 43, 44, 45, 48, 49, 50, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 63, 65]}, "(None, None, None)": {"mod": [88, 89, 90, 91, 92, 93, 94, 95]}}}, {"path": "test/units/test_constants.py", "status": "modified", "Loc": {"('TestMkBoolean', None, 97)": {"mod": [97, 98, 99, 100, 102, 103, 105, 106, 107, 108, 110, 111, 112, 113, 114, 116, 117, 118, 119, 120, 121, 122]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["lib/ansible/constants.py"], "doc": [], "test": ["test/units/test_constants.py"], "config": [], "asset": []}}, {"organization": "ansible", "repo_name": "ansible", "base_commit": "3e6c76fc2e6157487a254d42feb17c9673dd4987", "iss_has_pr": 1, "iss_html_url": "https://github.com/ansible/ansible/issues/40903", "iss_label": "cloud\nopenstack\nc:inventory/contrib_script\ninventory\nsupport:core\naffects_2.5\nbug\ntraceback", "title": "OpenStack Inventory doesn't work when multiple clouds defined", "body": "<!---\r\nVerify first that your issue/request is not already reported on GitHub.\r\nTHIS FORM WILL BE READ BY A MACHINE, COMPLETE ALL SECTIONS AS DESCRIBED.\r\nAlso test if the latest release, and devel branch are affected too.\r\nALWAYS add information AFTER (OUTSIDE) these html comments.\r\nOtherwise it may end up being automatically closed by our bot. -->\r\n\r\n##### SUMMARY\r\n<!--- Explain the problem briefly -->\r\nWhen more than 1 cloud is configured in `clouds.yaml`, OpenStack inventory errors \r\n\r\n##### ISSUE TYPE\r\n - Bug Report\r\n\r\n##### COMPONENT NAME\r\n<!--- Insert, BELOW THIS COMMENT, the name of the module, plugin, task or feature.\r\nDo not include extra details here, e.g. 
\"vyos_command\" not \"the network module vyos_command\" or the full path-->\r\ncontrib/inventory/openstack_inventory.py\r\n\r\n##### ANSIBLE VERSION\r\n<!--- Paste, BELOW THIS COMMENT, verbatim output from \"ansible --version\" between quotes below -->\r\n```\r\nansible 2.5.2\r\n config file = None\r\n configured module search path = [u'/home/ubuntu/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']\r\n ansible python module location = /home/ubuntu/.local/lib/python2.7/site-packages/ansible\r\n executable location = /home/ubuntu/.local/bin/ansible\r\n python version = 2.7.12 (default, Dec 4 2017, 14:50:18) [GCC 5.4.0 20160609]\r\n```\r\nInventory version: latest from devel branch (Ansible 2.6 version)\r\n##### CONFIGURATION\r\n\r\n<!--- If using Ansible 2.4 or above, paste, BELOW THIS COMMENT, the results of \"ansible-config dump --only-changed\"\r\nOtherwise, mention any settings you have changed/added/removed in ansible.cfg\r\n(or using the ANSIBLE_* environment variables).-->\r\nUsing Ansible defaults\r\n\r\n##### OS / ENVIRONMENT\r\n<!--- Mention, BELOW THIS COMMENT, the OS you are running Ansible from, and the OS you are\r\nmanaging, or say \"N/A\" for anything that is not platform-specific.\r\nAlso mention the specific version of what you are trying to control,\r\ne.g. if this is a network bug the version of firmware on the network device.-->\r\nUbuntu 16.04 64-bit\r\n\r\n##### STEPS TO REPRODUCE\r\n<!--- For bugs, show exactly how to reproduce the problem, using a minimal test-case.\r\nFor new features, show how the feature would be used. -->\r\n\r\n<!--- Paste example playbooks or commands between quotes below -->\r\nclouds.yaml file:\r\n```yaml\r\nclouds:\r\n test:\r\n auth:\r\n auth_url: %AUTHURL%\r\n username: fakeusername\r\n password: fakepassword\r\n project_name: fakeproject\r\n test2:\r\n auth:\r\n auth_url: %AUTHURL%\r\n username: fakeusername\r\n password: fakepassword\r\n project_name: fakeproject\r\n```\r\nThen running the inventory:\r\n```\r\n./openstack_inventory.py --list\r\n```\r\n<!--- You can also paste gist.github.com links for larger files -->\r\n\r\n##### EXPECTED RESULTS\r\n<!--- What did you expect to happen when running the steps above? -->\r\nI expect it to aggregate all the cloud inventory into one continuous inventory.\r\n\r\n##### ACTUAL RESULTS\r\n<!--- What actually happened? 
If possible run with extra verbosity (-vvvv) -->\r\n\r\n<!--- Paste verbatim command output between quotes below -->\r\n```\r\n$ ./openstack_inventory.py --list\r\nTraceback (most recent call last):\r\n File \"./openstack_inventory.py\", line 265, in <module>\r\n main()\r\n File \"./openstack_inventory.py\", line 254, in main\r\n output = get_host_groups(inventory, refresh=args.refresh, cloud=args.cloud)\r\n File \"./openstack_inventory.py\", line 118, in get_host_groups\r\n (cache_file, cache_expiration_time) = get_cache_settings(cloud)\r\n File \"./openstack_inventory.py\", line 195, in get_cache_settings\r\n config_files=cloud_config.CONFIG_FILES + CONFIG_FILES).get_one()\r\n File \"/home/ubuntu/.local/lib/python2.7/site-packages/openstack/config/loader.py\", line 1096, in get_one\r\n auth_plugin = loader.load_from_options(**config['auth'])\r\n File \"/home/ubuntu/.local/lib/python2.7/site-packages/keystoneauth1/loading/base.py\", line 162, in load_from_options\r\n raise exceptions.MissingRequiredOptions(missing_required)\r\nkeystoneauth1.exceptions.auth_plugins.MissingRequiredOptions: Auth plugin requires parameters which were not given: auth_url\r\n```\r\nWhen I remove the second `test2` cloud, the inventory works as expected", "pr_html_url": "https://github.com/ansible/ansible/pull/41664", "file_loc": {"base_commit": "3e6c76fc2e6157487a254d42feb17c9673dd4987", "files": [{"path": "contrib/inventory/openstack_inventory.py", "status": "modified", "Loc": {"(None, 'get_cache_settings', 193)": {"mod": [195, 196]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["contrib/inventory/openstack_inventory.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ansible", "repo_name": "ansible", "base_commit": "81308e8b22c0d49e9ed27434d15ce4b0d984136c", "iss_has_pr": 1, "iss_html_url": "https://github.com/ansible/ansible/issues/34855", "iss_label": "cloud\nmodule\ndocker\naffects_2.4\nsupport:community\nfeature", "title": "docker_network does not support ipv6 networks", "body": "<!---\r\nVerify first that your issue/request is not already reported on GitHub.\r\nAlso test if the latest release, and master branch are affected too.\r\n-->\r\n\r\n##### ISSUE TYPE\r\n<!--- Pick one below and delete the rest -->\r\n - Feature Idea\r\n\r\n##### COMPONENT NAME\r\ndocker_network\r\n\r\n##### ANSIBLE VERSION\r\n<!--- Paste verbatim output from \"ansible --version\" between quotes below -->\r\n```\r\nansible 2.4.2.0\r\n config file = /etc/ansible/ansible.cfg\r\n configured module search path = ['/home/sm/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']\r\n ansible python module location = /usr/local/lib/python3.6/dist-packages/ansible\r\n executable location = /usr/local/bin/ansible\r\n python version = 3.6.3 (default, Oct 3 2017, 21:45:48) [GCC 7.2.0]\r\n```\r\n\r\n##### SUMMARY\r\nThe docker_network module does not support defining an ipv6 network. There is no `enable_ipv6` parameter. Furthermore, a new strategy must be chosen to allow defining custom ipv4 and ipv6 options.\r\nAt the moment an ipv6 subnet could be defined with\r\n```yml\r\n- name: Create ipv6 network\r\n docker_network:\r\n name: ipv6\r\n ipam_options:\r\n subnet: 'a:b:c:d::/80'\r\n```\r\nbut without setting `enable_ipv6`, containers don't get an ipv6 address. 
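For comparison, the plain docker SDK (docker-py) already exposes both knobs; the subnets below are this report's own examples, and this is the SDK, not the Ansible module:

```python
import docker
from docker.types import IPAMConfig, IPAMPool

client = docker.from_env()
client.networks.create(
    'ipv6',
    enable_ipv6=True,                  # the flag the module lacks
    ipam=IPAMConfig(pool_configs=[     # multiple pools: one v4, one v6
        IPAMPool(subnet='172.3.26.0/16', gateway='172.3.26.1'),
        IPAMPool(subnet='a:b:c:d::/80'),
    ]),
)
```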
Furthermore, if the task definition does not change on further runs, the task outputs that the network changed, because it does not expect an ipv6 subnet.\r\n\r\nTo implement ipv6 network definitions, two changes are required.\r\nFirst, a parameter to enable ipv6 must be introduced. Maybe with the name `enable_ipv6` which is not required and defaults to no.\r\nSecond, the `ipam_options` directive must be extended to allow multiple config entries. I would suggest a list:\r\n```yml\r\n- name: Create ipv6 network\r\n docker_network:\r\n name: ipv6\r\n enable_ipv6: yes\r\n ipam_options:\r\n - subnet: '172.3.26.0/16'\r\n gateway: 172.3.26.1\r\n - subnet: 'a:b:c:d::/80'\r\n```\r\n\r\n\r\n\r\n", "pr_html_url": "https://github.com/ansible/ansible/pull/47492", "file_loc": {"base_commit": "81308e8b22c0d49e9ed27434d15ce4b0d984136c", "files": [{"path": "changelogs/fragments/35370-add_support_for_docker_network_internal_flag.yaml", "status": "modified", "Loc": {"(None, None, None)": {"mod": [4]}}}, {"path": "lib/ansible/modules/cloud/docker/docker_network.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [64, 71, 168, 204], "mod": [107, 116, 144, 149, 150, 151, 152]}, "('TaskParameters', '__init__', 182)": {"add": [191, 195]}, "('DockerNetworkManager', '__init__', 207)": {"add": [221]}, "('DockerNetworkManager', 'create_network', 290)": {"add": [291], "mod": [293, 294, 295, 296, 297, 300, 301, 303, 304, 307, 308, 309, 310, 311]}, "(None, 'main', 387)": {"add": [401, 403], "mod": [396, 397, 398, 405, 406]}, "('DockerNetworkManager', 'has_different_config', 234)": {"mod": [260, 261, 263, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["lib/ansible/modules/cloud/docker/docker_network.py"], "doc": ["changelogs/fragments/35370-add_support_for_docker_network_internal_flag.yaml"], "test": [], "config": [], "asset": []}}, {"organization": "ansible", "repo_name": "ansible", "base_commit": "f7d7890df93393b3364fe40c4d8a65c76610c4db", "iss_has_pr": 1, "iss_html_url": "https://github.com/ansible/ansible/issues/81294", "iss_label": "module\nbug\nhas_pr\naffects_2.16", "title": "Gathering facts fails on a remote macOS host", "body": "### Summary\n\nWhen I try to run my playbook against a macOS host, the implicit facts gathering task fails because the non-interactive shell has nothing in its PATH, and ansible is trying to call 'sysctl hw.model'\r\n\r\nSee \u200elib/ansible/module_utils/facts/hardware/darwin.py\u200e line 71\r\n\r\nI suggest using the full path like so: '/usr/sbin/sysctl hw.model'\n\n### Issue Type\n\nBug Report\n\n### Component Name\n\nfacts\n\n### Ansible Version\n\n```console\n$ ansible --version\r\nansible [core 2.13.10]\r\n config file = None\r\n configured module search path = ['/Users/avivpeled/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']\r\n ansible python module location = /opt/homebrew/lib/python3.11/site-packages/ansible\r\n ansible collection location = /Users/avivpeled/.ansible/collections:/usr/share/ansible/collections\r\n executable location = /opt/homebrew/bin/ansible\r\n python version = 3.11.4 (main, Jun 15 2023, 07:55:38) [Clang 14.0.3 (clang-1403.0.22.14.1)]\r\n jinja version = 3.1.2\r\n libyaml = True\n```\n\n\n### Configuration\n\n```console\n# if using a version older than ansible-core 2.12 you should omit the 
'-t all'\r\n$ ansible-config dump --only-changed -t all\n```\n\n\n### OS / Environment\n\nRunning ansible on macOS ventura, target host is macOS monterey\n\n### Steps to Reproduce\n\n<!--- Paste example playbooks or commands between quotes below -->\r\n```yaml (paste below)\r\n---\r\n- name: Test\r\n hosts: macos\r\n\r\n tasks:\r\n - name: Print Gathered Facts\r\n debug:\r\n var: ansible_facts\r\n```\r\n\n\n### Expected Results\n\nI expect to see the list of collected facts\n\n### Actual Results\n\n```console\nPLAY [Test] *******************************************************************************************************************************************************************\r\n\r\nTASK [Gathering Facts] ********************************************************************************************************************************************************\r\n[WARNING]: Module invocation had junk after the JSON data: exit status 1\r\nfatal: [mac-mini-intel-04]: FAILED! => {\"ansible_facts\": {}, \"changed\": false, \"failed_modules\": {\"ansible.legacy.setup\": {\"cmd\": \"sysctl hw.model\", \"failed\": true, \"invocation\": {\"module_args\": {\"fact_path\": \"/etc/ansible/facts.d\", \"filter\": [], \"gather_subset\": [\"all\"], \"gather_timeout\": 10}}, \"msg\": \"[Errno 2] No such file or directory: b'sysctl'\", \"rc\": 2, \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}}, \"msg\": \"The following modules failed to execute: ansible.legacy.setup\\n\"}\n```\n\n\n### Code of Conduct\n\n- [X] I agree to follow the Ansible Code of Conduct", "pr_html_url": "https://github.com/ansible/ansible/pull/81297", "file_loc": {"base_commit": "f7d7890df93393b3364fe40c4d8a65c76610c4db", "files": [{"path": "lib/ansible/module_utils/basic.py", "status": "modified", "Loc": {"('AnsibleModule', None, 360)": {"mod": [1351]}, "('AnsibleModule', 'get_bin_path', 1351)": {"mod": [1356, 1358, 1367, 1368]}}}, {"path": "lib/ansible/module_utils/common/process.py", "status": "modified", "Loc": {"(None, 'get_bin_path', 12)": {"add": [29, 36, 42, 47], "mod": [15, 16, 17, 18, 21, 32, 33, 38, 39]}}}, {"path": "lib/ansible/module_utils/facts/hardware/aix.py", "status": "modified", "Loc": {"('AIXHardware', 'get_dmi_facts', 126)": {"mod": [132]}, "('AIXHardware', 'get_vgs_facts', 146)": {"mod": [163, 164]}, "('AIXHardware', 'get_mount_facts', 188)": {"mod": [197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 219, 220, 221, 222, 223, 225]}, "('AIXHardware', 'get_device_facts', 231)": {"mod": [235, 236, 237, 239, 240, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 254, 255, 256, 257, 258]}}}, {"path": "lib/ansible/module_utils/facts/hardware/darwin.py", "status": "modified", "Loc": {"('DarwinHardware', 'get_memory_facts', 89)": {"mod": [97, 101, 102, 103, 104, 105, 106, 108, 109, 111, 112, 113, 114, 115, 116, 117, 119, 120, 121, 122, 123, 124, 126]}, "('DarwinHardware', 'get_uptime_facts', 130)": {"mod": [133]}}}, {"path": "lib/ansible/module_utils/facts/hardware/freebsd.py", "status": "modified", "Loc": {}}, {"path": "lib/ansible/module_utils/facts/hardware/hpux.py", "status": "modified", "Loc": {"('HPUXHardware', 'populate', 40)": {"add": [42]}}}, {"path": "lib/ansible/module_utils/facts/hardware/netbsd.py", "status": "modified", "Loc": {"('NetBSDHardware', 'get_uptime_facts', 162)": {"mod": [164]}}}, {"path": "lib/ansible/module_utils/facts/hardware/openbsd.py", "status": "modified", "Loc": {"('OpenBSDHardware', 
'get_uptime_facts', 113)": {"mod": [115]}}}, {"path": "lib/ansible/module_utils/facts/hardware/sunos.py", "status": "modified", "Loc": {"('SunOSHardware', 'get_dmi_facts', 167)": {"mod": [175]}}}, {"path": "lib/ansible/module_utils/facts/network/aix.py", "status": "modified", "Loc": {"('AIXNetwork', 'get_default_interfaces', 31)": {"mod": [34, 36, 37, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48]}, "('AIXNetwork', 'get_interfaces_info', 53)": {"mod": [61, 62, 63]}}}, {"path": "lib/ansible/module_utils/facts/network/fc_wwn.py", "status": "modified", "Loc": {"('FcWwnInitiatorFactCollector', 'collect', 33)": {"mod": [50, 62, 63, 84, 85]}}}, {"path": "lib/ansible/module_utils/facts/network/generic_bsd.py", "status": "modified", "Loc": {"('GenericBsdIfconfigNetwork', 'populate', 35)": {"mod": [37, 42]}}}, {"path": "lib/ansible/module_utils/facts/network/hpux.py", "status": "modified", "Loc": {"('HPUXNetwork', 'populate', 30)": {"mod": [32]}, "('HPUXNetwork', 'get_default_interfaces', 47)": {"mod": [49]}, "('HPUXNetwork', 'get_interfaces_info', 60)": {"mod": [62]}}}, {"path": "lib/ansible/module_utils/facts/network/hurd.py", "status": "modified", "Loc": {"('HurdPfinetNetwork', 'populate', 63)": {"mod": [66]}}}, {"path": "lib/ansible/module_utils/facts/network/iscsi.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [24]}, "('IscsiInitiatorNetworkCollector', 'collect', 33)": {"mod": [83, 84, 85, 95, 96, 97, 98]}}}, {"path": "lib/ansible/module_utils/facts/other/facter.py", "status": "modified", "Loc": {"('FacterFactCollector', 'find_facter', 24)": {"mod": [25, 26]}, "('FacterFactCollector', 'collect', 58)": {"mod": [76, 77]}}}, {"path": "lib/ansible/module_utils/facts/other/ohai.py", "status": "modified", "Loc": {"('OhaiFactCollector', 'find_ohai', 38)": {"mod": [39, 40]}, "('OhaiFactCollector', None, 27)": {"mod": [42]}, "('OhaiFactCollector', 'collect', 57)": {"mod": [70, 71]}}}, {"path": "lib/ansible/module_utils/facts/sysctl.py", "status": "modified", "Loc": {"(None, 'get_sysctl', 23)": {"mod": [24, 25, 26, 30, 31, 32, 33, 34, 36, 37, 38, 39, 40, 41, 43, 44, 45, 46, 47, 48, 53, 54, 55, 56, 58, 59]}}}, {"path": "test/units/module_utils/facts/network/test_fc_wwn.py", "status": "modified", "Loc": {"(None, 'mock_get_bin_path', 92)": {"mod": [92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104]}, "(None, 'mock_run_command', 107)": {"mod": [109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119]}}}, {"path": "test/units/module_utils/facts/network/test_generic_bsd.py", "status": "modified", "Loc": {"(None, 'get_bin_path', 25)": {"mod": [25, 26, 27, 28, 29, 30]}}}, {"path": "test/units/module_utils/facts/network/test_iscsi_get_initiator.py", "status": "modified", "Loc": {"(None, 'test_get_iscsi_info', 39)": {"mod": [44, 50]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["lib/ansible/module_utils/facts/hardware/hpux.py", "lib/ansible/module_utils/basic.py", "lib/ansible/module_utils/facts/hardware/sunos.py", "lib/ansible/module_utils/facts/hardware/openbsd.py", "lib/ansible/module_utils/facts/network/hpux.py", "lib/ansible/module_utils/facts/other/facter.py", "lib/ansible/module_utils/facts/network/iscsi.py", "lib/ansible/module_utils/facts/sysctl.py", "lib/ansible/module_utils/facts/other/ohai.py", "lib/ansible/module_utils/common/process.py", "lib/ansible/module_utils/facts/network/hurd.py", 
"lib/ansible/module_utils/facts/hardware/freebsd.py", "lib/ansible/module_utils/facts/network/generic_bsd.py", "lib/ansible/module_utils/facts/hardware/darwin.py", "lib/ansible/module_utils/facts/network/aix.py", "lib/ansible/module_utils/facts/hardware/aix.py", "lib/ansible/module_utils/facts/hardware/netbsd.py", "lib/ansible/module_utils/facts/network/fc_wwn.py"], "doc": [], "test": ["test/units/module_utils/facts/network/test_generic_bsd.py", "test/units/module_utils/facts/network/test_fc_wwn.py", "test/units/module_utils/facts/network/test_iscsi_get_initiator.py"], "config": [], "asset": []}}, {"organization": "ansible", "repo_name": "ansible", "base_commit": "b5cffe8ced3c06c5c1542e37c382c74d5f61f3eb", "iss_has_pr": 1, "iss_html_url": "https://github.com/ansible/ansible/issues/39759", "iss_label": "networking\nmodule\nsupport:network\nnxos\nbug\naffects_2.6\ncisco", "title": "nxos_snmp_user issues", "body": "<!---\r\nVerify first that your issue/request is not already reported on GitHub.\r\nTHIS FORM WILL BE READ BY A MACHINE, COMPLETE ALL SECTIONS AS DESCRIBED.\r\nAlso test if the latest release, and devel branch are affected too.\r\nALWAYS add information AFTER (OUTSIDE) these html comments.\r\nOtherwise it may end up being automatically closed by our bot. -->\r\n\r\n##### ISSUE TYPE\r\n<!--- Pick one below and delete the rest -->\r\n - Bug Report\r\n\r\n##### COMPONENT NAME\r\n<!--- Insert, BELOW THIS COMMENT, the name of the module, plugin, task or feature.\r\nDo not include extra details here, e.g. \"vyos_command\" not \"the network module vyos_command\" or the full path-->\r\nnxos_snmp_user\r\n\r\n##### ANSIBLE VERSION\r\n<!--- Paste, BELOW THIS COMMENT, verbatim output from \"ansible --version\" between quotes below -->\r\n```\r\nansible 2.6.0 (devel fed20b825f) last updated 2018/02/15 12:51:12 (GMT -400)\r\n config file = /etc/ansible/ansible.cfg\r\n configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']\r\n ansible python module location = /root/agents-ci/ansible/lib/ansible\r\n executable location = /root/agents-ci/ansible/bin/ansible\r\n python version = 2.7.6 (default, Oct 26 2016, 20:30:19) [GCC 4.8.4]\r\n```\r\n\r\n##### OS / ENVIRONMENT\r\n<!--- Mention, BELOW THIS COMMENT, the OS you are running Ansible from, and the OS you are\r\nmanaging, or say \"N/A\" for anything that is not platform-specific.\r\nAlso mention the specific version of what you are trying to control,\r\ne.g. if this is a network bug the version of firmware on the network device.-->\r\nAnsible Server : Ubuntu 14.04\r\nDevice: N7K running 7.0(3)D1(1)\r\n##### SUMMARY\r\n<!--- Explain the problem briefly -->\r\n\r\n##### STEPS TO REPRODUCE\r\n<!--- For bugs, show exactly how to reproduce the problem, using a minimal test-case.\r\nFor new features, show how the feature would be used. -->\r\n\r\nThere are few issues with nxos_snmp_user module\r\n1. group is not a required parameter. When group is not specified, the platform does accept the CLI and assigns the default group (usually network-operator).\r\n2. more than one group cannot be added properly\r\n3. group cannot be removed after adding without removing the user itself.\r\n4. There are also platform bugs where the 'show snmp user | json' output is not consistent across older platforms and the code fails for these old platforms.\r\n5. 
dead code\r\n\r\nNote: I will open a PR shortly to address these issues.", "pr_html_url": "https://github.com/ansible/ansible/pull/39760", "file_loc": {"base_commit": "b5cffe8ced3c06c5c1542e37c382c74d5f61f3eb", "files": [{"path": "lib/ansible/modules/network/nxos/nxos_snmp_user.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [52, 55], "mod": [45]}, "(None, 'get_snmp_user', 124)": {"add": [168], "mod": [151, 152]}, "(None, 'config_snmp_user', 181)": {"add": [192], "mod": [181, 182, 187, 189, 191]}, "(None, 'remove_snmp_user', 177)": {"mod": [177, 178]}, "(None, 'main', 214)": {"mod": [217, 254, 255, 256, 258, 263, 266, 276, 288, 289, 294, 295]}}}, {"path": "test/integration/targets/nxos_snmp_user/tests/common/sanity.yaml", "status": "modified", "Loc": {"(None, None, None)": {"mod": [6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 17, 18, 19, 21, 24, 25, 26, 27, 28, 31, 33, 35, 36, 37, 39, 40, 41]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2\n4", "iss_reason": "1\n2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["lib/ansible/modules/network/nxos/nxos_snmp_user.py"], "doc": [], "test": [], "config": ["test/integration/targets/nxos_snmp_user/tests/common/sanity.yaml"], "asset": []}}, {"organization": "ansible", "repo_name": "ansible", "base_commit": "44b53141748d29220441e0799b54ea3130ac6753", "iss_has_pr": 1, "iss_html_url": "https://github.com/ansible/ansible/issues/78079", "iss_label": "support:core\nbug\nhas_pr\naffects_2.12", "title": "Password lookup with seed not idempotent", "body": "### Summary\n\nAccording to the [docs](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/password_lookup.html#parameter-seed), providing a seed should make the password lookup idempotent, but this does not appear to be the case.\r\n\r\n> Identical seeds will yield identical passwords.\r\n\r\n\n\n### Issue Type\n\nBug Report\n\n### Component Name\n\nansible.builtin.password\n\n### Ansible Version\n\n```console\n$ ansible --version\r\nansible [core 2.12.6]\r\n config file = None\r\n configured module search path = ['/Users/mike/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']\r\n ansible python module location = /Users/mike/Development/vagrant/.venv/lib/python3.8/site-packages/ansible\r\n ansible collection location = /Users/mike/.ansible/collections:/usr/share/ansible/collections\r\n executable location = /Users/mike/Development/vagrant/.venv/bin/ansible\r\n python version = 3.8.9 (default, Apr 13 2022, 08:48:07) [Clang 13.1.6 (clang-1316.0.21.2.5)]\r\n jinja version = 3.1.2\r\n libyaml = True\n```\n\n\n### Configuration\n\n```console\n# if using a version older than ansible-core 2.12 you should omit the '-t all'\r\n$ ansible-config dump --only-changed -t all\r\n\r\nBECOME:\r\n======\r\n\r\nCACHE:\r\n=====\r\n\r\nCALLBACK:\r\n========\r\n\r\nCLICONF:\r\n=======\r\n\r\nCONNECTION:\r\n==========\r\n\r\nHTTPAPI:\r\n=======\r\n\r\nINVENTORY:\r\n=========\r\n\r\nLOOKUP:\r\n======\r\n\r\nNETCONF:\r\n=======\r\n\r\nSHELL:\r\n=====\r\n\r\nVARS:\r\n====\n```\n\n\n### OS / Environment\n\nMacOS 12.4\n\n### Steps to Reproduce\n\n<!--- Paste example playbooks or commands between quotes below -->\r\n```shell\r\nfor i in {0..5}; do ansible -i /dev/null localhost -m debug -a 'msg={{ lookup(\"ansible.builtin.password\", \"/dev/null\", seed=\"foo\")}}'; done\r\n```\r\n\n\n### Expected Results\n\nThe same password should be produced each time\n\n### Actual 
Results\n\n```console\nDifferent password is produced each time:\r\n\r\nlocalhost | SUCCESS => {\r\n \"msg\": \"gvlUM1Mx27449Q5ga7QG\"\r\n}\r\nlocalhost | SUCCESS => {\r\n \"msg\": \"oyPZ8QPS-Y1aqgAccGAg\"\r\n}\r\nlocalhost | SUCCESS => {\r\n \"msg\": \"LeYqMugFDPr4tW7UBtDu\"\r\n}\r\nlocalhost | SUCCESS => {\r\n \"msg\": \"P.3Eaq3AUgBqvHzP3o_s\"\r\n}\r\nlocalhost | SUCCESS => {\r\n \"msg\": \":nFjSHte6H4Q20oGs,CC\"\r\n}\r\nlocalhost | SUCCESS => {\r\n \"msg\": \"6l7s1:vfXjMoOePMiXh,\"\r\n}\n```\n\n\n### Code of Conduct\n\n- [X] I agree to follow the Ansible Code of Conduct", "pr_html_url": "https://github.com/ansible/ansible/pull/78080", "file_loc": {"base_commit": "44b53141748d29220441e0799b54ea3130ac6753", "files": [{"path": "lib/ansible/plugins/lookup/password.py", "status": "modified", "Loc": {"(None, '_parse_parameters', 142)": {"add": [147], "mod": [142, 175, 176, 177, 178, 180]}, "(None, None, None)": {"mod": [127]}, "('LookupModule', 'run', 337)": {"mod": [341]}}}, {"path": "test/integration/targets/lookup_password/tasks/main.yml", "status": "modified", "Loc": {"(None, None, None)": {"add": [104]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["lib/ansible/plugins/lookup/password.py"], "doc": [], "test": [], "config": ["test/integration/targets/lookup_password/tasks/main.yml"], "asset": []}}, {"organization": "ansible", "repo_name": "ansible", "base_commit": "fc3cc73b73a39b0ab629ba76ac4f9ca65cc38eee", "iss_has_pr": 1, "iss_html_url": "https://github.com/ansible/ansible/issues/21893", "iss_label": "affects_2.2\nc:module_utils/facts\nbug", "title": "Gathering facts, zero division error in get_cpu_facts", "body": "<!---\r\nVerify first that your issue/request is not already reported on GitHub.\r\nAlso test if the latest release, and master branch are affected too.\r\n-->\r\n\r\n##### ISSUE TYPE\r\n<!--- Pick one below and delete the rest: -->\r\n - Bug Report\r\n\r\n##### COMPONENT NAME\r\n<!--- Name of the module/plugin/task/feature -->\r\nmodule setup (ansible/module_utils/facts.py)\r\n\r\n##### ANSIBLE VERSION\r\n<!--- Paste verbatim output from \u201cansible --version\u201d between quotes below -->\r\n```\r\nansible 2.2.1.0\r\nPython 2.7.10 (host)\r\n\r\nPython 2.7.3 (remote)\r\n```\r\n\r\n##### CONFIGURATION\r\n<!---\r\nMention any settings you have changed/added/removed in ansible.cfg\r\n(or using the ANSIBLE_* environment variables).\r\n-->\r\nI have a lot of hosts and problem only with one. I updated ansible couple of time and didn't test changes on this host.\r\n\r\n##### OS / ENVIRONMENT\r\n<!---\r\nMention the OS you are running Ansible from, and the OS you are\r\nmanaging, or say \u201cN/A\u201d for anything that is not platform-specific.\r\n-->\r\n```\r\n# host\r\ntried on MacOS and CentOS 6.8\r\n# remote\r\nLinux hostname 3.8.0-32-generic #47~precise1-Ubuntu SMP Wed Oct 2 16:19:35 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux\r\n```\r\n\r\n##### SUMMARY\r\n<!--- Explain the problem briefly -->\r\nWhen I run command:\r\n```\r\nansible hostname -m setup -a 'gather_subset=!all'\r\n```\r\neverything work fine, but when i run playbook or just try to gather facts I have a module failure:\r\n```\r\nansible hostname -m setup\r\nhostname | FAILED! 
=> {\r\n \"changed\": false,\r\n \"failed\": true,\r\n \"module_stderr\": \"Shared connection to hostnameIP closed.\\r\\n\",\r\n \"module_stdout\": \"Traceback (most recent call last):\\r\\n File \\\"/tmp/ansible_Z9fwoS/ansible_module_setup.py\\\", line 134, in <module>\\r\\n main()\\r\\n File \\\"/tmp/ansible_Z9fwoS/ansible_module_setup.py\\\", line 126, in main\\r\\n data = get_all_facts(module)\\r\\n File \\\"/tmp/ansible_Z9fwoS/ansible_modlib.zip/ansible/module_utils/facts.py\\\", line 3518, in get_all_facts\\r\\n File \\\"/tmp/ansible_Z9fwoS/ansible_modlib.zip/ansible/module_utils/facts.py\\\", line 3461, in ansible_facts\\r\\n File \\\"/tmp/ansible_Z9fwoS/ansible_modlib.zip/ansible/module_utils/facts.py\\\", line 987, in populate\\r\\n File \\\"/tmp/ansible_Z9fwoS/ansible_modlib.zip/ansible/module_utils/facts.py\\\", line 1132, in get_cpu_facts\\r\\nZeroDivisionError: integer division or modulo by zero\\r\\n\",\r\n \"msg\": \"MODULE FAILURE\"\r\n}\r\n```\r\nI don't understand how I can fix facts.py or find problem with zero division on sources...\r\nWhen I run on remote host:\r\n```\r\ncat /proc/cpuinfo\r\n```\r\nit shows info about CPU without problem.\r\n\r\n##### STEPS TO REPRODUCE\r\n<!---\r\nFor bugs, show exactly how to reproduce the problem, using a minimal test-case.\r\nFor new features, show how the feature would be used.\r\n-->\r\nI can reproduce it only on my single host with other ones everything works fine.\r\nI tried to update all packages on remote host but it didn't help me.\r\n", "pr_html_url": "https://github.com/ansible/ansible/pull/24428", "file_loc": {"base_commit": "fc3cc73b73a39b0ab629ba76ac4f9ca65cc38eee", "files": [{"path": "lib/ansible/module_utils/facts.py", "status": "modified", "Loc": {"('LinuxHardware', 'get_cpu_facts', 1124)": {"mod": [1207]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["lib/ansible/module_utils/facts.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ansible", "repo_name": "ansible", "base_commit": "9495ddbc21da2a5c7967f01c4a958d32f203af65", "iss_has_pr": 1, "iss_html_url": "https://github.com/ansible/ansible/issues/54231", "iss_label": "module\nsupport:community\nfeature\naffects_2.8\nremote_management", "title": "redfish_facts- Chassis - GetChassisThermals", "body": "<!--- Verify first that your feature was not already discussed on GitHub -->\r\n<!--- Complete *all* sections as described, this form is processed automatically -->\r\n\r\n##### SUMMARY\r\n<!--- Describe the new feature/improvement briefly below -->\r\nThis feature would implement a GetChassisThermals command for the Chassis category of redfish_facts, and would retrieve temperature related properties from the Chassis/Thermal field for each sensor available.\r\n\r\n##### ISSUE TYPE\r\n- Feature Idea\r\n\r\n##### COMPONENT NAME\r\n<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->\r\nredfish_facts\r\n##### ADDITIONAL INFORMATION\r\n<!--- Describe how the feature would be used, why it is needed and what it would solve -->\r\n\r\n<!--- Paste example playbooks or commands between quotes below -->\r\n```yaml\r\n\r\n```\r\n\r\n<!--- HINT: You can also paste gist.github.com links for larger files -->\r\n", "pr_html_url": "https://github.com/ansible/ansible/pull/54399", "file_loc": {"base_commit": 
"9495ddbc21da2a5c7967f01c4a958d32f203af65", "files": [{"path": "lib/ansible/module_utils/redfish_utils.py", "status": "modified", "Loc": {"('RedfishUtils', None, 21)": {"add": [895]}}}, {"path": "lib/ansible/modules/remote_management/redfish/redfish_facts.py", "status": "modified", "Loc": {"(None, 'main', 180)": {"add": [273]}, "(None, None, None)": {"mod": [165]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["lib/ansible/modules/remote_management/redfish/redfish_facts.py", "lib/ansible/module_utils/redfish_utils.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ansible", "repo_name": "ansible", "base_commit": "197a360977a52a31d6ab40db1f4752454e8b93e3", "iss_has_pr": 1, "iss_html_url": "https://github.com/ansible/ansible/issues/22374", "iss_label": "cloud\naws\naffects_2.1\nmodule\nsupport:certified\nbug", "title": "ec2_vpc_route_table can't update routes", "body": "<!---\r\nVerify first that your issue/request is not already reported on GitHub.\r\nAlso test if the latest release, and master branch are affected too.\r\n-->\r\n\r\n##### ISSUE TYPE\r\n - Bug Report\r\n\r\n##### COMPONENT NAME\r\n<!--- Name of the module/plugin/task/feature -->\r\nec2_vpc_route_table\r\n\r\n##### ANSIBLE VERSION\r\n```\r\nansible 2.1.3.0\r\n config file = /etc/ansible/ansible.cfg\r\n configured module search path = Default w/o overrides\r\n```\r\n\r\n##### CONFIGURATION\r\n<!---\r\nMention any settings you have changed/added/removed in ansible.cfg\r\n(or using the ANSIBLE_* environment variables).\r\n-->\r\n\r\n##### OS / ENVIRONMENT\r\n<!---\r\nMention the OS you are running Ansible from, and the OS you are\r\nmanaging, or say \u201cN/A\u201d for anything that is not platform-specific.\r\n-->\r\nN/A\r\n\r\n##### SUMMARY\r\n<!--- Explain the problem briefly -->\r\nRan a script to create a NAT instance, created the routes going through the NAT using ec2_vpc_route_table \r\nDeleted the NAT and ran the same script again.\r\nec2_vpc_route_table was not able to update the route with the new instance id, but left the old network interface (which no longer existed) in place, thereby resulting in a black hole.\r\n\r\n##### STEPS TO REPRODUCE\r\n<!---\r\nFor bugs, show exactly how to reproduce the problem, using a minimal test-case.\r\nFor new features, show how the feature would be used.\r\n-->\r\n\r\n<!--- Paste example playbooks or commands between quotes below -->\r\n```yaml\r\n- name: Create Backend route 1 and route it through NAT 1\r\n ec2_vpc_route_table:\r\n vpc_id: '{{ vpc_id }}'\r\n region: '{{ vpc_region }}'\r\n tags:\r\n Name: \"{{ vpc_name }} Backend network 1\"\r\n routes:\r\n - dest: 0.0.0.0/0\r\n instance_id: '{{ instance_id }}'\r\n subnets:\r\n - \"{{ vpc_subnet['web_subnet']['subnet_one'].resource_tags.Name }}\"\r\n - \"{{ vpc_subnet['db_subnet']['subnet_one'].resource_tags.Name }}\"\r\n\r\n```\r\n\r\n<!--- You can also paste gist.github.com links for larger files -->\r\n\r\n##### EXPECTED RESULTS\r\n<!--- What did you expect to happen when running the steps above? -->\r\nI naturally expected that the route table would be updated\r\n\r\n##### ACTUAL RESULTS\r\n<!--- What actually happened? 
If possible run with extra verbosity (-vvvv) -->\r\n\r\n<!--- Paste verbatim command output between quotes below -->\r\n```\r\nchanged: [10.77.200.10] => {\"changed\": true, \"invocation\": {\"module_args\": {\"aws_access_key\": null, \"aws_secret_key\": null, \"ec2_url\": null, \"lookup\": \"tag\", \"profile\": null, \"propagating_vgw_ids\": null, \"region\": \"us-west-2\", \"route_table_id\": null, \"routes\": [{\"destination_cidr_block\": \"0.0.0.0/0\", \"instance_id\": \"i-1234567890123456\"}], \"security_token\": null, \"state\": \"present\", \"subnets\": [\"test - web - us-west-2c\", \"test - database - us-west-2c\"], \"tags\": {\"Name\": \"test Backend network 1\"}, \"validate_certs\": true, \"vpc_id\": \"vpc-12345678\"}, \"module_name\": \"ec2_vpc_route_table\"}, \"route_table\": {\"id\": \"rtb-23456789\", \"routes\": [{\"destination_cidr_block\": \"10.99.0.0/16\", \"gateway_id\": null, \"instance_id\": \"i-0987654321098765\", \"interface_id\": \"eni-12345678\", \"origin\": \"CreateRoute\", \"state\": \"active\", \"vpc_peering_connection_id\": null}, {\"destination_cidr_block\": \"10.77.0.0/16\", \"gateway_id\": \"local\", \"instance_id\": null, \"interface_id\": null, \"origin\": \"CreateRouteTable\", \"state\": \"active\", \"vpc_peering_connection_id\": null}, {\"destination_cidr_block\": \"0.0.0.0/0\", \"gateway_id\": null, \"instance_id\": null, \"interface_id\": \"eni-87654321\", \"origin\": \"CreateRoute\", \"state\": \"blackhole\", \"vpc_peering_connection_id\": null}], \"tags\": {\"Name\": \"test Backend network 1\"}, \"vpc_id\": \"vpc-12345678\"}}\r\n\r\n```\r\n", "pr_html_url": "https://github.com/ansible/ansible/pull/27234", "file_loc": {"base_commit": "197a360977a52a31d6ab40db1f4752454e8b93e3", "files": [{"path": "lib/ansible/modules/cloud/amazon/ec2_vpc_route_table.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [336]}, "(None, 'index_of_matching_route', 342)": {"add": [345]}, "(None, 'ensure_routes', 348)": {"add": [351, 355, 394], "mod": [375]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["lib/ansible/modules/cloud/amazon/ec2_vpc_route_table.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ansible", "repo_name": "ansible", "base_commit": "8d78a829c60cc63e668683fb5d626eba942e6a39", "iss_has_pr": 1, "iss_html_url": "https://github.com/ansible/ansible/issues/33877", "iss_label": "support:core\naffects_2.5\nbug", "title": "YAML inventory: ungrouped group isn't populated", "body": "##### ISSUE TYPE\r\n - Bug Report\r\n\r\n##### COMPONENT NAME\r\nlib/ansible/plugins/inventory/yaml.py\r\n\r\n##### ANSIBLE VERSION\r\n```\r\nansible 2.5.0 (devel 7c187cae93) last updated 2017/12/13 16:21:51 (GMT +200)\r\n```\r\n\r\n##### OS / ENVIRONMENT\r\nN/A\r\n\r\n##### SUMMARY\r\nWhen using YAML inventory, [`ungrouped` default group](http://docs.ansible.com/ansible/devel/intro_inventory.html#default-groups) is never populated.\r\n\r\n##### STEPS TO REPRODUCE\r\n<!---\r\nFor bugs, show exactly how to reproduce the problem, using a minimal test-case.\r\nFor new features, show how the feature would be used.\r\n-->\r\n\r\n<!--- Paste example playbooks or commands between quotes below -->\r\n`hosts.yaml`:\r\n```yaml\r\nall:\r\n hosts:\r\n testhost:\r\n```\r\n\r\n##### EXPECTED RESULTS\r\n```\r\n$ ansible-inventory -i hosts.yml --list\r\n{\r\n \"_meta\": {\r\n \"hostvars\": {\r\n 
\"testhost\": {}\r\n }\r\n },\r\n \"all\": {\r\n \"children\": [\r\n \"ungrouped\"\r\n ]\r\n },\r\n \"ungrouped\": {\r\n \"hosts\": [\r\n \"testhost\"\r\n ]\r\n }\r\n}\r\n```\r\n\r\n```\r\n$ ansible localhost -i hosts.yml -m debug -a 'msg={{ groups }}'\r\nlocalhost | SUCCESS => {\r\n \"msg\": {\r\n \"all\": [\r\n \"localhost\"\r\n ],\r\n \"ungrouped\": [\r\n \"localhost\"\r\n ]\r\n }\r\n}\r\n``` \r\n\r\n##### ACTUAL RESULTS\r\n\r\n```\r\n$ ansible-inventory -i hosts.yml --list\r\n```\r\n\r\n```\r\n$ ansible-inventory -i /tmp/hosts.yml --list\r\n{\r\n \"_meta\": {\r\n \"hostvars\": {\r\n \"localhost\": {}\r\n }\r\n },\r\n \"all\": {\r\n \"children\": [\r\n \"ungrouped\"\r\n ]\r\n },\r\n \"ungrouped\": {}\r\n}\r\n```\r\n```\r\n$ ansible localhost -i /tmp/hosts.yml -m debug -a 'msg={{ groups }}'\r\nlocalhost | SUCCESS => {\r\n \"changed\": false,\r\n \"msg\": {\r\n \"all\": [\r\n \"localhost\"\r\n ],\r\n \"ungrouped\": []\r\n }\r\n}\r\n```\r\n\r\n##### RESULT WITH 2.3\r\n\r\nUsing ansible 2.3 (`ansible 2.3.3.0 (stable-2.3 797d999513) last updated 2017/12/13 17:38:28 (GMT +200)`) `localhost` belongs to `ungrouped`.\r\n\r\n```\r\n$ ansible localhost -i hosts.yml -m debug -a 'msg={{ groups }}'\r\nlocalhost | SUCCESS => {\r\n \"msg\": {\r\n \"all\": [\r\n \"localhost\"\r\n ],\r\n \"ungrouped\": [\r\n \"localhost\"\r\n ]\r\n }\r\n}\r\n```", "pr_html_url": "https://github.com/ansible/ansible/pull/33878", "file_loc": {"base_commit": "bf29cc79a681ea7c706fda4f95cd0d7fbd77b55a", "files": [{"path": "lib/ansible/inventory/data.py", "status": "modified", "Loc": {"('InventoryData', 'reconcile_inventory', 105)": {"mod": [128, 129, 130, 140]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["lib/ansible/inventory/data.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ansible", "repo_name": "ansible", "base_commit": "e633b93f859daafea3cf68bb79ad140ed8a42495", "iss_has_pr": 1, "iss_html_url": "https://github.com/ansible/ansible/issues/48415", "iss_label": "cloud\nazure\nmodule\nsupport:community\nbug\naffects_2.6\npostgresql", "title": "storage_mb parameter is not working in azure_rm_postgresqlserver", "body": "##### SUMMARY\r\nThe storage configuration to create a new database server instance is not working in azure_rm_postgresqlserver. 
Storage_mb is always configured with a 5Gb default value.\r\n\r\n##### ISSUE TYPE\r\n- Bug Report\r\n\r\n##### COMPONENT NAME\r\nazure_rm_postgresqlserver\r\n\r\n##### ANSIBLE VERSION\r\n```\r\nansible 2.6.4\r\n config file = None\r\n configured module search path = [u'/Users/xxx/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']\r\n ansible python module location = /Library/Python/2.7/site-packages/ansible\r\n executable location = /usr/local/bin/ansible\r\n python version = 2.7.10 (default, Oct 6 2017, 22:29:07) [GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.31)]\r\n\r\n```\r\n\r\n##### OS / ENVIRONMENT\r\nubuntu:18.04\r\n\r\n##### STEPS TO REPRODUCE\r\n<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->\r\n\r\n<!--- Paste example playbooks or commands between quotes below -->\r\nI'm executing the following provision in the module:\r\n\r\n```\r\nTASK [Create ADP server instance] *************************************************************************************************************************************************\r\ntask path: /home/baikal/delivery/ansible/playbook_infra_create_15_dbaas_azure.yml:40\r\nUsing module file /usr/local/lib/python2.7/dist-packages/ansible/modules/cloud/azure/azure_rm_postgresqlserver.py\r\n<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root\r\n<localhost> EXEC /bin/sh -c 'AZURE_SUBSCRIPTION_ID=xxxxxxxxx python && sleep 0'\r\n [WARNING]: Azure API profile latest does not define an entry for PostgreSQLManagementClient\r\n\r\nchanged: [localhost] => {\r\n \"changed\": true,\r\n \"fully_qualified_domain_name\": \"adptest-db.postgres.database.azure.com\",\r\n \"id\": \"/subscriptions/xxxxxxxxx/resourceGroups/adptest-rg/providers/Microsoft.DBforPostgreSQL/servers/adptest-db\",\r\n \"invocation\": {\r\n \"module_args\": {\r\n \"ad_user\": null,\r\n \"adfs_authority_url\": null,\r\n \"admin_password\": \"VALUE_SPECIFIED_IN_NO_LOG_PARAMETER\",\r\n \"admin_username\": \"postgres\",\r\n \"api_profile\": \"latest\",\r\n \"auth_source\": null,\r\n \"cert_validation_mode\": null,\r\n \"client_id\": null,\r\n \"cloud_environment\": \"AzureCloud\",\r\n \"create_mode\": \"Default\",\r\n \"enforce_ssl\": false,\r\n \"location\": \"northeurope\",\r\n \"name\": \"adptest-db\",\r\n \"password\": null,\r\n \"profile\": null,\r\n \"resource_group\": \"adptest-rg\",\r\n \"secret\": null,\r\n \"sku\": {\r\n \"capacity\": \"4\",\r\n \"name\": \"GP_Gen5_4\",\r\n \"tier\": \"GeneralPurpose\"\r\n },\r\n \"state\": \"present\",\r\n \"storage_mb\": 307200,\r\n \"subscription_id\": null,\r\n \"tenant\": null,\r\n \"version\": \"10\"\r\n }\r\n },\r\n \"state\": \"Ready\",\r\n \"version\": \"10\"\r\n}\r\n```\r\nAfter the ansible module execution I can check the postgres server configuration and I can find the following:\r\n\r\n```\r\n$ az postgres server show --resource-group adptest-rg --name adptest-db\r\n{\r\n \"administratorLogin\": \"postgres\",\r\n \"earliestRestoreDate\": \"2018-11-09T11:07:05.180000+00:00\",\r\n \"fullyQualifiedDomainName\": \"adptest-db.postgres.database.azure.com\",\r\n \"id\": \"/subscriptions/xxxxxxxxxx/resourceGroups/adptest-rg/providers/Microsoft.DBforPostgreSQL/servers/adptest-db\",\r\n \"location\": \"northeurope\",\r\n \"name\": \"adptest-db\",\r\n \"resourceGroup\": \"adptest-rg\",\r\n \"sku\": {\r\n \"capacity\": 4,\r\n \"family\": \"Gen5\",\r\n \"name\": \"GP_Gen5_4\",\r\n \"size\": null,\r\n \"tier\": \"GeneralPurpose\"\r\n },\r\n \"sslEnforcement\": \"Disabled\",\r\n \"storageProfile\": {\r\n 
\"backupRetentionDays\": 7,\r\n \"geoRedundantBackup\": \"Disabled\",\r\n \"storageMb\": 5120\r\n },\r\n \"tags\": null,\r\n \"type\": \"Microsoft.DBforPostgreSQL/servers\",\r\n \"userVisibleState\": \"Ready\",\r\n \"version\": \"10\"\r\n}\r\n```\r\n\r\nWhere you can see the storageMb capacity of the database server has been provisioned with 5Gb instead the value specified in storage_mb param of azure_rm_postgresqlserver for 300 Gb.\r\n\r\nAs workaround, after the database server provision I'm executing the following command:\r\n\r\n`az postgres server update --storage-size 307200 --resource-group adptest-rg --name adptest-db`\r\n\r\nNow if we check again the current configuration of the database instance we can see it has been correctly provisioned:\r\n\r\n```\r\n$ az postgres server show --resource-group adptest-rg --name adptest-db\r\n{\r\n \"administratorLogin\": \"postgres\",\r\n \"earliestRestoreDate\": \"2018-11-09T11:07:05.180000+00:00\",\r\n \"fullyQualifiedDomainName\": \"adptest-db.postgres.database.azure.com\",\r\n \"id\": \"/subscriptions/xxxxxxx/resourceGroups/adptest-rg/providers/Microsoft.DBforPostgreSQL/servers/adptest-db\",\r\n \"location\": \"northeurope\",\r\n \"name\": \"adptest-db\",\r\n \"resourceGroup\": \"adptest-rg\",\r\n \"sku\": {\r\n \"capacity\": 4,\r\n \"family\": \"Gen5\",\r\n \"name\": \"GP_Gen5_4\",\r\n \"size\": null,\r\n \"tier\": \"GeneralPurpose\"\r\n },\r\n \"sslEnforcement\": \"Disabled\",\r\n \"storageProfile\": {\r\n \"backupRetentionDays\": 7,\r\n \"geoRedundantBackup\": \"Disabled\",\r\n \"storageMb\": 307200\r\n },\r\n \"tags\": null,\r\n \"type\": \"Microsoft.DBforPostgreSQL/servers\",\r\n \"userVisibleState\": \"Ready\",\r\n \"version\": \"10\"\r\n}\r\n```\r\n\r\n<!--- HINT: You can paste gist.github.com links for larger files -->\r\n\r\n##### EXPECTED RESULTS\r\nCreate the postgres database server instance with the specified storage size (storage_mb)\r\n\r\n\r\n##### ACTUAL RESULTS\r\nAlways created an instance with 5Gb as storage size\r\n", "pr_html_url": "https://github.com/ansible/ansible/pull/51653", "file_loc": {"base_commit": "e633b93f859daafea3cf68bb79ad140ed8a42495", "files": [{"path": "lib/ansible/modules/cloud/azure/azure_rm_postgresqlserver.py", "status": "modified", "Loc": {"('AzureRMServers', 'create_update_postgresqlserver', 308)": {"add": [322]}, "('AzureRMServers', 'exec_module', 212)": {"mod": [230]}}}, {"path": "test/integration/targets/azure_rm_postgresqlserver/aliases", "status": "modified", "Loc": {"(None, None, None)": {"mod": [9]}}}, {"path": "test/integration/targets/azure_rm_postgresqlserver/tasks/main.yml", "status": "modified", "Loc": {"(None, None, None)": {"add": [62]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["lib/ansible/modules/cloud/azure/azure_rm_postgresqlserver.py"], "doc": [], "test": [], "config": ["test/integration/targets/azure_rm_postgresqlserver/tasks/main.yml"], "asset": ["test/integration/targets/azure_rm_postgresqlserver/aliases"]}}, {"organization": "ansible", "repo_name": "ansible", "base_commit": "8e8a7c869ae219debf80456d3edac5804af22c2c", "iss_has_pr": 1, "iss_html_url": "https://github.com/ansible/ansible/issues/27729", "iss_label": "affects_2.3\nmodule\nsupport:core\nbug", "title": "Removed restricted key from module data: ansible_lxc_bridge", "body": "<!---\r\nVerify first that your issue/request is not already reported 
on GitHub.\r\nAlso test if the latest release, and master branch are affected too.\r\n-->\r\n\r\n##### ISSUE TYPE\r\n<!--- Pick one below and delete the rest: -->\r\n - Bug Report\r\n\r\n##### COMPONENT NAME\r\nGathering Facts\r\n\r\n##### ANSIBLE VERSION\r\n```\r\nansible 2.3.1.0\r\n config file = /etc/ansible/ansible.cfg\r\n configured module search path = Default w/o overrides\r\n python version = 2.7.13 (default, Jul 21 2017, 03:24:34) [GCC 7.1.1 20170630]\r\n```\r\n\r\n##### CONFIGURATION\r\nNo\r\n\r\n##### OS / ENVIRONMENT\r\nArchlinux, but probably not platform specific\r\n\r\n##### SUMMARY\r\nDuring gathering facts I get following warning\r\n\r\n```\r\nTASK [Gathering Facts] ************************************************************************\r\n [WARNING]: Removed restricted key from module data: ansible_lxc_bridge = {u'macaddress':\r\nu'70:85:c2:0b:a3:4a', u'features': {}, u'interfaces': [u'vethG18OR8', u'enp0s31f6',\r\nu'vethWYJVBN'], u'mtu': 1500, u'active': True, u'promisc': False, u'stp': False, u'ipv4':\r\n{u'broadcast': u'192.168.0.255', u'netmask': u'255.255.255.0', u'network': u'192.168.0.0',\r\nu'address': u'192.168.0.110'}, u'ipv6': [{u'scope': u'link', u'prefix': u'64', u'address':\r\nu'fe80::7285:c2ff:fe0b:a34a'}], u'device': u'lxc_bridge', u'type': u'bridge', u'id':\r\nu'8000.7085c20ba34a'}\r\n```\r\n\r\n##### STEPS TO REPRODUCE\r\nTry to run playbook against host with lxc and bridge configured.\r\n\r\n<!--- You can also paste gist.github.com links for larger files -->\r\n\r\n##### EXPECTED RESULTS\r\nNo warning.\r\n\r\n##### ACTUAL RESULTS\r\nWarning.", "pr_html_url": "https://github.com/ansible/ansible/pull/28401", "file_loc": {"base_commit": "8e8a7c869ae219debf80456d3edac5804af22c2c", "files": [{"path": "lib/ansible/playbook/task.py", "status": "modified", "Loc": {"('Task', 'preprocess_data', 158)": {"add": [211, 224], "mod": [208, 209, 214, 215, 216, 217, 223]}, "(None, None, None)": {"mod": [28]}}}, {"path": "lib/ansible/plugins/action/__init__.py", "status": "modified", "Loc": {"('ActionBase', '_clean_returned_data', 770)": {"mod": [783]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["lib/ansible/plugins/action/__init__.py", "lib/ansible/playbook/task.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ansible", "repo_name": "ansible", "base_commit": "a01ee2759d309f8433aefbdaf477903fe0156639", "iss_has_pr": 1, "iss_html_url": "https://github.com/ansible/ansible/issues/15988", "iss_label": "affects_2.0\nsupport:core\nbug", "title": "ansible -B n -P 0 does not return job_id", "body": "##### ISSUE TYPE\n- Bug Report\n##### COMPONENT NAME\ncore\n\n##### ANSIBLE VERSION\n\n```\nansible 2.0.0.2\n config file = /home/tg/workspace/training/ansible/content/samples/ad-hoc/ansible.cfg\n configured module search path = Default w/o overrides\n```\n##### CONFIGURATION\n\n```\n[defaults]\nhost_key_checking=False\n```\n##### OS / ENVIRONMENT\n\nControl machine & hosts: Ubuntu 14.04 x86_64\n##### SUMMARY\n\nWhen running an ad-hoc command with `-B` against the managed hosts, there does not seem to be any way to get hold of the job_id for later checking via the `async_status` module.\n##### STEPS TO REPRODUCE\n\n```\n$ ansible all -i hosts -B 3600 -P 0 -a \"sleep 1000\" \ntraining-1-1.tgbyte.de | SUCCESS | rc=0 >>\n\n\ntraining-1-2.tgbyte.de | SUCCESS | rc=0 
>>\n\n\ntraining-1-3.tgbyte.de | SUCCESS | rc=0 >>\n```\n##### EXPECTED RESULTS\n\nInstead of just a success message, I'd expect the response to contain some indication of the job_id that could be used for checking the status using `async_status`. http://grokbase.com/t/gg/ansible-project/14bcxt8xhc/three-questions-regarding-asynchronous-jobs hints at that this used to work before.\n##### ACTUAL RESULTS\n\n```\nansible all -i hosts -B 3600 -P 0 -vvvv -a \"sleep 1000\" \nUsing /home/tg/workspace/training/ansible/content/samples/ad-hoc/ansible.cfg as config file\nLoaded callback minimal of type stdout, v2.0\n<training-1-1.tgbyte.de> ESTABLISH SSH CONNECTION FOR USER: schulung\n<training-1-1.tgbyte.de> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=schulung -o ConnectTimeout=10 -o ControlPath=/home/tg/.ansible/cp/ansible-ssh-%h-%p-%r -tt training-1-1.tgbyte.de '( umask 22 && mkdir -p \"$( echo $HOME/.ansible/tmp/ansible-tmp-1464162877.97-166413182233281 )\" && echo \"$( echo $HOME/.ansible/tmp/ansible-tmp-1464162877.97-166413182233281 )\" )' \n<training-1-2.tgbyte.de> ESTABLISH SSH CONNECTION FOR USER: schulung\n<training-1-2.tgbyte.de> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=schulung -o ConnectTimeout=10 -o ControlPath=/home/tg/.ansible/cp/ansible-ssh-%h-%p-%r -tt training-1-2.tgbyte.de '( umask 22 && mkdir -p \"$( echo $HOME/.ansible/tmp/ansible-tmp-1464162877.97-74424695929592 )\" && echo \"$( echo $HOME/.ansible/tmp/ansible-tmp-1464162877.97-74424695929592 )\" )' \n<training-1-3.tgbyte.de> ESTABLISH SSH CONNECTION FOR USER: schulung\n<training-1-3.tgbyte.de> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=schulung -o ConnectTimeout=10 -o ControlPath=/home/tg/.ansible/cp/ansible-ssh-%h-%p-%r -tt training-1-3.tgbyte.de '( umask 22 && mkdir -p \"$( echo $HOME/.ansible/tmp/ansible-tmp-1464162877.98-68166976443146 )\" && echo \"$( echo $HOME/.ansible/tmp/ansible-tmp-1464162877.98-68166976443146 )\" )' \n<training-1-1.tgbyte.de> PUT /tmp/tmpp3t6oH TO /home/schulung/.ansible/tmp/ansible-tmp-1464162877.97-166413182233281/command\n<training-1-1.tgbyte.de> SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=schulung -o ConnectTimeout=10 -o ControlPath=/home/tg/.ansible/cp/ansible-ssh-%h-%p-%r '[training-1-1.tgbyte.de]' \n<training-1-1.tgbyte.de> ESTABLISH SSH CONNECTION FOR USER: schulung\n<training-1-1.tgbyte.de> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=schulung -o ConnectTimeout=10 -o ControlPath=/home/tg/.ansible/cp/ansible-ssh-%h-%p-%r -tt training-1-1.tgbyte.de 'chmod a+rx 
/home/schulung/.ansible/tmp/ansible-tmp-1464162877.97-166413182233281/command' \n<training-1-1.tgbyte.de> PUT /tmp/tmp_g1yYO TO /home/schulung/.ansible/tmp/ansible-tmp-1464162877.97-166413182233281/async_wrapper\n<training-1-1.tgbyte.de> SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=schulung -o ConnectTimeout=10 -o ControlPath=/home/tg/.ansible/cp/ansible-ssh-%h-%p-%r '[training-1-1.tgbyte.de]' \n<training-1-1.tgbyte.de> ESTABLISH SSH CONNECTION FOR USER: schulung\n<training-1-1.tgbyte.de> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=schulung -o ConnectTimeout=10 -o ControlPath=/home/tg/.ansible/cp/ansible-ssh-%h-%p-%r -tt training-1-1.tgbyte.de 'chmod a+rx /home/schulung/.ansible/tmp/ansible-tmp-1464162877.97-166413182233281/async_wrapper' \n<training-1-1.tgbyte.de> PUT /tmp/tmpA8qdMB TO /home/schulung/.ansible/tmp/ansible-tmp-1464162877.97-166413182233281/arguments\n<training-1-1.tgbyte.de> SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=schulung -o ConnectTimeout=10 -o ControlPath=/home/tg/.ansible/cp/ansible-ssh-%h-%p-%r '[training-1-1.tgbyte.de]'\n<training-1-1.tgbyte.de> ESTABLISH SSH CONNECTION FOR USER: schulung\n<training-1-1.tgbyte.de> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=schulung -o ConnectTimeout=10 -o ControlPath=/home/tg/.ansible/cp/ansible-ssh-%h-%p-%r -tt training-1-1.tgbyte.de 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /home/schulung/.ansible/tmp/ansible-tmp-1464162877.97-166413182233281/async_wrapper 729788062869 3600 /home/schulung/.ansible/tmp/ansible-tmp-1464162877.97-166413182233281/command /home/schulung/.ansible/tmp/ansible-tmp-1464162877.97-166413182233281/arguments'\n<training-1-1.tgbyte.de> ESTABLISH SSH CONNECTION FOR USER: schulung\n<training-1-1.tgbyte.de> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=schulung -o ConnectTimeout=10 -o ControlPath=/home/tg/.ansible/cp/ansible-ssh-%h-%p-%r -tt training-1-1.tgbyte.de 'rm -f -r /home/schulung/.ansible/tmp/ansible-tmp-1464162877.97-166413182233281/ > /dev/null 2>&1'\ntraining-1-1.tgbyte.de | SUCCESS | rc=0 >>\n\n\n<training-1-3.tgbyte.de> PUT /tmp/tmpPAuNt8 TO /home/schulung/.ansible/tmp/ansible-tmp-1464162877.98-68166976443146/command\n<training-1-3.tgbyte.de> SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=schulung -o ConnectTimeout=10 -o ControlPath=/home/tg/.ansible/cp/ansible-ssh-%h-%p-%r 
'[training-1-3.tgbyte.de]'\n<training-1-2.tgbyte.de> PUT /tmp/tmpc1dxFb TO /home/schulung/.ansible/tmp/ansible-tmp-1464162877.97-74424695929592/command\n<training-1-2.tgbyte.de> SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=schulung -o ConnectTimeout=10 -o ControlPath=/home/tg/.ansible/cp/ansible-ssh-%h-%p-%r '[training-1-2.tgbyte.de]'\n<training-1-3.tgbyte.de> ESTABLISH SSH CONNECTION FOR USER: schulung\n<training-1-3.tgbyte.de> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=schulung -o ConnectTimeout=10 -o ControlPath=/home/tg/.ansible/cp/ansible-ssh-%h-%p-%r -tt training-1-3.tgbyte.de 'chmod a+rx /home/schulung/.ansible/tmp/ansible-tmp-1464162877.98-68166976443146/command'\n<training-1-3.tgbyte.de> PUT /tmp/tmpMDq4jL TO /home/schulung/.ansible/tmp/ansible-tmp-1464162877.98-68166976443146/async_wrapper\n<training-1-3.tgbyte.de> SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=schulung -o ConnectTimeout=10 -o ControlPath=/home/tg/.ansible/cp/ansible-ssh-%h-%p-%r '[training-1-3.tgbyte.de]'\n<training-1-2.tgbyte.de> ESTABLISH SSH CONNECTION FOR USER: schulung\n<training-1-2.tgbyte.de> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=schulung -o ConnectTimeout=10 -o ControlPath=/home/tg/.ansible/cp/ansible-ssh-%h-%p-%r -tt training-1-2.tgbyte.de 'chmod a+rx /home/schulung/.ansible/tmp/ansible-tmp-1464162877.97-74424695929592/command'\n<training-1-2.tgbyte.de> PUT /tmp/tmpZck5cm TO /home/schulung/.ansible/tmp/ansible-tmp-1464162877.97-74424695929592/async_wrapper\n<training-1-2.tgbyte.de> SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=schulung -o ConnectTimeout=10 -o ControlPath=/home/tg/.ansible/cp/ansible-ssh-%h-%p-%r '[training-1-2.tgbyte.de]'\n<training-1-3.tgbyte.de> ESTABLISH SSH CONNECTION FOR USER: schulung\n<training-1-3.tgbyte.de> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=schulung -o ConnectTimeout=10 -o ControlPath=/home/tg/.ansible/cp/ansible-ssh-%h-%p-%r -tt training-1-3.tgbyte.de 'chmod a+rx /home/schulung/.ansible/tmp/ansible-tmp-1464162877.98-68166976443146/async_wrapper'\n<training-1-3.tgbyte.de> PUT /tmp/tmppWLnjT TO /home/schulung/.ansible/tmp/ansible-tmp-1464162877.98-68166976443146/arguments\n<training-1-3.tgbyte.de> SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o 
PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=schulung -o ConnectTimeout=10 -o ControlPath=/home/tg/.ansible/cp/ansible-ssh-%h-%p-%r '[training-1-3.tgbyte.de]'\n<training-1-2.tgbyte.de> ESTABLISH SSH CONNECTION FOR USER: schulung\n<training-1-2.tgbyte.de> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=schulung -o ConnectTimeout=10 -o ControlPath=/home/tg/.ansible/cp/ansible-ssh-%h-%p-%r -tt training-1-2.tgbyte.de 'chmod a+rx /home/schulung/.ansible/tmp/ansible-tmp-1464162877.97-74424695929592/async_wrapper'\n<training-1-2.tgbyte.de> PUT /tmp/tmpt5tfrR TO /home/schulung/.ansible/tmp/ansible-tmp-1464162877.97-74424695929592/arguments\n<training-1-2.tgbyte.de> SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=schulung -o ConnectTimeout=10 -o ControlPath=/home/tg/.ansible/cp/ansible-ssh-%h-%p-%r '[training-1-2.tgbyte.de]'\n<training-1-3.tgbyte.de> ESTABLISH SSH CONNECTION FOR USER: schulung\n<training-1-3.tgbyte.de> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=schulung -o ConnectTimeout=10 -o ControlPath=/home/tg/.ansible/cp/ansible-ssh-%h-%p-%r -tt training-1-3.tgbyte.de 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /home/schulung/.ansible/tmp/ansible-tmp-1464162877.98-68166976443146/async_wrapper 80137669282 3600 /home/schulung/.ansible/tmp/ansible-tmp-1464162877.98-68166976443146/command /home/schulung/.ansible/tmp/ansible-tmp-1464162877.98-68166976443146/arguments'\n<training-1-2.tgbyte.de> ESTABLISH SSH CONNECTION FOR USER: schulung\n<training-1-2.tgbyte.de> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=schulung -o ConnectTimeout=10 -o ControlPath=/home/tg/.ansible/cp/ansible-ssh-%h-%p-%r -tt training-1-2.tgbyte.de 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /home/schulung/.ansible/tmp/ansible-tmp-1464162877.97-74424695929592/async_wrapper 39280840727 3600 /home/schulung/.ansible/tmp/ansible-tmp-1464162877.97-74424695929592/command /home/schulung/.ansible/tmp/ansible-tmp-1464162877.97-74424695929592/arguments'\n<training-1-3.tgbyte.de> ESTABLISH SSH CONNECTION FOR USER: schulung\n<training-1-3.tgbyte.de> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=schulung -o ConnectTimeout=10 -o ControlPath=/home/tg/.ansible/cp/ansible-ssh-%h-%p-%r -tt training-1-3.tgbyte.de 'rm -f -r /home/schulung/.ansible/tmp/ansible-tmp-1464162877.98-68166976443146/ > /dev/null 2>&1'\ntraining-1-3.tgbyte.de | SUCCESS | rc=0 >>\n\n\n<training-1-2.tgbyte.de> ESTABLISH SSH CONNECTION FOR USER: schulung\n<training-1-2.tgbyte.de> SSH: EXEC ssh -C -vvv -o 
ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=schulung -o ConnectTimeout=10 -o ControlPath=/home/tg/.ansible/cp/ansible-ssh-%h-%p-%r -tt training-1-2.tgbyte.de 'rm -f -r /home/schulung/.ansible/tmp/ansible-tmp-1464162877.97-74424695929592/ > /dev/null 2>&1'\ntraining-1-2.tgbyte.de | SUCCESS | rc=0 >>\n```\n", "pr_html_url": "https://github.com/ansible/ansible/pull/59935", "file_loc": {"base_commit": "a01ee2759d309f8433aefbdaf477903fe0156639", "files": [{"path": "lib/ansible/plugins/callback/minimal.py", "status": "modified", "Loc": {"('CallbackModule', 'v2_runner_on_ok', 53)": {"mod": [65]}}}, {"path": "lib/ansible/plugins/callback/oneline.py", "status": "modified", "Loc": {"('CallbackModule', 'v2_runner_on_ok', 58)": {"mod": [67]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["lib/ansible/plugins/callback/minimal.py", "lib/ansible/plugins/callback/oneline.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ansible", "repo_name": "ansible", "base_commit": "a28709f92ddd62138f59967aa1bce319ffacf576", "iss_has_pr": 1, "iss_html_url": "https://github.com/ansible/ansible/issues/81018", "iss_label": "module\nbug\nhas_pr\nP3\nverified\naffects_2.14", "title": "dnf module : gcc-toolset-12-binutils package does not gets updated", "body": "### Summary\r\n\r\ndnf module : gcc-toolset-12-binutils package does not gets updated using the ansible playbook using the dnf module.\r\nrest all the packages gets updated . 
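The dnf report that begins here hinges on one decision: is the installed package already at least as new as the best available candidate? Real code relies on rpm's EVR comparison (e.g. rpm.labelCompare) and handles epochs and mixed alphanumeric segments properly; the pure-Python sketch below, with invented helper names, is only a naive illustration of that comparison before the reproduction details that follow.

```python
def split_evr(evr):
    """Split 'epoch:version-release' into naively comparable parts."""
    if ":" in evr:
        epoch, _, rest = evr.partition(":")
    else:
        epoch, rest = "0", evr
    version, _, release = rest.partition("-")
    return int(epoch), version.split("."), release.split(".")

def is_newer_or_equal_installed(installed, candidate):
    # True when no update is needed under this simplified ordering
    # (string segments compare lexicographically, unlike real rpm rules).
    return split_evr(installed) >= split_evr(candidate)

print(is_newer_or_equal_installed("2.38-16.el8", "2.38-17.el8"))  # False -> update needed
print(is_newer_or_equal_installed("2.38-17.el8", "2.38-17.el8"))  # True -> already current
```

In the report's case the installed 2.38-16.el8 is older than the available 2.38-17.el8, so an update should be scheduled.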
Tried and tested using the below ansible playbook.\r\n```\r\n- hosts: localhost\r\n tasks:\r\n - name: update wget\r\n dnf:\r\n name: httpd,gcc-toolset-12-binutils\r\n state: latest\r\n update_cache: yes\r\n update_only: yes\r\n```\r\n\r\n\r\n### Issue Type\r\n\r\nBug Report\r\n\r\n### Component Name\r\n\r\ndnf\r\n\r\n### Ansible Version\r\n\r\n```console\r\n$ ansible --version\r\n\r\n# rpm -qa | grep ansible-core\r\nansible-core-2.14.2-3.el8.x86_64\r\n\r\n# ansible --version\r\nansible [core 2.14.2]\r\n config file = /etc/ansible/ansible.cfg\r\n configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']\r\n ansible python module location = /usr/lib/python3.11/site-packages/ansible\r\n ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections\r\n executable location = /usr/bin/ansible\r\n python version = 3.11.2 (main, Feb 17 2023, 09:28:16) [GCC 8.5.0 20210514 (Red Hat 8.5.0-18)] (/usr/bin/python3.11)\r\n jinja version = 3.1.2\r\n libyaml = True\r\n```\r\n\r\n\r\n### Configuration\r\n\r\n```console\r\n# if using a version older than ansible-core 2.12 you should omit the '-t all'\r\n$ ansible-config dump --only-changed -t all\r\n```\r\n\r\n\r\n### OS / Environment\r\n\r\n# cat /etc/redhat-release \r\nRed Hat Enterprise Linux release 8.8 (Ootpa)\r\n\r\n\r\n### Steps to Reproduce\r\n\r\n<!--- Paste example playbooks or commands between quotes below -->\r\n\r\n```yaml (paste below)\r\n- hosts: localhost\r\n tasks:\r\n - name: update wget\r\n dnf:\r\n name: httpd,gcc-toolset-12-binutils\r\n state: latest\r\n update_cache: yes\r\n update_only: yes\r\n```\r\n\r\n\r\n### Expected Results\r\n\r\nBoth the packages httpd,gcc-toolset-12-binutils should be updated to the latest, which is not the case with the gcc-toolset-12-binutils package.
It does not get updated.\r\n\r\n### Actual Results\r\n\r\n```console\r\n# rpm -qa | grep gcc-toolset-12-binutils\r\ngcc-toolset-12-binutils-gold-2.38-17.el8.x86_64\r\ngcc-toolset-12-binutils-2.38-16.el8.x86_64\r\n\r\n[root@rhel84 ~]# rpm -qa | grep httpd\r\nhttpd-2.4.37-56.module+el8.8.0+18758+b3a9c8da.6.x86_64\r\nhttpd-tools-2.4.37-56.module+el8.8.0+18758+b3a9c8da.6.x86_64\r\nredhat-logos-httpd-84.5-1.el8.noarch\r\nhttpd-filesystem-2.4.37-56.module+el8.8.0+18758+b3a9c8da.6.noarch\r\n\r\n\r\nLatest package of gcc-toolset-12-binutils is available.\r\n\r\n~~~\r\n# yum list gcc-toolset-12-binutils\r\n\r\nUpdating Subscription Management repositories.\r\nLast metadata expiration check: 0:14:40 ago on Sat 10 Jun 2023 02:25:31 AM IST.\r\nInstalled Packages\r\ngcc-toolset-12-binutils.x86_64 2.38-16.el8 @rhel-8-for-x86_64-appstream-rpms\r\nAvailable Packages\r\ngcc-toolset-12-binutils.x86_64 2.38-17.el8 rhel-8-for-x86_64-appstream-rpms \r\n~~~\r\n```\r\n\r\n\r\n### Code of Conduct\r\n\r\n- [X] I agree to follow the Ansible Code of Conduct", "pr_html_url": "https://github.com/ansible/ansible/pull/82725", "file_loc": {"base_commit": "a28709f92ddd62138f59967aa1bce319ffacf576", "files": [{"path": "lib/ansible/modules/dnf.py", "status": "modified", "Loc": {"('DnfModule', '_is_newer_version_installed', 832)": {"add": [852], "mod": [833, 834, 835, 836, 837, 839, 840, 841, 842, 844, 845, 846, 847, 848, 850, 851]}, "(None, None, None)": {"mod": [390]}, "('DnfModule', None, 410)": {"mod": [482, 483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 497, 498, 499, 500, 502, 503, 504, 505, 506, 508, 509, 511, 512, 513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 531, 532, 534, 535, 537, 538, 540, 541, 542, 543, 544, 545, 547, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563, 564, 565, 566, 567, 568]}, "('DnfModule', '_ensure_dnf', 570)": {"mod": [578]}, "('DnfModule', '_is_installed', 815)": {"mod": [816, 818, 819, 820, 821, 823, 824, 825, 826, 827, 828, 830]}, "('DnfModule', '_install_remote_rpms', 983)": {"mod": [1003]}}}, {"path": "lib/ansible/modules/dnf5.py", "status": "modified", "Loc": {"(None, 'is_newer_version_installed', 366)": {"mod": [377, 378, 379, 380, 382, 384, 385, 386, 387, 389]}, "('Dnf5Module', 'run', 462)": {"mod": [607, 608, 609, 610, 611, 612, 613]}}}, {"path": "test/integration/targets/dnf/tasks/repo.yml", "status": "modified", "Loc": {"(None, None, None)": {"add": [469]}}}, {"path": "test/integration/targets/setup_rpm_repo/library/create_repo.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [51]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["lib/ansible/modules/dnf5.py", "lib/ansible/modules/dnf.py", "test/integration/targets/setup_rpm_repo/library/create_repo.py"], "doc": [], "test": [], "config": ["test/integration/targets/dnf/tasks/repo.yml"], "asset": []}}, {"organization": "ansible", "repo_name": "ansible", "base_commit": "2f75662a474b96ce377fdba15cc139d1ac25a138", "iss_has_pr": 1, "iss_html_url": "https://github.com/ansible/ansible/issues/6765", "iss_label": "mysql", "title": "Bug report: mysql_db does not fail when using import and bz2 or gz", "body": "##### Issue Type:\n\nBug Report\n##### Ansible Version:\n\nansible 1.6\n\nBug was introduced https://github.com/ansible/ansible/pull/4307\n##### Environment:\n\nN/A applies
to all \n##### Summary:\n\nWhen using state=import, and the target= ends with .bz2 or .gz, it will succeed even when the bunzip2 or gunzip command fails. If the target does not exist, it succeeds. If the target exists but is not actually a zipped up file, it still succeeds. The module should fail if the bunzip2 or gunzip commands fail.\n##### Steps To Reproduce:\n\nansible -i hosts realhostname -m mysql_db -a \"name=test target=/backup/test.sql.gz state=import\"\n\nand the target does not exist, or is not really zipped up.\n##### Expected Results:\n\nI expect the module to return back fail with the stderr of the bunzip2 or gunzip command. \n##### Actual Results:\n\nIt returns ok as in the entire thing succeeded (when indeed it did not)\n", "pr_html_url": "https://github.com/ansible/ansible/pull/6766", "file_loc": {"base_commit": "2f75662a474b96ce377fdba15cc139d1ac25a138", "files": [{"path": "library/database/mysql_db", "status": "modified", "Loc": {"(None, None, None)": {"mod": [151, 153]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["library/database/mysql_db"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ansible", "repo_name": "ansible", "base_commit": "2b723c6130f7d7887ba13cf5623bd49c39150bbf", "iss_has_pr": 1, "iss_html_url": "https://github.com/ansible/ansible/issues/10840", "iss_label": "cloud\naws\naffects_2.0\naffects_2.3\nc:inventory/contrib_script\ndocs", "title": "EC2 inventory script (ec2.py) needs better error messages & guidance", "body": "Tried running ec2.py/ec2.ini \"out of the box\" with all the proper boto configuration in place.\nGot error \"Forbidden\" and nothing else - obviously not helpful in tracking down the issue.\n\nAfter I hacked the script and added some additional error printing to the script I got:\n\n```\n<Code>OptInRequired</Code>\n<Message>The AWS Access Key Id needs a subscription for the service</Message>\n```\n\nStill it wasn't clear what the problem was and where to go to fix it.\nEventually I guessed lucky and set rds = False in ec2.ini and this worked.\n\nSuggestions:\n- rds should be defaulted to 'False' especially since script fails cryptically for users not signed up to rds\n- Error message should indicate which part of the script failed (rds, ec2, etc)\n- Error message should ideally suggest a solution (i.e. 
set rds = False if you're not signed up to rds)\n- Script should provide fuller error message not just \"Forbidden\"\n", "pr_html_url": "https://github.com/ansible/ansible/pull/11006", "file_loc": {"base_commit": "2b723c6130f7d7887ba13cf5623bd49c39150bbf", "files": [{"path": "contrib/inventory/ec2.py", "status": "modified", "Loc": {"('Ec2Inventory', 'fail_with_error', 517)": {"add": [518]}, "('Ec2Inventory', 'get_instances_by_region', 386)": {"mod": [409]}, "('Ec2Inventory', 'get_rds_instances_by_region', 411)": {"mod": [428]}, "('Ec2Inventory', 'get_elasticache_clusters_by_region', 430)": {"mod": [451, 461]}, "('Ec2Inventory', 'get_elasticache_replication_groups_by_region', 466)": {"mod": [485, 495]}, "('Ec2Inventory', None, 137)": {"mod": [517]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["contrib/inventory/ec2.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ansible", "repo_name": "ansible", "base_commit": "ff5253fa0efacf5192b6d0f8b41b27a3033d7897", "iss_has_pr": 1, "iss_html_url": "https://github.com/ansible/ansible/issues/65815", "iss_label": "cloud\npython3\nmodule\ndocker\nsupport:community\nbug\nhas_pr\naffects_2.9", "title": "docker_network with multiple subnets always changes", "body": "<!--- Verify first that your issue is not already reported on GitHub -->\r\n<!--- Also test if the latest release and devel branch are affected too -->\r\n<!--- Complete *all* sections as described, this form is processed automatically -->\r\n\r\n##### SUMMARY\r\n<!--- Explain the problem briefly below -->\r\nWhen using `docker_network` to create a network with multiple subnets, the task will delete/create the network even if it already exists with the correct subnets. Ansible fails to judge if the existing subnets are correct, probably because of the way the arrays of subnets are compared in python.\r\n\r\n##### ISSUE TYPE\r\n- Bug Report\r\n\r\n##### COMPONENT NAME\r\n<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->\r\ndocker_network\r\n\r\n##### ANSIBLE VERSION\r\n<!--- Paste verbatim output from \"ansible --version\" between quotes -->\r\n```paste below\r\nansible 2.9.2\r\n config file = /etc/ansible/ansible.cfg\r\n configured module search path = ['/home/gunix/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']\r\n ansible python module location = /usr/lib/python3.8/site-packages/ansible\r\n executable location = /usr/bin/ansible\r\n python version = 3.8.0 (default, Oct 23 2019, 18:51:26) [GCC 9.2.0]\r\n```\r\n\r\n##### CONFIGURATION\r\n<!--- Paste verbatim output from \"ansible-config dump --only-changed\" between quotes -->\r\n```paste below\r\nANSIBLE_PIPELINING(/etc/ansible/ansible.cfg) = True\r\nDEFAULT_LOG_PATH(/etc/ansible/ansible.cfg) = /var/log/ansible/ansible.log\r\nHOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = True\r\nINTERPRETER_PYTHON(/etc/ansible/ansible.cfg) = /usr/bin/python3\r\n```\r\n\r\n##### OS / ENVIRONMENT\r\n<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. 
-->\r\nBoth systems are running ArchLinux.\r\n\r\n\r\n##### STEPS TO REPRODUCE\r\n<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->\r\n\r\n<!--- Paste example playbooks or commands between quotes below -->\r\n```yaml\r\n- name: \"deploy network namespace that can hold all IPs\"\r\n docker_network:\r\n name: \"macvlan1\"\r\n driver: \"macvlan\"\r\n internal: false\r\n driver_options:\r\n parent: \"{{ ansible_default_ipv4.alias }}\"\r\n ipam_config: \"{{ macvlan_subnets }}\"\r\n```\r\nalso vars:\r\n```\r\nmacvlan_subnets:\r\n- gateway: 10.162.208.1\r\n subnet: 10.162.208.0/24\r\n- gateway: 10.162.223.1\r\n subnet: 10.162.223.0/24\r\n- gateway: 10.162.210.1\r\n subnet: 10.162.210.0/24\r\n```\r\n\r\n<!--- HINT: You can paste gist.github.com links for larger files -->\r\n\r\n##### EXPECTED RESULTS\r\n<!--- Describe what you expected to happen when running the steps above -->\r\nI was expecting to run the play 10 times and get Changed only on the first run and OK on the other 9 runs.\r\n\r\n##### ACTUAL RESULTS\r\n<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->\r\nThe docker network ALWAYS changes, even if the subnets are correct on the server, causing all docker containers on the network to disconnect. This will cause downtime for all the services that run on the node.\r\n\r\n<!--- Paste verbatim command output between quotes -->\r\n```paste below\r\nTASK [gen4 : deploy network namespace that can hold all IPs] ****************************************************************\r\n--- before\r\n+++ after\r\n@@ -1,19 +1,19 @@\r\n {\r\n- \"connected.10.162.208.129\": false,\r\n- \"connected.10.162.210.161\": false,\r\n- \"connected.10.162.210.169\": false,\r\n- \"connected.10.162.210.170\": false,\r\n- \"connected.10.162.210.171\": false,\r\n- \"connected.10.162.210.172\": false,\r\n- \"connected.10.162.210.173\": false,\r\n- \"connected.10.162.223.72\": false,\r\n- \"connected.10.162.223.73\": false,\r\n- \"connected.10.162.223.74\": false,\r\n- \"connected.10.162.223.75\": false,\r\n- \"connected.10.162.223.76\": false,\r\n+ \"connected.10.162.208.129\": true,\r\n+ \"connected.10.162.210.161\": true,\r\n+ \"connected.10.162.210.169\": true,\r\n+ \"connected.10.162.210.170\": true,\r\n+ \"connected.10.162.210.171\": true,\r\n+ \"connected.10.162.210.172\": true,\r\n+ \"connected.10.162.210.173\": true,\r\n+ \"connected.10.162.223.72\": true,\r\n+ \"connected.10.162.223.73\": true,\r\n+ \"connected.10.162.223.74\": true,\r\n+ \"connected.10.162.223.75\": true,\r\n+ \"connected.10.162.223.76\": true,\r\n \"exists\": true,\r\n- \"ipam_config[0].gateway\": \"10.162.210.1\",\r\n- \"ipam_config[0].subnet\": \"10.162.210.0/24\",\r\n- \"ipam_config[1].gateway\": \"10.162.210.1\",\r\n- \"ipam_config[1].subnet\": \"10.162.210.0/24\"\r\n+ \"ipam_config[0].gateway\": \"10.162.208.1\",\r\n+ \"ipam_config[0].subnet\": \"10.162.208.0/24\",\r\n+ \"ipam_config[1].gateway\": \"10.162.223.1\",\r\n+ \"ipam_config[1].subnet\": \"10.162.223.0/24\"\r\n }\r\n\r\nchanged: [server1337.gun1x]\r\n```\r\n", "pr_html_url": "https://github.com/ansible/ansible/pull/65839", "file_loc": {"base_commit": "ff5253fa0efacf5192b6d0f8b41b27a3033d7897", "files": [{"path": "lib/ansible/modules/cloud/docker/docker_network.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [367]}, "('DockerNetworkManager', '__init__', 370)": {"add": [390]}, "('DockerNetworkManager', 'has_different_config', 408)": {"add": [451], "mod": [454, 455, 456, 457, 458, 459, 460, 467, 
468, 469, 470, 471, 472, 475]}, "(None, 'get_ip_version', 338)": {"mod": [338, 339]}, "(None, 'normalize_ipam_config_key', 354)": {"mod": [355]}}}, {"path": "test/integration/targets/docker_network/tasks/tests/ipam.yml", "status": "modified", "Loc": {"(None, None, None)": {"mod": [14, 101, 172, 233, 282]}}}, {"path": "test/units/modules/cloud/docker/test_docker_network.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [8]}, "(None, 'test_get_ip_version_positives', 18)": {"mod": [18, 19]}, "(None, 'test_get_ip_version_negatives', 28)": {"mod": [28, 30]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["lib/ansible/modules/cloud/docker/docker_network.py"], "doc": ["test/integration/targets/docker_network/tasks/tests/ipam.yml"], "test": ["test/units/modules/cloud/docker/test_docker_network.py"], "config": [], "asset": []}}, {"organization": "ansible", "repo_name": "ansible", "base_commit": "9de4f24d7ac3a205cdc723402f78d03a1fc961f8", "iss_has_pr": 1, "iss_html_url": "https://github.com/ansible/ansible/issues/75675", "iss_label": "support:core\ndocs\ndocsite\naffects_2.12\ndocs_only\nhackathon", "title": "Docs: Use code-block elements to format code examples: Community Guide ", "body": "### Summary\r\n\r\n**Problem**:\r\nThroughout the Ansible docs, there are instances where example code is preceded with a lead-in sentence ending in `::`.\r\n\r\n**Solution:**\r\nEnclose code in a `.. code-block:: <lexer>` element, so that translation processes know to skip this content.\r\nFor a list of allowed values for _`<lexer>`_ , refer to [Syntax highlighting - Pygments](https://docs.ansible.com/ansible/latest/dev_guide/style_guide/index.html#syntax-highlighting-pygments).\r\n\r\n**Scope:**\r\nIn the Community Guide, there is 1 instance of a lead-in sentence ending with `::`. Use the following `grep` command to identify the files and line numbers:\r\n```\r\n$ grep -rn --include \"*.rst\" \"^[[:blank:]]*[^[:blank:]\\.\\.].*::$\" . `\r\n```\r\n\r\n**Example:**\r\n\r\nBefore:\r\n```\r\n* If the file has a unique title, use that for the main page anchor::\r\n\r\n .. _unique_page::\r\n\r\n```\r\n\r\nAfter:\r\n```\r\n* If the file has a unique title, use that for the main page anchor.\r\n\r\n .. code-block:: rst\r\n \r\n .. 
_unique_page::\r\n\r\n```\r\n\r\n### Issue Type\r\n\r\nDocumentation Report\r\n\r\n### Component Name\r\n\r\ndocs/docsite/rst/dev_guide\r\n\r\n### Ansible Version\r\n\r\n```console\r\nn/a\r\n```\r\n\r\n\r\n### Configuration\r\n\r\n```console\r\nn/a\r\n```\r\n\r\n\r\n### OS / Environment\r\n\r\nn/a\r\n\r\n### Additional Information\r\n\r\nWhen example code is enclosed within a `code-block` element, translation programs do not attempt to translate the code.\r\n\r\n### Code of Conduct\r\n\r\n- [X] I agree to follow the Ansible Code of Conduct", "pr_html_url": "https://github.com/ansible/ansible/pull/75847", "file_loc": {"base_commit": "9de4f24d7ac3a205cdc723402f78d03a1fc961f8", "files": [{"path": "docs/docsite/rst/community/communication.rst", "status": "modified", "Loc": {"(None, None, None)": {"mod": [74]}}}, {"path": "docs/docsite/rst/community/development_process.rst", "status": "modified", "Loc": {"(None, None, None)": {"mod": [316, 323, 331]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": ["docs/docsite/rst/community/communication.rst", "docs/docsite/rst/community/development_process.rst"], "test": [], "config": [], "asset": []}}, {"organization": "ansible", "repo_name": "ansible", "base_commit": "ea1639e633fffac8a9db4b8b00ff8aaa4a23dadb", "iss_has_pr": 1, "iss_html_url": "https://github.com/ansible/ansible/issues/52316", "iss_label": "windows\nsupport:core\ndocs\naffects_2.8", "title": "Windows FAQ should mention possible SSL protocol issue", "body": "<!--- Verify first that your improvement is not already reported on GitHub -->\r\n<!--- Also test if the latest release and devel branch are affected too -->\r\n<!--- Complete *all* sections as described, this form is processed automatically -->\r\n\r\n##### SUMMARY\r\n<!--- Explain the problem briefly below, add suggestions to wording or structure -->\r\nTLS 1.0 is by default the maximum TLS supported version on Windows 7. However, Linux distributions (at least Debian) begin to disable it to allow TLS 1.2 as a minimum. Thus by default connection fails with this message:\r\n\r\n`ntlm: HTTPSConnectionPool(host='my-host', port=5986): Max retries exceeded with url: /wsman (Caused by SSLError(SSLError(1, '[SSL: UNSUPPORTED_PROTOCOL] unsupported protocol (_ssl.c:1056)')))\r\n`\r\nCould you explain this issue on https://docs.ansible.com/ansible/latest/user_guide/windows_faq.html and add the possible workarounds (enable TLS 1.2 on Windows 7 target / temporary re-enable TLS 1.0 on controller) that are well described on the original discussion on https://groups.google.com/forum/#!msg/ansible-project/CCjQTWSAt4I/mHsdpJGUAwAJ ?\r\n\r\n<!--- HINT: Did you know the documentation has an \"Edit on GitHub\" link on every page ? -->\r\n\r\n##### ISSUE TYPE\r\n- Documentation Report\r\n\r\n##### COMPONENT NAME\r\n<!--- Write the short name of the rst file, module, plugin, task or feature below, use your best guess if unsure -->\r\nwindows_faq.rst\r\n\r\n##### ANSIBLE VERSION\r\n<!--- Paste verbatim output from \"ansible --version\" between quotes -->\r\n```paste below\r\n\r\n```\r\n\r\n##### CONFIGURATION\r\n<!--- Paste verbatim output from \"ansible-config dump --only-changed\" between quotes -->\r\n```paste below\r\n\r\n```\r\n\r\n##### OS / ENVIRONMENT\r\n<!--- Provide all relevant information below, e.g. OS version, browser, etc. 
-->\r\nDebian testing with openssl 1.1.1a-1.\r\n\r\n##### ADDITIONAL INFORMATION\r\n<!--- Describe how this improves the documentation, e.g. before/after situation or screenshots -->\r\n\r\nWindows 7 is probably still a common target, and Debian Buster (the next stable, probably available in the summer) will probably be a common controller, so this issue should be briefly explained in the documentation.\r\n\r\nRegards,\r\nYvan\r\n\r\n<!--- HINT: You can paste gist.github.com links for larger files -->\r\n", "pr_html_url": "https://github.com/ansible/ansible/pull/54016", "file_loc": {"base_commit": "ea1639e633fffac8a9db4b8b00ff8aaa4a23dadb", "files": [{"path": "docs/docsite/rst/user_guide/windows_winrm.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [751], "mod": [505, 506, 507, 509, 510, 512, 514, 515, 516, 517, 519, 520, 521, 522]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": ["docs/docsite/rst/user_guide/windows_winrm.rst"], "test": [], "config": [], "asset": []}}, {"organization": "ansible", "repo_name": "ansible", "base_commit": "a6d4c3ff7cf43c24be6622102cee834fc5096496", "iss_has_pr": 1, "iss_html_url": "https://github.com/ansible/ansible/issues/78600", "iss_label": "easyfix\nsupport:core\nhas_pr\ndocs\naffects_2.13", "title": "scp_if_ssh not working as intended with OpenSSH since version 9.0", "body": "### Summary\r\n\r\nThe option `scp_if_ssh = true` is used to force Ansible to use scp instead of sftp on targets that don't support sftp. However, since OpenSSH 9.0 (8.8 on Arch Linux, it seems) even the scp utility defaults to using sftp. The old behavior can be enabled by additionally setting `scp_extra_args = \"-O\"` to force scp to use the old protocol.\r\n\r\nI recognize that this is not an Ansible bug, but it may break documented and expected behavior.\r\n\r\nOpenSSH Changelog: https://www.openssh.com/txt/release-9.0\r\n> This release switches scp(1) from using the legacy scp/rcp protocol to using the SFTP protocol by default.\r\n\r\n### Issue Type\r\n\r\n~Bug Report~\r\nDocumentation Report\r\n\r\n### Component Name\r\n\r\nconnection, ssh, scp\r\n\r\n### Ansible Version\r\n\r\n```console\r\nansible [core 2.13.2]\r\n config file = None\r\n configured module search path = ['/home/ansible/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']\r\n ansible python module location = /usr/local/lib/python3.9/dist-packages/ansible\r\n ansible collection location = /home/ansible/.ansible/collections:/usr/share/ansible/collections\r\n executable location = /usr/local/bin/ansible\r\n python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]\r\n jinja version = 3.1.2\r\n libyaml = True\r\n```\r\n\r\n\r\n### Configuration\r\n\r\n```console\r\nCONNECTION:\r\n==========\r\n\r\nssh:\r\n___\r\nscp_extra_args(env: ANSIBLE_SCP_EXTRA_ARGS) = -O\r\nscp_if_ssh(env: ANSIBLE_SCP_IF_SSH) = true\r\n```\r\n\r\n\r\n### OS / Environment\r\n\r\nDebian Sid\r\n\r\n### Steps to Reproduce\r\n\r\nconfigure sshd to not offer sftp (e.g. 
delete `Subsystem sftp /usr/lib/ssh/sftp-server` from `/etc/ssh/sshd_config` and restart)\r\n\r\ncreate a small example playbook; the contents are irrelevant\r\n<!--- Paste example playbooks or commands between quotes below -->\r\n```yaml\r\n- hosts: localhost\r\n gather_facts: true\r\n remote_user: root\r\n tasks:\r\n - name: install a nonexistant package\r\n package:\r\n name:\r\n - less-is-more\r\n```\r\n\r\nexecute with the Ansible configuration or environment setting to use scp:\r\n```\r\nexport ANSIBLE_SCP_IF_SSH=false\r\nansible-playbook -c ssh playbook.yml\r\n```\r\n\r\n### Expected Results\r\n\r\n```\r\nansible@instance:~$ ansible-playbook -c ssh playbook.yml\r\n\r\nPLAY [localhost] ***************************************************************************************************\r\n\r\nTASK [Gathering Facts] *********************************************************************************************\r\nok: [localhost]\r\n\r\nTASK [install a nonexistant package] *******************************************************************************\r\nfatal: [localhost]: FAILED! => {\"changed\": false, \"msg\": \"No package matching 'less-is-more' is available\"}\r\n\r\nPLAY RECAP *********************************************************************************************************\r\nlocalhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0\r\n```\r\n\r\n### Actual Results\r\n\r\n```console\r\nwith only `scp_if_ssh`:\r\n\r\n\r\nansible@instance:~$ ansible-playbook -c ssh playbook.yml\r\n\r\nPLAY [localhost] ***************************************************************************************************************************************\r\n\r\nTASK [Gathering Facts] *********************************************************************************************************************************\r\nfatal: [localhost]: UNREACHABLE! => {\"changed\": false, \"msg\": \"Failed to connect to the host via scp: scp: Connection closed\\r\\n\", \"unreachable\": true}\r\n\r\nPLAY RECAP *********************************************************************************************************************************************\r\nlocalhost : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0\r\n```\r\n\r\nwith the additional setting to add `-O` to scp (working correctly):\r\n```\r\nansible@instance:~$ export ANSIBLE_SCP_EXTRA_ARGS=\"-O\"\r\nansible@instance:~$ ansible-playbook -c ssh playbook.yml\r\n\r\nPLAY [localhost] ***************************************************************************************************\r\n\r\nTASK [Gathering Facts] *********************************************************************************************\r\nok: [localhost]\r\n\r\nTASK [install a nonexistant package] *******************************************************************************\r\nfatal: [localhost]: FAILED! 
=> {\"changed\": false, \"msg\": \"No package matching 'less-is-more' is available\"}\r\n\r\nPLAY RECAP *********************************************************************************************************\r\nlocalhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0\r\n```\r\n```\r\n\r\n\r\n### Code of Conduct\r\n\r\n- [X] I agree to follow the Ansible Code of Conduct", "pr_html_url": "https://github.com/ansible/ansible/pull/78745", "file_loc": {"base_commit": "a6d4c3ff7cf43c24be6622102cee834fc5096496", "files": [{"path": "lib/ansible/plugins/connection/ssh.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [294, 312]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["lib/ansible/plugins/connection/ssh.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ansible", "repo_name": "ansible", "base_commit": "4c5a6d9d44f81d88cca2a9f13966af326bed4b64", "iss_has_pr": 1, "iss_html_url": "https://github.com/ansible/ansible/issues/23078", "iss_label": "affects_2.4\nsupport:core\nbug", "title": "Jinja filters output trailing whitespace breaking idempotency", "body": "<!---\r\nVerify first that your issue/request is not already reported on GitHub.\r\nAlso test if the latest release, and master branch are affected too.\r\n-->\r\n\r\n##### ISSUE TYPE\r\n<!--- Pick one below and delete the rest: -->\r\n - Bug Report\r\n\r\n##### COMPONENT NAME\r\n`lib/ansible/parsing/yaml/dumper.py:AnsibleDumper`\r\n\r\n##### ANSIBLE VERSION\r\n<!--- Paste verbatim output from \u201cansible --version\u201d between quotes below -->\r\n```\r\nansible 2.4.0 (devel 6c101087ac) last updated 2017/03/29 16:09:54 (GMT +200)\r\n config file = \r\n configured module search path = [u'/home/user/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']\r\n python version = 2.7.9 (default, Jun 29 2016, 13:08:31) [GCC 4.9.2]\r\n\r\n[and]\r\nansible 2.2.2.0 (stable-2.2 2273800f7c) last updated 2017/03/29 10:54:33 (GMT +200)\r\n lib/ansible/modules/core: (detached HEAD 31a1f19cd8) last updated 2017/03/29 16:07:02 (GMT +200)\r\n lib/ansible/modules/extras: (detached HEAD 921bc0d464) last updated 2017/03/29 14:42:48 (GMT +200)\r\n```\r\n\r\n##### CONFIGURATION\r\nAnsible default. No changes.\r\n\r\n\r\n##### OS / ENVIRONMENT\r\nIsolated Debian Jessie VM on Qubes OS setup only for testing with Ansible devel.\r\n\r\n##### SUMMARY\r\nThe `to_nice_json` filter and others like `indent` output trailing whitespace. That is not in itself a problem (although bad style). But in the case of `to_nice_json` it becomes a problem because it is potentially not idempotent which breaks CI which test for this property (e.g. 
DebOps).\r\n\r\nAlso note that the task itself outputs trailing whitespace (select the task output below).\r\n\r\n##### STEPS TO REPRODUCE\r\n<!---\r\nFor bugs, show exactly how to reproduce the problem, using a minimal test-case.\r\nFor new features, show how the feature would be used.\r\n-->\r\n\r\n<!--- Paste example playbooks or commands between quotes below -->\r\n```yaml\r\n---\r\n\r\n- hosts: localhost\r\n vars:\r\n input:\r\n - test: True\r\n test2:\r\n - 23\r\n - test: True\r\n\r\n tasks:\r\n\r\n - name: Jinja2 templating outputting trailing spaces which change depending\r\n debug:\r\n msg: \"{{ (input | to_nice_json).split('\\n') }}\"\r\n\r\n # Workaround is part of https://github.com/debops/debops-playbooks/blob/master/templates/debops__tpl_macros.j2\r\n - name: Clean Jinja2 templating using workaround\r\n debug:\r\n msg: \"{{ (input | to_nice_json | regex_replace(\\\"[ \\\\t\\\\r\\\\f\\\\v]+(\\\\n|$)\\\", \\\"\\\\1\\\")).split('\\n') }}\"\r\n```\r\n\r\n<!--- You can also paste gist.github.com links for larger files -->\r\n\r\n##### EXPECTED RESULTS\r\n<!--- What did you expect to happen when running the steps above? -->\r\n\r\n```\r\nTASK [Clean Jinja2 templating using workaround]\r\nok: [localhost] => {\r\n \"changed\": false, \r\n \"msg\": [\r\n \"[\", \r\n \" {\", \r\n \" \\\"test\\\": true,\", \r\n \" \\\"test2\\\": [\", \r\n \" 23\", \r\n \" ]\", \r\n \" },\", \r\n \" {\", \r\n \" \\\"test\\\": true\", \r\n \" }\", \r\n \"]\"\r\n ]\r\n}\r\n```\r\n\r\nExample role in CI: https://travis-ci.org/debops/ansible-apt/builds/216374820\r\n\r\n##### ACTUAL RESULTS\r\n<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->\r\n\r\n<!--- Paste verbatim command output between quotes below -->\r\n```\r\nTASK [Jinja2 template outputting trailing spaces which change depending on next element]\r\nok: [localhost] => {\r\n \"changed\": false, \r\n \"msg\": [\r\n \"[\", \r\n \" {\", \r\n \" \\\"test\\\": true, \", \r\n \" \\\"test2\\\": [\", \r\n \" 23\", \r\n \" ]\", \r\n \" }, \", \r\n \" {\", \r\n \" \\\"test\\\": true\", \r\n \" }\", \r\n \"]\"\r\n ]\r\n}\r\n```\r\n\r\nExample role in CI: https://travis-ci.org/debops/ansible-apt/builds/216355310#L824-L825\r\n", "pr_html_url": "https://github.com/ansible/ansible/pull/42633", "file_loc": {"base_commit": "4c5a6d9d44f81d88cca2a9f13966af326bed4b64", "files": [{"path": "lib/ansible/plugins/filter/core.py", "status": "modified", "Loc": {"(None, 'to_nice_json', 87)": {"mod": [90]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["lib/ansible/plugins/filter/core.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "d5ca8ca34e6a63978f368e733c11fad0b6619096", "iss_has_pr": 1, "iss_html_url": "https://github.com/ultralytics/yolov5/issues/2405", "iss_label": "bug", "title": "Can't train in DDP mode after recent update", "body": "## \ud83d\udc1b Bug\r\n\r\nWhen I pull the latest code, I found that DDP training would get stuck in the first few epochs.\r\nI ran some tests to see which commit caused this bug and I found commit `a3ecf0fd640465f9a7c009e81bcc5ecabf381004` on Mar 3 worked well. \r\nBut when I `git checkout` commit `e931b9da33f45551928059b8d61bddd50e401e48` on Mar 4, the bug appeared. 
\r\nAnd the bug still exists in the latest commit.\r\n\r\n\r\n## To Reproduce (REQUIRED)\r\n`python3 -m torch.distributed.launch --nproc_per_node 4 train.py`\r\n\r\nThe training process would get stuck forever unless you terminate it manually.\r\nAnd it still occupies GPU memory unless you kill the process with `kill -9 xxxxx`\r\n\r\n![stuck](https://user-images.githubusercontent.com/5948604/110415966-daa79980-80cd-11eb-8e8d-f7f56c2c9cd5.png)\r\n\r\n\r\n## Expected behavior\r\nRoll back to the older code, and get the expected behavior.\r\n```bash\r\n$ git checkout a3ecf0fd640465f9a7c009e81bcc5ecabf381004\r\n$ python3 -m torch.distributed.launch --nproc_per_node 4 train.py\r\n```\r\n![worked well](https://user-images.githubusercontent.com/5948604/110415987-e004e400-80cd-11eb-8fea-b00a5305459e.png)\r\n\r\n\r\n## Environment\r\nIf applicable, add screenshots to help explain your problem.\r\n\r\n - OS: Ubuntu 20.04\r\n - GPU: 1080 Ti * 4\r\n - Python: 3.8\r\n - pytorch: 1.7.1\r\n - CUDA: 11.1\r\n - Driver: 455.32\r\n\r\n## Additional\r\nIt seems like the latest commit works fine on 2 * 3090; I'm not sure yet, so I will do some further tests on a 3090 or other GPUs.", "pr_html_url": "https://github.com/ultralytics/yolov5/pull/2421", "file_loc": {"base_commit": "d5ca8ca34e6a63978f368e733c11fad0b6619096", "files": [{"path": "train.py", "status": "modified", "Loc": {"(None, 'train', 40)": {"mod": [184, 185, 186, 217]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["train.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "d95978a562bec74eed1d42e370235937ab4e1d7a", "iss_has_pr": 1, "iss_html_url": "https://github.com/ultralytics/yolov5/issues/6153", "iss_label": "enhancement", "title": "Enable AdamW Optimizer", "body": "### Search before asking\r\n\r\n- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar feature requests.\r\n\r\n\r\n### Description\r\n\r\nWhen we use Adam, we have to tune the learning rate along with the batch size.\r\nIt is cumbersome; with [AdamW](https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html), we don't have to re-tune the learning rate even if we change the batch size.\r\nSo, it is nice to be able to use this option.\r\n\r\nI have created a PR to enable the AdamW optimizer. 
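For reference, a minimal sketch of the kind of optimizer switch this enables (assuming a generic PyTorch setup; the helper name and defaults below are hypothetical, not the repo's actual wiring):\r\n\r\n```python\r\nimport torch.optim as optim\r\n\r\ndef build_optimizer(params, name='SGD', lr=0.01, momentum=0.937, weight_decay=0.0005):\r\n    # Hypothetical helper: select the optimizer by name.\r\n    if name == 'AdamW':\r\n        # AdamW decouples weight decay from the gradient update; the motivation\r\n        # above is that the learning rate then needs no re-tuning when the batch size changes.\r\n        return optim.AdamW(params, lr=lr, betas=(momentum, 0.999), weight_decay=weight_decay)\r\n    return optim.SGD(params, lr=lr, momentum=momentum, nesterov=True, weight_decay=weight_decay)\r\n```\r\n\r\n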
Please check it out.\r\n#6152\r\n\r\n### Use case\r\n\r\n_No response_\r\n\r\n### Additional\r\n\r\n_No response_\r\n\r\n### Are you willing to submit a PR?\r\n\r\n- [X] Yes I'd like to help by submitting a PR!", "pr_html_url": "https://github.com/ultralytics/yolov5/pull/6152", "file_loc": {"base_commit": "d95978a562bec74eed1d42e370235937ab4e1d7a", "files": [{"path": "train.py", "status": "modified", "Loc": {"(None, 'train', 58)": {"add": [159], "mod": [158]}, "(None, None, None)": {"mod": [25]}, "(None, 'parse_opt', 442)": {"mod": [463]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["train.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "b2bef8f6d8e4c008bae72c211a186d75732fc213", "iss_has_pr": 1, "iss_html_url": "https://github.com/ultralytics/yolov5/issues/1639", "iss_label": "enhancement", "title": "Promote a new activation function recently developed by Kuangshi technology!!!!", "body": "## \ud83d\ude80 Feature\r\n\r\nReLU and PReLU are extended to 2D activation functions by adding a negligible spatial condition overhead.\r\n\r\n## Motivation\r\n\r\nCan a visual-task-specific activation function be designed?\r\n\r\n## Pitch\r\n\r\nI would like to submit a branch, but because work is too busy, I will just paste the code directly. It can be used as-is.\r\n\r\n## Alternatives\r\n\r\nNone.\r\n\r\n## Additional context\r\n\r\n```python3\r\nimport torch\r\nimport torch.nn as nn\r\nimport torch.nn.functional as F\r\n\r\nclass FReLU(nn.Module):\r\n    r\"\"\" Applies the FReLU function element-wise.\r\n\r\n    `\"Funnel Activation for Visual Recognition\" <https://arxiv.org/pdf/2007.11824.pdf>`_\r\n\r\n    Examples:\r\n        >>> channels = 64\r\n        >>> frelu = FReLU(channels)\r\n        >>> input = torch.randn(1, channels, 64, 64)\r\n        >>> output = frelu(input)\r\n    \"\"\"\r\n\r\n    def __init__(self, channels):\r\n        super().__init__()\r\n        # depthwise 3x3 conv + BN computes the spatial (funnel) condition T(x)\r\n        self.FReLU = nn.Sequential(\r\n            nn.Conv2d(channels, channels, kernel_size=3, stride=1, padding=1, groups=channels, bias=False),\r\n            nn.BatchNorm2d(channels)\r\n        )\r\n\r\n    def forward(self, input: torch.Tensor):\r\n        # element-wise max of the input and its spatial condition\r\n        out = self.FReLU(input)\r\n        return torch.max(input, out)\r\n```\r\nThank you very much for your long-term promotion of Yolo technology. I will submit some code after a while. 
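For anyone who wants to try it before a branch lands, a usage sketch (assuming the `FReLU` class above is in scope; the wiring is illustrative, not this repo's actual integration):\r\n\r\n```python\r\nimport torch\r\nimport torch.nn as nn\r\n\r\n# Hypothetical block: a plain Conv-BN stack with FReLU as the activation.\r\nblock = nn.Sequential(\r\n    nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False),\r\n    nn.BatchNorm2d(64),\r\n    FReLU(64),  # channel count must match the conv output\r\n)\r\n\r\nx = torch.randn(1, 3, 64, 64)\r\nprint(block(x).shape)  # torch.Size([1, 64, 64, 64])\r\n```\r\n\r\n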
Good luck to you!", "pr_html_url": "https://github.com/ultralytics/yolov5/pull/1666", "file_loc": {"base_commit": "b2bef8f6d8e4c008bae72c211a186d75732fc213", "files": [{"path": "utils/activations.py", "status": "modified", "Loc": {"('FReLU', '__init__', 66)": {"mod": [68]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["utils/activations.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "d223460f3a4b4151437b15ac83990cea4b0f42e2", "iss_has_pr": 1, "iss_html_url": "https://github.com/ultralytics/yolov5/issues/11170", "iss_label": "bug", "title": "Class filtering does not work in segmentation code", "body": "### Search before asking\r\n\r\n- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar bug report.\r\n\r\n\r\n### YOLOv5 Component\r\n\r\nTraining\r\n\r\n### Bug\r\n\r\nI tried to filter classes that I train with as explained [here](https://github.com/ultralytics/yolov5/issues/1978). I found out that it works with `train.py` but not with `segment/train.py`.\r\n\r\nI expect that if I change the following line:\r\n\r\n```python\r\ninclude_class = [1] # filter labels to include only these classes (optional)\r\n```\r\n\r\nin `utils/dataloaders.py` line `533`, then in `train.py` and `segment/train.py`\r\n- the code does not crash\r\n- the code trains with only class `1` (if such class exists in the `.yaml` file)\r\n\r\nWhat I get:\r\n- `train.py` -> works as expected\r\n- `segment/train.py` -> crashes:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"segment/train.py\", line 664, in <module>\r\n main(opt)\r\n File \"segment/train.py\", line 555, in main\r\n train(opt.hyp, opt, device, callbacks)\r\n File \"segment/train.py\", line 180, in train\r\n train_loader, dataset = create_dataloader(\r\n File \"yolov5/utils/segment/dataloaders.py\", line 46, in create_dataloader\r\n dataset = LoadImagesAndLabelsAndMasks(\r\n File \"yolov5/utils/segment/dataloaders.py\", line 102, in __init__\r\n super().__init__(path, img_size, batch_size, augment, hyp, rect, image_weights, cache_images, single_cls,\r\n File \"yolov5/utils/dataloaders.py\", line 540, in __init__\r\n self.segments[i] = segment[j]\r\nTypeError: only integer scalar arrays can be converted to a scalar index\r\n```\r\n\r\n### Environment\r\n\r\n- YOLO: yolov5 `3e55763d45f9c5f8217e4dad5ba1e6c1f42e3bf8`\r\n- OS: Ubuntu 20.04\r\n- Python 3.8\r\n\r\n\r\n### Minimal Reproducible Example\r\n\r\n- clone yolov5 repo\r\n- install dependencies with pip\r\n- edit the lines as explained in the `Bug` section\r\n\r\n```\r\npython3 segment/train.py\r\n```\r\n\r\n### Additional\r\n\r\nThere will be a PR showing the fix for this\r\n\r\n### Are you willing to submit a PR?\r\n\r\n- [X] Yes I'd like to help by submitting a PR!", "pr_html_url": "https://github.com/ultralytics/yolov5/pull/11171", "file_loc": {"base_commit": "d223460f3a4b4151437b15ac83990cea4b0f42e2", "files": [{"path": "utils/dataloaders.py", "status": "modified", "Loc": {"('LoadImagesAndLabels', '__init__', 439)": {"add": [533], "mod": [540]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["utils/dataloaders.py"], "doc": 
[], "test": [], "config": [], "asset": []}}, {"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "2da6444c9251f77cfd3e410369cd067245d961b5", "iss_has_pr": 1, "iss_html_url": "https://github.com/ultralytics/yolov5/issues/916", "iss_label": "question\nStale", "title": "premature end of JPEG images", "body": "## \u2754Question\r\n`Epoch gpu_mem GIoU obj cls total targets img_size\r\n 1/99 2.87G 0.05456 0.04197 0 0.09652 10 640: 100% 157/157 [00:52<00:00, 2.98it/s]\r\n Class Images Targets P R mAP@.5 mAP@.5:.95: 0% 0/157 [00:00<?, ?it/s]Premature end of JPEG file\r\n Class Images Targets P R mAP@.5 mAP@.5:.95: 100% 157/157 [00:19<00:00, 8.21it/s]\r\n all 2.5e+03 1e+04 0.362 0.777 0.684 0.338`\r\n\r\nIt shows premature end of JPEG images during validation, what leads to this?\r\n\r\n## Additional context\r\n", "pr_html_url": "https://github.com/ultralytics/yolov5/pull/4548", "file_loc": {"base_commit": "2da6444c9251f77cfd3e410369cd067245d961b5", "files": [{"path": "utils/datasets.py", "status": "modified", "Loc": {"('LoadStreams', '__init__', 280)": {"mod": [317]}, "('LoadImagesAndLabels', '__getitem__', 529)": {"mod": [571]}, "(None, 'verify_image_label', 861)": {"mod": [864, 875, 878, 899]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["utils/datasets.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "5afc9c25ef0874dff0c18267947ea4e8b03c90f4", "iss_has_pr": 1, "iss_html_url": "https://github.com/ultralytics/yolov5/issues/5040", "iss_label": "bug", "title": "Error caused by emoji in comments in yolov5/data/hyps/*.yaml file", "body": "Before submitting a bug report, please be aware that your issue **must be reproducible** with all of the following,\r\notherwise it is non-actionable, and we can not help you:\r\n\r\n- **Current repo**: run `git fetch && git status -uno` to check and `git pull` to update repo\r\n- **Common dataset**: coco.yaml or coco128.yaml\r\n- **Common environment**: Colab, Google Cloud, or Docker image. See https://github.com/ultralytics/yolov5#environments\r\n\r\nIf this is a custom dataset/training question you **must include** your `train*.jpg`, `val*.jpg` and `results.png`\r\nfigures, or we can not help you. 
You can generate these with `utils.plot_results()`.\r\n\r\n## \ud83d\udc1b Bug\r\n\r\nDecode error occurs when executing the command suggested for input after git clone in Windows environment\r\n\r\n## To Reproduce (REQUIRED)\r\n\r\nInput:\r\n\r\n```\r\n(env38) PS C:\\Users\\Username\\PycharmProjects\\yolov5> python train.py --img 640 --batch 16 --epochs 3 --data coco128.yaml --weights yolov5s.pt\r\n```\r\n\r\nOutput:\r\n\r\n```\r\nDownloading https://ultralytics.com/assets/Arial.ttf to C:\\Users\\Username\\AppData\\Roaming\\Ultralytics\\Arial.ttf...\r\ntrain: weights=yolov5s.pt, cfg=, data=coco128.yaml, hyp=data\\hyps\\hyp.scratch.yaml, epochs=3, batch_size=16, imgsz=640, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, evolve=None, bucket=, cache=None, image_weights=False, device=, multi_scale=False, single_cls=False, adam=False, sync_bn=False, workers=8, entity=None, project=runs\\train, name=exp, exist_ok=False, quad=False, linear_lr=False, label_smoothing=0.0, upload_dataset=False, bbox_interval=-1, save_period=-1, artifact_alias=latest, local_rank=-1, freeze=0, patience=100\r\ngithub: up to date with https://github.com/ultralytics/yolov5\r\nYOLOv5 v5.0-493-g1922dde torch 1.8.0+cu111 CUDA:0 (GeForce RTX 3090, 24576.0MB)\r\n\r\nTraceback (most recent call last):\r\n File \"train.py\", line 615, in <module>\r\n main(opt)\r\n File \"train.py\", line 512, in main\r\n train(opt.hyp, opt, device, callbacks)\r\n File \"train.py\", line 76, in train\r\n hyp = yaml.safe_load(f) # load hyps dict\r\n File \"C:\\Users\\Username\\miniconda3\\envs\\env38\\lib\\site-packages\\yaml\\__init__.py\", line 162, in safe_load\r\n return load(stream, SafeLoader)\r\n File \"C:\\Users\\Username\\miniconda3\\envs\\env38\\lib\\site-packages\\yaml\\__init__.py\", line 112, in load\r\n loader = Loader(stream)\r\n File \"C:\\Users\\Username\\miniconda3\\envs\\env38\\lib\\site-packages\\yaml\\loader.py\", line 34, in __init__\r\n Reader.__init__(self, stream)\r\n File \"C:\\Users\\Username\\miniconda3\\envs\\env38\\lib\\site-packages\\yaml\\reader.py\", line 85, in __init__\r\n self.determine_encoding()\r\n File \"C:\\Users\\Username\\miniconda3\\envs\\env38\\lib\\site-packages\\yaml\\reader.py\", line 124, in determine_encoding\r\n self.update_raw()\r\n File \"C:\\Users\\Username\\miniconda3\\envs\\env38\\lib\\site-packages\\yaml\\reader.py\", line 178, in update_raw\r\n data = self.stream.read(size)\r\nUnicodeDecodeError: 'cp949' codec can't decode byte 0xf0 in position 9: illegal multibyte sequence\r\n```\r\n\r\n## Expected behavior\r\n\r\nA clear and concise description of what you expected to happen.\r\n\r\n## Environment\r\n\r\nIf applicable, add screenshots to help explain your problem.\r\n\r\n- OS: Windows 10\r\n- GPU : RTX3090\r\n\r\n## Additional context\r\n\r\nThis error occurs because of the rocket-shaped emoji (\ud83d\ude80) in the yolov5/data/hyps/*.yaml file. 
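One workaround is to specify the decoding explicitly wherever the hyperparameter file is read; a minimal sketch (the exact change in the codebase may differ, and the path is only an example):\r\n\r\n```python\r\nimport yaml\r\n\r\n# Open with an explicit UTF-8 encoding so the emoji in the YAML comments\r\n# decodes cleanly regardless of the Windows locale (cp949 above).\r\nwith open('data/hyps/hyp.scratch.yaml', encoding='utf-8', errors='ignore') as f:\r\n    hyp = yaml.safe_load(f)\r\n```\r\n\r\n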
You can fix the error by editing the yaml file or specifying the decoding method in detail.\r\n", "pr_html_url": "https://github.com/ultralytics/yolov5/pull/5060", "file_loc": {"base_commit": "5afc9c25ef0874dff0c18267947ea4e8b03c90f4", "files": [{"path": "models/yolo.py", "status": "modified", "Loc": {"('Model', '__init__', 83)": {"mod": [90]}}}, {"path": "train.py", "status": "modified", "Loc": {"(None, 'train', 59)": {"mod": [75]}, "(None, 'main', 479)": {"mod": [491, 555]}}}, {"path": "utils/aws/resume.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [24]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["train.py", "utils/aws/resume.py", "models/yolo.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "4e65052f28b1184b9d463c1e44b3a79b95113904", "iss_has_pr": 1, "iss_html_url": "https://github.com/ultralytics/yolov5/issues/4409", "iss_label": "bug", "title": "Training failed with 4 GPUs after first epoch", "body": "\r\n## \ud83d\udc1b Bug\r\n\r\nI was able to train on OVH AI Cloud with 4 classes and 500 images in total three days ago with 4 GPUs but when I try again to train with my full dataset this time (around 9000 images for 4 classes), the training stops after the first epoch, when the validation step is about to finish.\r\n\r\nI tried to change different things: get rid of the cache argument, change for a smaller model (I was using 5MP6 at first), changing batch size, changing the number of GPUs, still the same.\r\n\r\n## To Reproduce (REQUIRED)\r\n\r\nFirst, here is my Dockerfile. It is based on the Official Yolov5 docker image with W&B integrated:\r\n\r\n```dockerfile\r\nFROM ultralytics/yolov5:latest\r\n\r\n# unfortunately, wandb is commented out in the official image\r\nRUN pip3 install wandb\r\n\r\n# pass the wandb API key at build time\r\nARG wandb_key\r\nENV wandb_api_key=$wandb_key\r\n\r\n# setup wandb account\r\nRUN wandb login \"$wandb_api_key\"\r\n\r\nWORKDIR /usr/src/app\r\n\r\nRUN chown -R 42420:42420 /usr/src\r\n\r\n# do stuff at start\r\nCOPY entrypoint.sh /usr/src/app\r\n\r\nENTRYPOINT [\"/bin/bash\", \"-c\", \"./entrypoint.sh && bash\"]\r\n```\r\n\r\nentrypoint.sh with:\r\n* a call to the `autosplit()` function ;\r\n* a call to train.py to start the training.\r\n\r\n```sh\r\n#!/bin/bash\r\n\r\n# split datasets into training, validation & test\r\npython3 -c \"from utils.datasets import autosplit; autosplit('../logos/images', annotated_only=True);\"\r\n\r\n# start the training\r\npython3 -m torch.distributed.launch \\\r\n --nproc_per_node 4 train.py \\\r\n --img-size 1280 \\\r\n --epochs 100 \\\r\n --data ../logos/logo.yaml \\\r\n --weights yolov5m.pt \\\r\n --batch-size 64 \\\r\n --device 0,1,2,3 \\\r\n --project results \\\r\n --name \"$(date +'%Y-%m-%d')\" \\\r\n --exist-ok \\\r\n --workers 0\r\n```\r\n\r\nFull output from the server:\r\n\r\n```\r\n\r\n 0%| | 0/11481 [00:00<?, ?it/s]\r\n 3%|\u258e | 293/11481 [00:00<00:03, 2926.18it/s]\r\n 4%|\u258d | 483/11481 [00:00<00:04, 2517.14it/s]\r\n 6%|\u258c | 670/11481 [00:00<00:04, 2276.94it/s]\r\n 8%|\u258a | 909/11481 [00:00<00:04, 2309.31it/s]\r\n 10%|\u2589 | 1103/11481 [00:00<00:04, 2181.77it/s]\r\n 11%|\u2588 | 1280/11481 [00:00<00:05, 2035.86it/s]\r\n 13%|\u2588\u258e | 1457/11481 [00:00<00:05, 1936.71it/s]\r\n 15%|\u2588\u258d | 1699/11481 [00:00<00:04, 
2059.93it/s]\r\n 17%|\u2588\u258b | 1935/11481 [00:00<00:04, 2139.96it/s]\r\n 19%|\u2588\u2589 | 2234/11481 [00:01<00:03, 2338.47it/s]\r\n 22%|\u2588\u2588\u258f | 2469/11481 [00:01<00:04, 2164.21it/s]\r\n 23%|\u2588\u2588\u258e | 2689/11481 [00:01<00:04, 2037.83it/s]\r\n 25%|\u2588\u2588\u258c | 2897/11481 [00:01<00:04, 1946.45it/s]\r\n 27%|\u2588\u2588\u258b | 3096/11481 [00:01<00:04, 1909.82it/s]\r\n 29%|\u2588\u2588\u258a | 3290/11481 [00:01<00:04, 1840.54it/s]\r\n 30%|\u2588\u2588\u2588 | 3477/11481 [00:01<00:04, 1772.23it/s]\r\n 32%|\u2588\u2588\u2588\u258f | 3671/11481 [00:01<00:04, 1819.05it/s]\r\n 34%|\u2588\u2588\u2588\u258e | 3855/11481 [00:01<00:04, 1778.36it/s]\r\n 35%|\u2588\u2588\u2588\u258c | 4060/11481 [00:02<00:04, 1851.52it/s]\r\n 37%|\u2588\u2588\u2588\u258b | 4273/11481 [00:02<00:03, 1924.63it/s]\r\n 39%|\u2588\u2588\u2588\u2589 | 4468/11481 [00:02<00:03, 1872.27it/s]\r\n 41%|\u2588\u2588\u2588\u2588 | 4658/11481 [00:02<00:03, 1879.67it/s]\r\n 43%|\u2588\u2588\u2588\u2588\u258e | 4891/11481 [00:02<00:03, 1994.13it/s]\r\n 44%|\u2588\u2588\u2588\u2588\u258d | 5108/11481 [00:02<00:03, 2042.15it/s]\r\n 47%|\u2588\u2588\u2588\u2588\u258b | 5435/11481 [00:02<00:02, 2301.21it/s]\r\n 50%|\u2588\u2588\u2588\u2588\u2588 | 5773/11481 [00:02<00:02, 2543.61it/s]\r\n 53%|\u2588\u2588\u2588\u2588\u2588\u258e | 6047/11481 [00:02<00:02, 2598.59it/s]\r\n 55%|\u2588\u2588\u2588\u2588\u2588\u258c | 6319/11481 [00:02<00:02, 2573.29it/s]\r\n 64%|\u2588\u2588\u2588\u2588\u2588\u2588\u258d | 7351/11481 [00:03<00:01, 3320.80it/s]\r\n 68%|\u2588\u2588\u2588\u2588\u2588\u2588\u258a | 7847/11481 [00:03<00:01, 2920.33it/s]\r\n 72%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258f | 8263/11481 [00:03<00:01, 2642.29it/s]\r\n 75%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258c | 8619/11481 [00:03<00:01, 2663.95it/s]\r\n 78%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258a | 8950/11481 [00:03<00:01, 2476.00it/s]\r\n 81%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588 | 9245/11481 [00:03<00:00, 2286.48it/s]\r\n 83%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258e | 9510/11481 [00:04<00:00, 2122.86it/s]\r\n 85%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258d | 9750/11481 [00:04<00:00, 2170.76it/s]\r\n 87%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258b | 9987/11481 [00:04<00:00, 2163.66it/s]\r\n 89%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2589 | 10218/11481 [00:04<00:00, 2051.27it/s]\r\n 91%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588 | 10434/11481 [00:04<00:00, 1996.93it/s]\r\n 93%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258e| 10642/11481 [00:04<00:00, 1875.36it/s]\r\n 94%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258d| 10837/11481 [00:04<00:00, 1888.51it/s]\r\n 96%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258c| 11031/11481 [00:04<00:00, 1875.07it/s]\r\n 98%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258a| 11257/11481 [00:04<00:00, 1975.48it/s]\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 11481/11481 [00:04<00:00, 2308.00it/s]\r\nAutosplitting images from ../logos/images, using *.txt labeled images only\r\n/opt/conda/lib/python3.8/site-packages/torch/distributed/launch.py:163: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead\r\n logger.warn(\r\nThe module torch.distributed.launch is deprecated and going to be removed in future.Migrate to torch.distributed.run\r\nWARNING:torch.distributed.run:--use_env is deprecated and will be removed in future 
releases.\r\n Please read local_rank from `os.environ('LOCAL_RANK')` instead.\r\nINFO:torch.distributed.launcher.api:Starting elastic_operator with launch configs:\r\n entrypoint : train.py\r\n min_nodes : 1\r\n max_nodes : 1\r\n nproc_per_node : 4\r\n run_id : none\r\n rdzv_backend : static\r\n rdzv_endpoint : 127.0.0.1:29500\r\n rdzv_configs : {'rank': 0, 'timeout': 900}\r\n max_restarts : 3\r\n monitor_interval : 5\r\n log_dir : None\r\n metrics_cfg : {}\r\n\r\nINFO:torch.distributed.elastic.agent.server.local_elastic_agent:log directory set to: /tmp/torchelastic_wqpgf9x4/none_u1agcdp1\r\nINFO:torch.distributed.elastic.agent.server.api:[default] starting workers for entrypoint: python3\r\nINFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous'ing worker group\r\n/opt/conda/lib/python3.8/site-packages/torch/distributed/elastic/utils/store.py:52: FutureWarning: This is an experimental API and will be changed in future.\r\n warnings.warn(\r\nINFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous complete for workers. Result:\r\n restart_count=0\r\n master_addr=127.0.0.1\r\n master_port=29500\r\n group_rank=0\r\n group_world_size=1\r\n local_ranks=[0, 1, 2, 3]\r\n role_ranks=[0, 1, 2, 3]\r\n global_ranks=[0, 1, 2, 3]\r\n role_world_sizes=[4, 4, 4, 4]\r\n global_world_sizes=[4, 4, 4, 4]\r\n\r\nINFO:torch.distributed.elastic.agent.server.api:[default] Starting worker group\r\nINFO:torch.distributed.elastic.multiprocessing:Setting worker0 reply file to: /tmp/torchelastic_wqpgf9x4/none_u1agcdp1/attempt_0/0/error.json\r\nINFO:torch.distributed.elastic.multiprocessing:Setting worker1 reply file to: /tmp/torchelastic_wqpgf9x4/none_u1agcdp1/attempt_0/1/error.json\r\nINFO:torch.distributed.elastic.multiprocessing:Setting worker2 reply file to: /tmp/torchelastic_wqpgf9x4/none_u1agcdp1/attempt_0/2/error.json\r\nINFO:torch.distributed.elastic.multiprocessing:Setting worker3 reply file to: /tmp/torchelastic_wqpgf9x4/none_u1agcdp1/attempt_0/3/error.json\r\n\u001b[34m\u001b[1mtrain: \u001b[0mweights=yolov5m.pt, cfg=, data=../logos/logo.yaml, hyp=data/hyps/hyp.scratch.yaml, epochs=100, batch_size=64, imgsz=1280, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, evolve=None, bucket=, cache=None, image_weights=False, device=0,1,2,3, multi_scale=False, single_cls=False, adam=False, sync_bn=False, workers=0, project=results, entity=None, name=2021-08-13, exist_ok=True, quad=False, linear_lr=False, label_smoothing=0.0, upload_dataset=False, bbox_interval=-1, save_period=-1, artifact_alias=latest, local_rank=0, freeze=0\r\n\u001b[34m\u001b[1mgithub: \u001b[0mskipping check (Docker image), for updates see https://github.com/ultralytics/yolov5\r\nYOLOv5 \ud83d\ude80 v5.0-360-gd9f23ed torch 1.9.0+cu102 CUDA:0 (Tesla V100S-PCIE-32GB, 32510.5MB)\r\n CUDA:1 (Tesla V100S-PCIE-32GB, 32510.5MB)\r\n CUDA:2 (Tesla V100S-PCIE-32GB, 32510.5MB)\r\n CUDA:3 (Tesla V100S-PCIE-32GB, 32510.5MB)\r\n\r\nAdded key: store_based_barrier_key:1 to store for rank: 0\r\nRank 0: Completed store-based barrier for 4 nodes.\r\n\u001b[34m\u001b[1mhyperparameters: \u001b[0mlr0=0.01, lrf=0.2, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=0.05, cls=0.5, cls_pw=1.0, obj=1.0, obj_pw=1.0, iou_t=0.2, anchor_t=4.0, fl_gamma=0.0, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0, copy_paste=0.0\r\n\u001b[34m\u001b[1mTensorBoard: \u001b[0mStart with 
'tensorboard --logdir results', view at http://localhost:6006/\r\n[W ProcessGroupNCCL.cpp:1569] Rank 3 using best-guess GPU 3 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect.Specify device_ids in barrier() to force use of a particular device.\r\n[W ProcessGroupNCCL.cpp:1569] Rank 2 using best-guess GPU 2 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect.Specify device_ids in barrier() to force use of a particular device.\r\n[W ProcessGroupNCCL.cpp:1569] Rank 1 using best-guess GPU 1 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect.Specify device_ids in barrier() to force use of a particular device.\r\nwandb: Currently logged in as: hivacruz (use `wandb login --relogin` to force relogin)\r\n\r\nCondaEnvException: Unable to determine environment\r\n\r\nPlease re-run this command with one of the following options:\r\n\r\n* Provide an environment name via --name or -n\r\n* Re-run this command inside an activated conda environment.\r\n\r\nwandb: Tracking run with wandb version 0.12.0\r\nwandb: Syncing run 2021-08-13\r\nwandb: View project at https://wandb.ai/hivacruz/results\r\n\r\nwandb: View run at https://wandb.ai/hivacruz/results/runs/b7blzdq6\r\nwandb: Run data is saved locally in /usr/src/app/wandb/run-20210813_152353-b7blzdq6\r\nwandb: Run `wandb offline` to turn off syncing.\r\n[W ProcessGroupNCCL.cpp:1569] Rank 0 using best-guess GPU 0 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect.Specify device_ids in barrier() to force use of a particular device.\r\nDownloading https://github.com/ultralytics/yolov5/releases/download/v5.0/yolov5m.pt to yolov5m.pt...\r\n\r\n 0% 0.00/41.1M [00:00<?, ?B/s]\r\n 4% 1.80M/41.1M [00:00<00:02, 18.6MB/s]\r\n 15% 6.01M/41.1M [00:00<00:01, 22.5MB/s]\r\n 40% 16.3M/41.1M [00:00<00:00, 29.6MB/s]\r\n 55% 22.7M/41.1M [00:00<00:00, 34.3MB/s]\r\n 72% 29.6M/41.1M [00:00<00:00, 40.8MB/s]\r\n 88% 36.1M/41.1M [00:00<00:00, 46.3MB/s]\r\n100% 41.1M/41.1M [00:00<00:00, 56.7MB/s]\r\n\r\nOverriding model.yaml nc=80 with nc=4\r\n\r\n from n params module arguments \r\n 0 -1 1 5280 models.common.Focus [3, 48, 3] \r\n 1 -1 1 41664 models.common.Conv [48, 96, 3, 2] \r\n 2 -1 2 65280 models.common.C3 [96, 96, 2] \r\n 3 -1 1 166272 models.common.Conv [96, 192, 3, 2] \r\n 4 -1 6 629760 models.common.C3 [192, 192, 6] \r\n 5 -1 1 664320 models.common.Conv [192, 384, 3, 2] \r\n 6 -1 6 2512896 models.common.C3 [384, 384, 6] \r\n 7 -1 1 2655744 models.common.Conv [384, 768, 3, 2] \r\n 8 -1 1 1476864 models.common.SPP [768, 768, [5, 9, 13]] \r\n 9 -1 2 4134912 models.common.C3 [768, 768, 2, False] \r\n 10 -1 1 295680 models.common.Conv [768, 384, 1, 1] \r\n 11 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest'] \r\n 12 [-1, 6] 1 0 models.common.Concat [1] \r\n 13 -1 2 1182720 models.common.C3 [768, 384, 2, False] \r\n 14 -1 1 74112 models.common.Conv [384, 192, 1, 1] \r\n 15 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest'] \r\n 16 [-1, 4] 1 0 models.common.Concat [1] \r\n 17 -1 2 296448 models.common.C3 [384, 192, 2, False] \r\n 18 -1 1 332160 models.common.Conv [192, 192, 3, 2] \r\n 19 [-1, 14] 1 0 models.common.Concat [1] \r\n 20 -1 2 1035264 models.common.C3 [384, 
384, 2, False] \r\n 21 -1 1 1327872 models.common.Conv [384, 384, 3, 2] \r\n 22 [-1, 10] 1 0 models.common.Concat [1] \r\n 23 -1 2 4134912 models.common.C3 [768, 768, 2, False] \r\n 24 [17, 20, 23] 1 36369 models.yolo.Detect [4, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [192, 384, 768]]\r\nModel Summary: 391 layers, 21068529 parameters, 21068529 gradients, 50.4 GFLOPs\r\n\r\nTransferred 500/506 items from yolov5m.pt\r\nScaled weight_decay = 0.0005\r\n\u001b[34m\u001b[1moptimizer:\u001b[0m SGD with parameter groups 83 weight, 86 weight (no decay), 86 bias\r\n\r\n\u001b[34m\u001b[1mtrain: \u001b[0mScanning '/usr/src/logos/autosplit_train.cache' images and labels... 9461 found, 0 missing, 2701 empty, 4 corrupted: 100% 9465/9465 [00:00<?, ?it/s]\u001b[34m\u001b[1mtrain: \u001b[0mWARNING: Ignoring corrupted image and/or label /usr/src/logos/images/xxx/e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855.png: cannot identify image file '/usr/src/logos/images/xxx/e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855.png'\r\n\u001b[34m\u001b[1mtrain: \u001b[0mWARNING: Ignoring corrupted image and/or label /usr/src/logos/images/xxx/e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855.png: cannot identify image file \"/usr/src/logos/images/xxx/e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855.png\"\r\n\u001b[34m\u001b[1mtrain: \u001b[0mWARNING: Ignoring corrupted image and/or label /usr/src/logos/images/xxx/e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855.png: cannot identify image file '/usr/src/logos/images/xxx/e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855.png'\r\n\u001b[34m\u001b[1mtrain: \u001b[0mWARNING: Ignoring corrupted image and/or label /usr/src/logos/images/xxx/e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855.png: cannot identify image file '/usr/src/logos/images/xxx/e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855.png'\r\n\r\n\u001b[34m\u001b[1mtrain: \u001b[0mScanning '/usr/src/logos/autosplit_train.cache' images and labels... 9461 found, 0 missing, 2701 empty, 4 corrupted: 100% 9465/9465 [00:00<?, ?it/s]\r\n\r\n\u001b[34m\u001b[1mval: \u001b[0mScanning '/usr/src/logos/autosplit_val.cache' images and labels... 996 found, 0 missing, 286 empty, 0 corrupted: 100% 996/996 [00:00<?, ?it/s]\r\n\u001b[34m\u001b[1mval: \u001b[0mScanning '/usr/src/logos/autosplit_val.cache' images and labels... 996 found, 0 missing, 286 empty, 0 corrupted: 100% 996/996 [00:00<?, ?it/s]\r\nPlotting labels... \r\n\r\n\u001b[34m\u001b[1mtrain: \u001b[0mScanning '/usr/src/logos/autosplit_train.cache' images and labels... 9461 found, 0 missing, 2701 empty, 4 corrupted: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 9465/9465 [00:00<?, ?it/s]\r\n\u001b[34m\u001b[1mtrain: \u001b[0mScanning '/usr/src/logos/autosplit_train.cache' images and labels... 9461 found, 0 missing, 2701 empty, 4 corrupted: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 9465/9465 [00:00<?, ?it/s]\r\n\r\n\u001b[34m\u001b[1mtrain: \u001b[0mScanning '/usr/src/logos/autosplit_train.cache' images and labels... 9461 found, 0 missing, 2701 empty, 4 corrupted: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 9465/9465 [00:00<?, ?it/s]\r\n\u001b[34m\u001b[1mtrain: \u001b[0mScanning '/usr/src/logos/autosplit_train.cache' images and labels... 
autoanchor: Analyzing anchors... anchors/target = 5.73, Best Possible Recall (BPR) = 1.0000\r\nImage sizes 1280 train, 1280 val\r\nUsing 0 dataloader workers\r\nLogging results to results/2021-08-13\r\nStarting training for 100 epochs...\r\n\r\n Epoch gpu_mem box obj cls labels img_size\r\n 0/99 19.8G 0.1254 0.08545 0.04525 30 1280: 0% 0/148 [00:11<?, ?it/s]\r\n 0/99 19.8G 0.1254 0.08545 0.04525 30 1280: 1% 1/148 [00:15<37:35, 15.34s/it]Reducer buckets have been rebuilt in this iteration.\r\n[... duplicated tqdm redraws and per-batch progress lines for batches 2-147 omitted; the box/obj/cls losses fall steadily from 0.1254/0.08545/0.04525 to 0.06859/0.02448/0.0224 ...]\r\n 0/99 17.4G 0.06859 0.02448 0.0224 24 1280: 100% 148/148 [13:06<00:00, 5.32s/it]\r\n\r\n Class Images Labels P R mAP@.5 mAP@.5:.95: 0% 0/32 [00:00<?, ?it/s]\r\n[... validation progress lines for batches 1-23 omitted ...]\r\n Class Images Labels P R mAP@.5 mAP@.5:.95: 75% 24/32 [01:09<00:22, 2.83s/it][E ProcessGroupNCCL.cpp:566] [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(OpType=BROADCAST, Timeout(ms)=60000) ran for 66853 milliseconds before timing out.\r\n[E ProcessGroupNCCL.cpp:566] [Rank 2] Watchdog caught collective operation timeout: WorkNCCL(OpType=BROADCAST, Timeout(ms)=60000) ran for 66854 milliseconds before timing out.\r\n[E ProcessGroupNCCL.cpp:566] [Rank 3] Watchdog caught collective operation timeout: WorkNCCL(OpType=BROADCAST, Timeout(ms)=60000) ran for 66664 milliseconds before timing out.\r\n\r\n Class Images Labels P R mAP@.5 mAP@.5:.95: 78% 25/32 [01:12<00:20, 2.86s/it][E ProcessGroupNCCL.cpp:325] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data. 
To avoid this inconsistency, we are taking the entire process down.\r\nterminate called after throwing an instance of 'std::runtime_error'\r\n what(): [Rank 3] Watchdog caught collective operation timeout: WorkNCCL(OpType=BROADCAST, Timeout(ms)=60000) ran for 66664 milliseconds before timing out.\r\n[E ProcessGroupNCCL.cpp:325] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data. To avoid this inconsistency, we are taking the entire process down.\r\nterminate called after throwing an instance of 'std::runtime_error'\r\n what(): [Rank 2] Watchdog caught collective operation timeout: WorkNCCL(OpType=BROADCAST, Timeout(ms)=60000) ran for 66854 milliseconds before timing out.\r\n[E ProcessGroupNCCL.cpp:325] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data. To avoid this inconsistency, we are taking the entire process down.\r\nterminate called after throwing an instance of 'std::runtime_error'\r\n what(): [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(OpType=BROADCAST, Timeout(ms)=60000) ran for 66853 milliseconds before timing out.\r\nERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -6) local_rank: 1 (pid: 174) of binary: /opt/conda/bin/python3\r\nERROR:torch.distributed.elastic.agent.server.local_elastic_agent:[default] Worker group failed\r\nINFO:torch.distributed.elastic.agent.server.api:[default] Worker group FAILED. 3/3 attempts left; will restart worker group\r\nINFO:torch.distributed.elastic.agent.server.api:[default] Stopping worker group\r\nINFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous'ing worker group\r\nINFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous complete for workers. 
Result:\r\n restart_count=1\r\n master_addr=127.0.0.1\r\n master_port=29500\r\n group_rank=0\r\n group_world_size=1\r\n local_ranks=[0, 1, 2, 3]\r\n role_ranks=[0, 1, 2, 3]\r\n global_ranks=[0, 1, 2, 3]\r\n role_world_sizes=[4, 4, 4, 4]\r\n global_world_sizes=[4, 4, 4, 4]\r\n\r\nINFO:torch.distributed.elastic.agent.server.api:[default] Starting worker group\r\nINFO:torch.distributed.elastic.multiprocessing:Setting worker0 reply file to: /tmp/torchelastic_wqpgf9x4/none_u1agcdp1/attempt_1/0/error.json\r\nINFO:torch.distributed.elastic.multiprocessing:Setting worker1 reply file to: /tmp/torchelastic_wqpgf9x4/none_u1agcdp1/attempt_1/1/error.json\r\nINFO:torch.distributed.elastic.multiprocessing:Setting worker2 reply file to: /tmp/torchelastic_wqpgf9x4/none_u1agcdp1/attempt_1/2/error.json\r\nINFO:torch.distributed.elastic.multiprocessing:Setting worker3 reply file to: /tmp/torchelastic_wqpgf9x4/none_u1agcdp1/attempt_1/3/error.json\r\ntrain: weights=yolov5m.pt, cfg=, data=../logos/logo.yaml, hyp=data/hyps/hyp.scratch.yaml, epochs=100, batch_size=64, imgsz=1280, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, evolve=None, bucket=, cache=None, image_weights=False, device=0,1,2,3, multi_scale=False, single_cls=False, adam=False, sync_bn=False, workers=0, project=results, entity=None, name=2021-08-13, exist_ok=True, quad=False, linear_lr=False, label_smoothing=0.0, upload_dataset=False, bbox_interval=-1, save_period=-1, artifact_alias=latest, local_rank=0, freeze=0\r\ngithub: skipping check (Docker image), for updates see https://github.com/ultralytics/yolov5\r\nYOLOv5 \ud83d\ude80 v5.0-360-gd9f23ed torch 1.9.0+cu102 CUDA:0 (Tesla V100S-PCIE-32GB, 32510.5MB)\r\n CUDA:1 (Tesla V100S-PCIE-32GB, 32510.5MB)\r\n CUDA:2 (Tesla V100S-PCIE-32GB, 32510.5MB)\r\n CUDA:3 (Tesla V100S-PCIE-32GB, 32510.5MB)\r\n\r\nAdded key: store_based_barrier_key:1 to store for rank: 0\r\n/opt/conda/lib/python3.8/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 6 leaked semaphore objects to clean up at shutdown\r\n warnings.warn('resource_tracker: There appear to be %d '\r\nWaiting in store based barrier to initialize process group for rank: 0, key: store_based_barrier_key:1 (world_size=4, worker_count=8, timeout=0:01:00)\r\n[identical message repeated 4 more times while retrying]\r\nTraceback (most recent call last):\r\n File \"train.py\", line 600, in <module>\r\n main(opt)\r\n File \"train.py\", line 494, in main\r\n dist.init_process_group(backend=\"nccl\" if dist.is_nccl_available() else \"gloo\", timeout=timedelta(seconds=60))\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py\", line 547, in init_process_group\r\n _store_based_barrier(rank, store, timeout)\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py\", line 219, in _store_based_barrier\r\n raise RuntimeError(\r\nRuntimeError: 
Timed out initializing process group in store based barrier on rank: 2, for key: store_based_barrier_key:1 (world_size=4, worker_count=8, timeout=0:01:00)\r\nTraceback (most recent call last):\r\n File \"train.py\", line 600, in <module>\r\n main(opt)\r\n File \"train.py\", line 494, in main\r\n dist.init_process_group(backend=\"nccl\" if dist.is_nccl_available() else \"gloo\", timeout=timedelta(seconds=60))\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py\", line 547, in init_process_group\r\n _store_based_barrier(rank, store, timeout)\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py\", line 219, in _store_based_barrier\r\n raise RuntimeError(\r\nRuntimeError: Timed out initializing process group in store based barrier on rank: 3, for key: store_based_barrier_key:1 (world_size=4, worker_count=8, timeout=0:01:00)\r\nTraceback (most recent call last):\r\n File \"train.py\", line 600, in <module>\r\n main(opt)\r\n File \"train.py\", line 494, in main\r\n dist.init_process_group(backend=\"nccl\" if dist.is_nccl_available() else \"gloo\", timeout=timedelta(seconds=60))\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py\", line 547, in init_process_group\r\n _store_based_barrier(rank, store, timeout)\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py\", line 219, in _store_based_barrier\r\n raise RuntimeError(\r\nRuntimeError: Timed out initializing process group in store based barrier on rank: 0, for key: store_based_barrier_key:1 (world_size=4, worker_count=8, timeout=0:01:00)\r\nTraceback (most recent call last):\r\n File \"train.py\", line 600, in <module>\r\n main(opt)\r\n File \"train.py\", line 494, in main\r\n dist.init_process_group(backend=\"nccl\" if dist.is_nccl_available() else \"gloo\", timeout=timedelta(seconds=60))\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py\", line 547, in init_process_group\r\n _store_based_barrier(rank, store, timeout)\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py\", line 219, in _store_based_barrier\r\n raise RuntimeError(\r\nRuntimeError: Timed out initializing process group in store based barrier on rank: 1, for key: store_based_barrier_key:1 (world_size=4, worker_count=8, timeout=0:01:00)\r\nERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 298) of binary: /opt/conda/bin/python3\r\nERROR:torch.distributed.elastic.agent.server.local_elastic_agent:[default] Worker group failed\r\nINFO:torch.distributed.elastic.agent.server.api:[default] Worker group FAILED. 2/3 attempts left; will restart worker group\r\nINFO:torch.distributed.elastic.agent.server.api:[default] Stopping worker group\r\nINFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous'ing worker group\r\nINFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous complete for workers. 
Result:\r\n restart_count=2\r\n master_addr=127.0.0.1\r\n master_port=29500\r\n group_rank=0\r\n group_world_size=1\r\n local_ranks=[0, 1, 2, 3]\r\n role_ranks=[0, 1, 2, 3]\r\n global_ranks=[0, 1, 2, 3]\r\n role_world_sizes=[4, 4, 4, 4]\r\n global_world_sizes=[4, 4, 4, 4]\r\n\r\nINFO:torch.distributed.elastic.agent.server.api:[default] Starting worker group\r\nINFO:torch.distributed.elastic.multiprocessing:Setting worker0 reply file to: /tmp/torchelastic_wqpgf9x4/none_u1agcdp1/attempt_2/0/error.json\r\nINFO:torch.distributed.elastic.multiprocessing:Setting worker1 reply file to: /tmp/torchelastic_wqpgf9x4/none_u1agcdp1/attempt_2/1/error.json\r\nINFO:torch.distributed.elastic.multiprocessing:Setting worker2 reply file to: /tmp/torchelastic_wqpgf9x4/none_u1agcdp1/attempt_2/2/error.json\r\nINFO:torch.distributed.elastic.multiprocessing:Setting worker3 reply file to: /tmp/torchelastic_wqpgf9x4/none_u1agcdp1/attempt_2/3/error.json\r\ntrain: weights=yolov5m.pt, cfg=, data=../logos/logo.yaml, hyp=data/hyps/hyp.scratch.yaml, epochs=100, batch_size=64, imgsz=1280, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, evolve=None, bucket=, cache=None, image_weights=False, device=0,1,2,3, multi_scale=False, single_cls=False, adam=False, sync_bn=False, workers=0, project=results, entity=None, name=2021-08-13, exist_ok=True, quad=False, linear_lr=False, label_smoothing=0.0, upload_dataset=False, bbox_interval=-1, save_period=-1, artifact_alias=latest, local_rank=0, freeze=0\r\ngithub: skipping check (Docker image), for updates see https://github.com/ultralytics/yolov5\r\nYOLOv5 \ud83d\ude80 v5.0-360-gd9f23ed torch 1.9.0+cu102 CUDA:0 (Tesla V100S-PCIE-32GB, 32510.5MB)\r\n CUDA:1 (Tesla V100S-PCIE-32GB, 32510.5MB)\r\n CUDA:2 (Tesla V100S-PCIE-32GB, 32510.5MB)\r\n CUDA:3 (Tesla V100S-PCIE-32GB, 32510.5MB)\r\n\r\nAdded key: store_based_barrier_key:1 to store for rank: 0\r\nWaiting in store based barrier to initialize process group for rank: 0, key: store_based_barrier_key:1 (world_size=4, worker_count=12, timeout=0:01:00)\r\n[identical message repeated 4 more times while retrying]\r\nTraceback (most recent call last):\r\n File \"train.py\", line 600, in <module>\r\n main(opt)\r\n File \"train.py\", line 494, in main\r\n dist.init_process_group(backend=\"nccl\" if dist.is_nccl_available() else \"gloo\", timeout=timedelta(seconds=60))\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py\", line 547, in init_process_group\r\n _store_based_barrier(rank, store, timeout)\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py\", line 219, in _store_based_barrier\r\n raise RuntimeError(\r\nRuntimeError: Timed out initializing process group in store based barrier on rank: 3, for key: store_based_barrier_key:1 (world_size=4, worker_count=12, timeout=0:01:00)\r\n
Traceback (most recent call last):\r\n File \"train.py\", line 600, in <module>\r\n main(opt)\r\n File \"train.py\", line 494, in main\r\n dist.init_process_group(backend=\"nccl\" if dist.is_nccl_available() else \"gloo\", timeout=timedelta(seconds=60))\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py\", line 547, in init_process_group\r\n _store_based_barrier(rank, store, timeout)\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py\", line 219, in _store_based_barrier\r\n raise RuntimeError(\r\nRuntimeError: Timed out initializing process group in store based barrier on rank: 1, for key: store_based_barrier_key:1 (world_size=4, worker_count=12, timeout=0:01:00)\r\nTraceback (most recent call last):\r\n File \"train.py\", line 600, in <module>\r\n main(opt)\r\n File \"train.py\", line 494, in main\r\n dist.init_process_group(backend=\"nccl\" if dist.is_nccl_available() else \"gloo\", timeout=timedelta(seconds=60))\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py\", line 547, in init_process_group\r\n _store_based_barrier(rank, store, timeout)\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py\", line 219, in _store_based_barrier\r\n raise RuntimeError(\r\nRuntimeError: Timed out initializing process group in store based barrier on rank: 2, for key: store_based_barrier_key:1 (world_size=4, worker_count=12, timeout=0:01:00)\r\nTraceback (most recent call last):\r\n File \"train.py\", line 600, in <module>\r\n main(opt)\r\n File \"train.py\", line 494, in main\r\n dist.init_process_group(backend=\"nccl\" if dist.is_nccl_available() else \"gloo\", timeout=timedelta(seconds=60))\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py\", line 547, in init_process_group\r\n _store_based_barrier(rank, store, timeout)\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py\", line 219, in _store_based_barrier\r\n raise RuntimeError(\r\nRuntimeError: Timed out initializing process group in store based barrier on rank: 0, for key: store_based_barrier_key:1 (world_size=4, worker_count=12, timeout=0:01:00)\r\nERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 346) of binary: /opt/conda/bin/python3\r\nERROR:torch.distributed.elastic.agent.server.local_elastic_agent:[default] Worker group failed\r\nINFO:torch.distributed.elastic.agent.server.api:[default] Worker group FAILED. 1/3 attempts left; will restart worker group\r\nINFO:torch.distributed.elastic.agent.server.api:[default] Stopping worker group\r\nINFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous'ing worker group\r\nINFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous complete for workers. 
Result:\r\n restart_count=3\r\n master_addr=127.0.0.1\r\n master_port=29500\r\n group_rank=0\r\n group_world_size=1\r\n local_ranks=[0, 1, 2, 3]\r\n role_ranks=[0, 1, 2, 3]\r\n global_ranks=[0, 1, 2, 3]\r\n role_world_sizes=[4, 4, 4, 4]\r\n global_world_sizes=[4, 4, 4, 4]\r\n\r\nINFO:torch.distributed.elastic.agent.server.api:[default] Starting worker group\r\nINFO:torch.distributed.elastic.multiprocessing:Setting worker0 reply file to: /tmp/torchelastic_wqpgf9x4/none_u1agcdp1/attempt_3/0/error.json\r\nINFO:torch.distributed.elastic.multiprocessing:Setting worker1 reply file to: /tmp/torchelastic_wqpgf9x4/none_u1agcdp1/attempt_3/1/error.json\r\nINFO:torch.distributed.elastic.multiprocessing:Setting worker2 reply file to: /tmp/torchelastic_wqpgf9x4/none_u1agcdp1/attempt_3/2/error.json\r\nINFO:torch.distributed.elastic.multiprocessing:Setting worker3 reply file to: /tmp/torchelastic_wqpgf9x4/none_u1agcdp1/attempt_3/3/error.json\r\ntrain: weights=yolov5m.pt, cfg=, data=../logos/logo.yaml, hyp=data/hyps/hyp.scratch.yaml, epochs=100, batch_size=64, imgsz=1280, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, evolve=None, bucket=, cache=None, image_weights=False, device=0,1,2,3, multi_scale=False, single_cls=False, adam=False, sync_bn=False, workers=0, project=results, entity=None, name=2021-08-13, exist_ok=True, quad=False, linear_lr=False, label_smoothing=0.0, upload_dataset=False, bbox_interval=-1, save_period=-1, artifact_alias=latest, local_rank=0, freeze=0\r\ngithub: skipping check (Docker image), for updates see https://github.com/ultralytics/yolov5\r\nYOLOv5 \ud83d\ude80 v5.0-360-gd9f23ed torch 1.9.0+cu102 CUDA:0 (Tesla V100S-PCIE-32GB, 32510.5MB)\r\n CUDA:1 (Tesla V100S-PCIE-32GB, 32510.5MB)\r\n CUDA:2 (Tesla V100S-PCIE-32GB, 32510.5MB)\r\n CUDA:3 (Tesla V100S-PCIE-32GB, 32510.5MB)\r\n\r\nAdded key: store_based_barrier_key:1 to store for rank: 0\r\nWaiting in store based barrier to initialize process group for rank: 0, key: store_based_barrier_key:1 (world_size=4, worker_count=16, timeout=0:01:00)\r\n[identical message repeated 4 more times while retrying]\r\nTraceback (most recent call last):\r\n File \"train.py\", line 600, in <module>\r\n main(opt)\r\n File \"train.py\", line 494, in main\r\n dist.init_process_group(backend=\"nccl\" if dist.is_nccl_available() else \"gloo\", timeout=timedelta(seconds=60))\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py\", line 547, in init_process_group\r\n _store_based_barrier(rank, store, timeout)\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py\", line 219, in _store_based_barrier\r\n raise RuntimeError(\r\nRuntimeError: Timed out initializing process group in store based barrier on rank: 2, for key: store_based_barrier_key:1 (world_size=4, worker_count=16, timeout=0:01:00)\r\nTraceback (most recent call last):\r\n File \"train.py\", line 
600, in <module>\r\n main(opt)\r\n File \"train.py\", line 494, in main\r\n dist.init_process_group(backend=\"nccl\" if dist.is_nccl_available() else \"gloo\", timeout=timedelta(seconds=60))\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py\", line 547, in init_process_group\r\n _store_based_barrier(rank, store, timeout)\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py\", line 219, in _store_based_barrier\r\n raise RuntimeError(\r\nRuntimeError: Timed out initializing process group in store based barrier on rank: 1, for key: store_based_barrier_key:1 (world_size=4, worker_count=16, timeout=0:01:00)\r\nTraceback (most recent call last):\r\n File \"train.py\", line 600, in <module>\r\n main(opt)\r\n File \"train.py\", line 494, in main\r\n dist.init_process_group(backend=\"nccl\" if dist.is_nccl_available() else \"gloo\", timeout=timedelta(seconds=60))\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py\", line 547, in init_process_group\r\n _store_based_barrier(rank, store, timeout)\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py\", line 219, in _store_based_barrier\r\n raise RuntimeError(\r\nRuntimeError: Timed out initializing process group in store based barrier on rank: 3, for key: store_based_barrier_key:1 (world_size=4, worker_count=16, timeout=0:01:00)\r\nTraceback (most recent call last):\r\n File \"train.py\", line 600, in <module>\r\n main(opt)\r\n File \"train.py\", line 494, in main\r\n dist.init_process_group(backend=\"nccl\" if dist.is_nccl_available() else \"gloo\", timeout=timedelta(seconds=60))\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py\", line 547, in init_process_group\r\n _store_based_barrier(rank, store, timeout)\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py\", line 219, in _store_based_barrier\r\n raise RuntimeError(\r\nRuntimeError: Timed out initializing process group in store based barrier on rank: 0, for key: store_based_barrier_key:1 (world_size=4, worker_count=16, timeout=0:01:00)\r\nERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 390) of binary: /opt/conda/bin/python3\r\nERROR:torch.distributed.elastic.agent.server.local_elastic_agent:[default] Worker group failed\r\nINFO:torch.distributed.elastic.agent.server.api:Local worker group finished (FAILED). Waiting 300 seconds for other agents to finish\r\n/opt/conda/lib/python3.8/site-packages/torch/distributed/elastic/utils/store.py:70: FutureWarning: This is an experimental API and will be changed in future.\r\n warnings.warn(\r\nINFO:torch.distributed.elastic.agent.server.api:Done waiting for other agents. 
Elapsed: 0.0009250640869140625 seconds\r\n{\"name\": \"torchelastic.worker.status.FAILED\", \"source\": \"WORKER\", \"timestamp\": 0, \"metadata\": {\"run_id\": \"none\", \"global_rank\": 0, \"group_rank\": 0, \"worker_id\": \"390\", \"role\": \"default\", \"hostname\": \"job-155b782d-12d0-457b-ada3-ee678ed0e091\", \"state\": \"FAILED\", \"total_run_time\": 1071, \"rdzv_backend\": \"static\", \"raw_error\": \"{\\\"message\\\": \\\"<NONE>\\\"}\", \"metadata\": \"{\\\"group_world_size\\\": 1, \\\"entry_point\\\": \\\"python3\\\", \\\"local_rank\\\": [0], \\\"role_rank\\\": [0], \\\"role_world_size\\\": [4]}\", \"agent_restarts\": 3}}\r\n{\"name\": \"torchelastic.worker.status.FAILED\", \"source\": \"WORKER\", \"timestamp\": 0, \"metadata\": {\"run_id\": \"none\", \"global_rank\": 1, \"group_rank\": 0, \"worker_id\": \"391\", \"role\": \"default\", \"hostname\": \"job-155b782d-12d0-457b-ada3-ee678ed0e091\", \"state\": \"FAILED\", \"total_run_time\": 1071, \"rdzv_backend\": \"static\", \"raw_error\": \"{\\\"message\\\": \\\"<NONE>\\\"}\", \"metadata\": \"{\\\"group_world_size\\\": 1, \\\"entry_point\\\": \\\"python3\\\", \\\"local_rank\\\": [1], \\\"role_rank\\\": [1], \\\"role_world_size\\\": [4]}\", \"agent_restarts\": 3}}\r\n{\"name\": \"torchelastic.worker.status.FAILED\", \"source\": \"WORKER\", \"timestamp\": 0, \"metadata\": {\"run_id\": \"none\", \"global_rank\": 2, \"group_rank\": 0, \"worker_id\": \"392\", \"role\": \"default\", \"hostname\": \"job-155b782d-12d0-457b-ada3-ee678ed0e091\", \"state\": \"FAILED\", \"total_run_time\": 1071, \"rdzv_backend\": \"static\", \"raw_error\": \"{\\\"message\\\": \\\"<NONE>\\\"}\", \"metadata\": \"{\\\"group_world_size\\\": 1, \\\"entry_point\\\": \\\"python3\\\", \\\"local_rank\\\": [2], \\\"role_rank\\\": [2], \\\"role_world_size\\\": [4]}\", \"agent_restarts\": 3}}\r\n{\"name\": \"torchelastic.worker.status.FAILED\", \"source\": \"WORKER\", \"timestamp\": 0, \"metadata\": {\"run_id\": \"none\", \"global_rank\": 3, \"group_rank\": 0, \"worker_id\": \"393\", \"role\": \"default\", \"hostname\": \"job-155b782d-12d0-457b-ada3-ee678ed0e091\", \"state\": \"FAILED\", \"total_run_time\": 1071, \"rdzv_backend\": \"static\", \"raw_error\": \"{\\\"message\\\": \\\"<NONE>\\\"}\", \"metadata\": \"{\\\"group_world_size\\\": 1, \\\"entry_point\\\": \\\"python3\\\", \\\"local_rank\\\": [3], \\\"role_rank\\\": [3], \\\"role_world_size\\\": [4]}\", \"agent_restarts\": 3}}\r\n{\"name\": \"torchelastic.worker.status.SUCCEEDED\", \"source\": \"AGENT\", \"timestamp\": 0, \"metadata\": {\"run_id\": \"none\", \"global_rank\": null, \"group_rank\": 0, \"worker_id\": null, \"role\": \"default\", \"hostname\": \"job-155b782d-12d0-457b-ada3-ee678ed0e091\", \"state\": \"SUCCEEDED\", \"total_run_time\": 1071, \"rdzv_backend\": \"static\", \"raw_error\": null, \"metadata\": \"{\\\"group_world_size\\\": 1, \\\"entry_point\\\": \\\"python3\\\"}\", \"agent_restarts\": 3}}\r\n/opt/conda/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py:354: UserWarning: \r\n\r\n**********************************************************************\r\n CHILD PROCESS FAILED WITH NO ERROR_FILE \r\n**********************************************************************\r\nCHILD PROCESS FAILED WITH NO ERROR_FILE\r\nChild process 390 (local_rank 0) FAILED (exitcode 1)\r\nError msg: Process failed with exitcode 1\r\nWithout writing an error file to <N/A>.\r\nWhile this DOES NOT affect the correctness of your application,\r\nno trace information about the error will 
be available for inspection.\r\nConsider decorating your top level entrypoint function with\r\ntorch.distributed.elastic.multiprocessing.errors.record. Example:\r\n\r\n from torch.distributed.elastic.multiprocessing.errors import record\r\n\r\n @record\r\n def trainer_main(args):\r\n # do train\r\n**********************************************************************\r\n warnings.warn(_no_error_file_warning_msg(rank, failure))\r\n*****************************************\r\nSetting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. \r\n*****************************************\r\nTraceback (most recent call last):\r\n File \"/opt/conda/lib/python3.8/runpy.py\", line 194, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"/opt/conda/lib/python3.8/runpy.py\", line 87, in _run_code\r\n exec(code, run_globals)\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/distributed/launch.py\", line 173, in <module>\r\n main()\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/distributed/launch.py\", line 169, in main\r\n run(args)\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/distributed/run.py\", line 621, in run\r\n elastic_launch(\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/distributed/launcher/api.py\", line 116, in __call__\r\n return launch_agent(self._config, self._entrypoint, list(args))\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py\", line 348, in wrapper\r\n return f(*args, **kwargs)\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/distributed/launcher/api.py\", line 245, in launch_agent\r\n raise ChildFailedError(\r\ntorch.distributed.elastic.multiprocessing.errors.ChildFailedError: \r\n***************************************\r\n train.py FAILED \r\n=======================================\r\nRoot Cause:\r\n[0]:\r\n time: 2021-08-13_15:41:42\r\n rank: 0 (local_rank: 0)\r\n exitcode: 1 (pid: 390)\r\n error_file: <N/A>\r\n msg: \"Process failed with exitcode 1\"\r\n=======================================\r\nOther Failures:\r\n[1]:\r\n time: 2021-08-13_15:41:42\r\n rank: 1 (local_rank: 1)\r\n exitcode: 1 (pid: 391)\r\n error_file: <N/A>\r\n msg: \"Process failed with exitcode 1\"\r\n[2]:\r\n time: 2021-08-13_15:41:42\r\n rank: 2 (local_rank: 2)\r\n exitcode: 1 (pid: 392)\r\n error_file: <N/A>\r\n msg: \"Process failed with exitcode 1\"\r\n[3]:\r\n time: 2021-08-13_15:41:42\r\n rank: 3 (local_rank: 3)\r\n exitcode: 1 (pid: 393)\r\n error_file: <N/A>\r\n msg: \"Process failed with exitcode 1\"\r\n***************************************\r\n```\r\n
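The warning at the end of the log suggests decorating the training entrypoint with `torch.distributed.elastic.multiprocessing.errors.record`, so the worker's traceback is written to the torchelastic reply file instead of ending up as `error_file: <N/A>`. A minimal sketch of that wiring (the `@record` decorator is real torch API; treating `main(opt)` as the entrypoint matches the tracebacks above, while the argument parsing here is just a stand-in):

```python
# Sketch: let torchelastic capture worker tracebacks instead of "error_file: <N/A>".
import argparse

from torch.distributed.elastic.multiprocessing.errors import record


@record  # on failure, records the exception to the per-worker reply file set by torchelastic
def main(opt):
    ...  # the existing YOLOv5 training loop would run here


if __name__ == "__main__":
    opt = argparse.Namespace()  # stand-in for train.py's own option parsing
    main(opt)
```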
\r\n## Expected behavior\r\n\r\nThe training should keep going after the first epoch is over and the first validation step is over.\r\n\r\n## Environment\r\n\r\nI'm using OVH AI Cloud to train, using the Docker image described above (basically the official one).\r\n\r\n- Yolo v5.0-360-gd9f23ed torch 1.9.0+cu102 \r\n- CUDA:0 (Tesla V100S-PCIE-32GB, 32510.5MB)\r\n- CUDA:1 (Tesla V100S-PCIE-32GB, 32510.5MB)\r\n- CUDA:2 (Tesla V100S-PCIE-32GB, 32510.5MB)\r\n- CUDA:3 (Tesla V100S-PCIE-32GB, 32510.5MB)\r\n\r\nResources for the job:\r\n Cpu: 13\r\n Memory: 40.0 GiB\r\n Public Network: 1.5 Gbps\r\n Private Network: 0 bps\r\n Ephemeral Storage: 650.0 GiB\r\n Gpu Model: Tesla-V100S\r\n Gpu Brand: NVIDIA\r\n Gpu Memory: 32.0 GiB\r\n Flavor: ai1-1-gpu\r\n\r\n\r\n## Additional context\r\n\r\nI didn't encounter this problem with only 500 images a few days ago, with 4 GPUs. I encountered multiple problems today due to the `cache` argument being used, but now that it is gone, I can't find the reason why it's failing at the end of the first validation step (around 900 images).
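For reference, the tracebacks show `train.py` line 494 calling `dist.init_process_group(..., timeout=timedelta(seconds=60))`, which matches the `Timeout(ms)=60000` in the NCCL watchdog messages: rank 0 spends longer than 60 s in the first validation pass while the other ranks sit in a broadcast. A hedged sketch of the obvious knob, giving the process group more headroom (30 minutes is torch's own default timeout, not a YOLOv5-specific value; this assumes the usual launcher-provided RANK/WORLD_SIZE/MASTER_ADDR environment):

```python
# Sketch only: raise the collective timeout that the failing run pinned at 60 s.
from datetime import timedelta

import torch.distributed as dist

dist.init_process_group(
    backend="nccl" if dist.is_nccl_available() else "gloo",
    timeout=timedelta(minutes=30),  # torch's default; the log used timedelta(seconds=60)
)
```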
", "pr_html_url": "https://github.com/ultralytics/yolov5/pull/4422", "file_loc": {"base_commit": "4e65052f28b1184b9d463c1e44b3a79b95113904", "files": [{"path": "train.py", "status": "modified", "Loc": {"(None, 'main', 461)": {"mod": [496]}}}, {"path": "utils/torch_utils.py", "status": "modified", "Loc": {"(None, 'torch_distributed_zero_first', 33)": {"mod": [38, 41]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["train.py", "utils/torch_utils.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "554f782537b9af336c02c013468b78fe16ce092d", "iss_has_pr": 1, "iss_html_url": "https://github.com/ultralytics/yolov5/issues/5916", "iss_label": "enhancement", "title": "onnxruntime-gpu 1.10", "body": "### Search before asking\n\n- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar feature requests.\n\n\n### Description\n\nUsing onnxruntime-gpu 1.10, the following error will occur.\r\n```\r\nraise ValueError(\"This ORT build has {} enabled. \".format(available_providers) +\r\nValueError: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)\r\n```\n\n### Use case\n\nonnxruntime-gpu 1.10 requires providers:\r\n```\r\nelif onnx: # ONNX Runtime\r\n LOGGER.info(f'Loading {w} for ONNX Runtime inference...')\r\n check_requirements(('onnx', 'onnxruntime-gpu' if torch.cuda.is_available() else 'onnxruntime'))\r\n import onnxruntime\r\n if torch.cuda.is_available():\r\n session = onnxruntime.InferenceSession(w, None, providers=[\"CUDAExecutionProvider\"])\r\n else:\r\n session = onnxruntime.InferenceSession(w, None)\r\n```\n\n### Additional\n\n_No response_\n\n### Are you willing to submit a PR?\n\n- [ ] Yes I'd like to help by submitting a PR!", "pr_html_url": "https://github.com/ultralytics/yolov5/pull/5918", "file_loc": {"base_commit": "554f782537b9af336c02c013468b78fe16ce092d", "files": [{"path": "models/common.py", "status": "modified", "Loc": {"('DetectMultiBackend', '__init__', 279)": {"mod": [323, 325]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["models/common.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "b510957650c890dee876146c43dcda1fdfc279d6", "iss_has_pr": 1, "iss_html_url": "https://github.com/ultralytics/yolov5/issues/8641", "iss_label": "bug\nTODO", "title": "Albumentations-Pipeline is applied to BGR not to RGB", "body": "### Search before asking\n\n- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar bug report.\n\n\n### YOLOv5 Component\n\n_No response_\n\n### Bug\n\nAs written [here](https://albumentations.ai/docs/getting_started/image_augmentation/) in step 3, Albumentations internally uses the RGB format and not the BGR format of opencv. However, the data is currently passed internally as BGR:\r\nhttps://github.com/ultralytics/yolov5/blob/92e47b85d952274480c8c5efa5900e686241a96b/utils/dataloaders.py#L626-L628\r\nhttps://github.com/ultralytics/yolov5/blob/92e47b85d952274480c8c5efa5900e686241a96b/utils/dataloaders.py#L654\r\nOr am I missing something?
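A minimal, self-contained illustration of the mismatch described above and the usual fix, converting to RGB before running the Albumentations pipeline (a sketch, not the actual `utils/augmentations.py` code; the transform choice and filename are placeholders):

```python
# Sketch: cv2 loads images as BGR, but Albumentations color transforms assume RGB input.
import albumentations as A
import cv2

transform = A.Compose([A.HueSaturationValue(p=1.0)])  # any color-sensitive transform

im_bgr = cv2.imread("image.jpg")                   # BGR, as in the YOLOv5 dataloader
im_rgb = cv2.cvtColor(im_bgr, cv2.COLOR_BGR2RGB)   # convert before augmenting
out = transform(image=im_rgb)["image"]             # augmented RGB image
out_bgr = cv2.cvtColor(out, cv2.COLOR_RGB2BGR)     # back to BGR for the rest of the pipeline
```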
However, the data is currently passed internally as BGR:\r\nhttps://github.com/ultralytics/yolov5/blob/92e47b85d952274480c8c5efa5900e686241a96b/utils/dataloaders.py#L626-L628\r\nhttps://github.com/ultralytics/yolov5/blob/92e47b85d952274480c8c5efa5900e686241a96b/utils/dataloaders.py#L654\r\nOr am I missing something?\n\n### Environment\n\nYOLOv5 torch 1.11 (cuda 11.3) and 1.12 (cuda 11.6)\n\n### Minimal Reproducible Example\n\n_No response_\n\n### Additional\n\n_No response_\n\n### Are you willing to submit a PR?\n\n- [X] Yes I'd like to help by submitting a PR!", "pr_html_url": "https://github.com/ultralytics/yolov5/pull/8695", "file_loc": {"base_commit": "b367860196a2590a5f44c9b18401dedfc0543077", "files": [{"path": "utils/augmentations.py", "status": "modified", "Loc": {"('Albumentations', '__call__', 40)": {"mod": [42, 43]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["utils/augmentations.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "0b6266f5e0eab11218871d5560bf9b93f7547aac", "iss_has_pr": 1, "iss_html_url": "https://github.com/ultralytics/yolov5/issues/1816", "iss_label": "question", "title": "time_synchronized() when using CPU for inference on a GPU enabled workstation?", "body": "I'm trying to measure the time taken for inference using a CPU vs. a GPU.\r\n\r\nI set --device to cpu when I run detect.py, but the method time_synchronized() checks torch.cuda.is_available(), which is obviously True, as the GPU is available but not used.\r\nI've also noticed that when I comment out `torch.cuda.synchronize() if torch.cuda.is_available() else None` (in the time_synchronized() method) while using --device as cpu, the inference speeds up.\r\n\r\nShouldn't time_synchronized() be connected to the --device parameter?\r\n\r\n", "pr_html_url": "https://github.com/ultralytics/yolov5/pull/1826", "file_loc": {"base_commit": "0b6266f5e0eab11218871d5560bf9b93f7547aac", "files": [{"path": "utils/torch_utils.py", "status": "modified", "Loc": {"(None, 'init_torch_seeds', 35)": {"mod": [39, 40, 42, 43]}, "(None, 'select_device', 46)": {"mod": [48, 49, 51, 53, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 66, 68]}, "(None, 'time_synchronized', 72)": {"mod": [74]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["utils/torch_utils.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "e2b7bc0b32ecf306fc179bb87bad82216a470b37", "iss_has_pr": 1, "iss_html_url": "https://github.com/ultralytics/yolov5/issues/1945", "iss_label": "bug\nStale", "title": "CoreML export failure: unexpected number of inputs for node x.2 (_convolution): 13", "body": "\r\n## Additional context\r\nThe issue occurs:\r\nConverting Frontend ==> MIL Ops: 0%| | 0/970 [00:00<?, ?
ops/s]Converting op 221 : constant\r\nAdding op '221' of type const\r\nConverting op 222 : constant\r\nAdding op '222' of type const\r\nConverting op 223 : constant\r\nAdding op '223' of type const\r\nConverting op 224 : constant\r\nAdding op '224' of type const\r\nConverting op 225 : constant\r\nConverting op 226 : constant\r\nAdding op '226' of type const\r\nConverting op 227 : constant\r\nAdding op '227' of type const\r\nConverting op 228 : slice\r\nAdding op '228' of type slice_by_index\r\nAdding op '228_begin_0' of type const\r\nAdding op '228_end_0' of type const\r\nAdding op '228_stride_0' of type const\r\nAdding op '228_end_mask_0' of type const\r\nConverting op 229 : slice\r\nAdding op '229' of type slice_by_index\r\nAdding op '229_begin_0' of type const\r\nAdding op '229_end_0' of type const\r\nAdding op '229_stride_0' of type const\r\nAdding op '229_end_mask_0' of type const\r\nConverting op 230 : slice\r\nAdding op '230' of type slice_by_index\r\nAdding op '230_begin_0' of type const\r\nAdding op '230_end_0' of type const\r\nAdding op '230_stride_0' of type const\r\nAdding op '230_end_mask_0' of type const\r\nConverting op 231 : slice\r\nAdding op '231' of type slice_by_index\r\nAdding op '231_begin_0' of type const\r\nAdding op '231_end_0' of type const\r\nAdding op '231_stride_0' of type const\r\nAdding op '231_end_mask_0' of type const\r\nConverting op 232 : slice\r\nAdding op '232' of type slice_by_index\r\nAdding op '232_begin_0' of type const\r\nAdding op '232_end_0' of type const\r\nAdding op '232_stride_0' of type const\r\nAdding op '232_end_mask_0' of type const\r\nConverting op 233 : slice\r\nAdding op '233' of type slice_by_index\r\nAdding op '233_begin_0' of type const\r\nAdding op '233_end_0' of type const\r\nAdding op '233_stride_0' of type const\r\nAdding op '233_end_mask_0' of type const\r\nConverting op 234 : slice\r\nAdding op '234' of type slice_by_index\r\nAdding op '234_begin_0' of type const\r\nAdding op '234_end_0' of type const\r\nAdding op '234_stride_0' of type const\r\nAdding op '234_end_mask_0' of type const\r\nConverting op 235 : slice\r\nAdding op '235' of type slice_by_index\r\nAdding op '235_begin_0' of type const\r\nAdding op '235_end_0' of type const\r\nAdding op '235_stride_0' of type const\r\nAdding op '235_end_mask_0' of type const\r\nConverting op 236 : listconstruct\r\nConverting op input.1 : cat\r\nAdding op 'input.1' of type concat\r\nAdding op 'input.1_interleave_0' of type const\r\nConverting op 238 : listconstruct\r\nAdding op '238' of type const\r\nConverting op 239 : listconstruct\r\nAdding op '239' of type const\r\nConverting op 240 : listconstruct\r\nAdding op '240' of type const\r\nConverting op 241 : listconstruct\r\nAdding op '241' of type const\r\nConverting op x.2 : _convolution\r\nConverting Frontend ==> MIL Ops: 2%|\u2588 | 21/970 [00:00<00:00, 1017.80 ops/s]\r\nCoreML export failure: unexpected number of inputs for node x.2 (_convolution): 13\r\n\r\nExport complete (11.10s). 
Visualize with https://github.com/lutzroeder/netron\r\n\r\n\r\n\r\nWhen I use the command: python models/export.py --weights \"yolov5l.pt\" --img 640 --batch 1\r\nI see #1667\r\nmy torch = 1.7.1, torchvision = 0.8.2, torchaudio = 0.7.2, coremltools = 4.0\r\n\r\nWhat went wrong?", "pr_html_url": "https://github.com/ultralytics/yolov5/pull/2762", "file_loc": {"base_commit": "e2b7bc0b32ecf306fc179bb87bad82216a470b37", "files": [{"path": "README.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [14, 16], "mod": [9, 20, 21, 26, 27, 28, 29, 30, 31, 32, 33, 35, 36, 37, 38, 47, 88]}}}, {"path": "hubconf.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [70]}, "(None, 'yolov5s', 58)": {"mod": [58, 59, 61, 62, 63, 64, 69]}, "(None, 'yolov5m', 72)": {"mod": [72, 73, 75, 76, 77, 78, 80, 81, 82]}, "(None, 'yolov5l', 86)": {"mod": [87, 89, 90, 91, 92, 94, 95, 96]}, "(None, 'yolov5x', 100)": {"mod": [101, 103, 104, 105, 106, 108, 109, 110, 111]}, "(None, 'custom', 114)": {"mod": [114, 115, 117, 118, 119, 120, 122, 123, 124, 125, 126, 127, 129, 130, 131, 132, 133, 134, 135]}}}, {"path": "utils/plots.py", "status": "modified", "Loc": {"(None, 'plot_study_txt', 240)": {"mod": [246, 256, 264]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["hubconf.py", "utils/plots.py"], "doc": ["README.md"], "test": [], "config": [], "asset": []}}, {"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "fd1679975bf55325f606631b28d5d3feb47fbda5", "iss_has_pr": 1, "iss_html_url": "https://github.com/ultralytics/yolov5/issues/2332", "iss_label": "question", "title": "Label smoothing in training option", "body": "Hi, I could not find any questions about label smoothing, so I wonder if there is a `label smoothing` option in the training script?\r\nI think it would be useful, as the authors (from [this](https://arxiv.org/pdf/1902.04103.pdf) paper) demonstrated the performance boost.\r\n![image](https://user-images.githubusercontent.com/36766404/109591796-c64d2500-7b40-11eb-97a9-74b9d909aa7d.png)\r\n", "pr_html_url": "https://github.com/ultralytics/yolov5/pull/2344", "file_loc": {"base_commit": "fd1679975bf55325f606631b28d5d3feb47fbda5", "files": [{"path": "train.py", "status": "modified", "Loc": {"(None, 'train', 41)": {"add": [226]}, "(None, None, None)": {"add": [483]}}}, {"path": "utils/loss.py", "status": "modified", "Loc": {"('ComputeLoss', '__init__', 90)": {"mod": [100]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["train.py", "utils/loss.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "1e1687743a0c2b1f8027076ffc3651a61bbc8b66", "iss_has_pr": 1, "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/94", "iss_label": "", "title": "Toolbox & sounddevice: Invalid sample rate on playback, microphone not recognized", "body": "I've tried using the toolbox, and using the play button throws the following exception on Arch Linux with PulseAudio:\r\n\r\n```\r\nsounddevice.PortAudioError: Error opening OutputStream: Invalid sample rate [PaErrorCode -9997]\r\nTraceback (most recent call last):\r\n File
\"/home/dash/programs/Real-Time-Voice-Cloning/toolbox/__init__.py\", line 81, in <lambda>\r\n func = lambda: self.ui.play(self.ui.selected_utterance.wav, Synthesizer.sample_rate)\r\n File \"/home/dash/programs/Real-Time-Voice-Cloning/toolbox/ui.py\", line 142, in play\r\n sd.play(wav, sample_rate)\r\n File \"/usr/lib/python3.7/site-packages/sounddevice.py\", line 154, in play\r\n **kwargs)\r\n File \"/usr/lib/python3.7/site-packages/sounddevice.py\", line 2417, in start_stream\r\n **kwargs)\r\n File \"/usr/lib/python3.7/site-packages/sounddevice.py\", line 1374, in __init__\r\n **_remove_self(locals()))\r\n File \"/usr/lib/python3.7/site-packages/sounddevice.py\", line 780, in __init__\r\n 'Error opening {0}'.format(self.__class__.__name__))\r\n File \"/usr/lib/python3.7/site-packages/sounddevice.py\", line 2572, in _check\r\n raise PortAudioError(errormsg, err)\r\nsounddevice.PortAudioError: Error opening OutputStream: Invalid sample rate [PaErrorCode -9997]\r\n```\r\n\r\nUsing the recording function throws a similar exception:\r\n\r\n`Error opening InputStream: Invalid sample rate [PaErrorCode -9997]`", "pr_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/pull/390", "file_loc": {"base_commit": "1e1687743a0c2b1f8027076ffc3651a61bbc8b66", "files": [{"path": "toolbox/__init__.py", "status": "modified", "Loc": {"('Toolbox', 'setup_events', 57)": {"add": [85]}}}, {"path": "toolbox/ui.py", "status": "modified", "Loc": {"('UI', 'draw_umap_projections', 98)": {"add": [138]}, "('UI', None, 52)": {"add": [139]}, "(None, None, None)": {"mod": [16]}, "('UI', 'record_one', 147)": {"mod": [168]}, "('UI', '__init__', 342)": {"mod": [429]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["toolbox/__init__.py", "toolbox/ui.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "5d6d9ff499912c32a331f3bb5ed9e1b77db4c7e6", "iss_has_pr": 1, "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/303", "iss_label": "", "title": "Tensorflow import error in the Google Colab notebook", "body": "When installing the requirements with pip, I get the following errors which causes tensorflow to not be installed.\r\n\r\n`ERROR: tensorflow 2.2.0rc1 has requirement tensorboard<2.2.0,>=2.1.0, but you'll have tensorboard 1.14.0 which is incompatible.`\r\n`ERROR: tensorflow 2.2.0rc1 has requirement tensorflow-estimator<2.3.0,>=2.2.0rc0, but you'll have tensorflow-estimator 1.14.0 which is incompatible.`", "pr_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/pull/366", "file_loc": {"base_commit": "5d6d9ff499912c32a331f3bb5ed9e1b77db4c7e6", "files": [{"path": "demo_cli.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [7, 32], "mod": [41, 42, 44, 45, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 175, 177, 178, 180]}}}, {"path": "encoder/inference.py", "status": "modified", "Loc": {"(None, 'load_model', 15)": {"mod": [33]}}}, {"path": "encoder/train.py", "status": "modified", "Loc": {"(None, 'sync', 9)": {"add": [14], "mod": [10, 11]}}}, {"path": "requirements.txt", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1]}}}, {"path": "synthesizer/feeder.py", "status": "modified", "Loc": {"('Feeder', '__init__', 17)": {"mod": [73, 74, 75, 77, 78, 79, 83, 88, 103]}}}, {"path": "synthesizer/inference.py", 
"status": "modified", "Loc": {"('Synthesizer', 'load', 50)": {"mod": [57]}, "('Synthesizer', '_one_shot_synthesize_spectrograms', 89)": {"mod": [91]}}}, {"path": "synthesizer/models/attention.py", "status": "modified", "Loc": {"(None, '_location_sensitive_score', 38)": {"mod": [63, 66]}, "('LocationSensitiveAttention', '__init__', 111)": {"mod": [158, 161]}}}, {"path": "synthesizer/models/helpers.py", "status": "modified", "Loc": {"('TacoTrainingHelper', 'next_inputs', 115)": {"mod": [122]}}}, {"path": "synthesizer/models/modules.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [1]}, "('HighwayNet', '__init__', 5)": {"mod": [9, 10]}, "('HighwayNet', '__call__', 13)": {"mod": [14]}, "('CBHG', '__call__', 40)": {"mod": [41, 42, 74]}, "('ZoneoutLSTMCell', None, 91)": {"mod": [91]}, "('ZoneoutLSTMCell', '__init__', 102)": {"mod": [112]}, "('ZoneoutLSTMCell', '__call__', 126)": {"mod": [147, 148, 149, 150, 156]}, "('EncoderConvolutions', '__call__', 186)": {"mod": [187]}, "('EncoderRNN', '__call__', 228)": {"mod": [229, 230]}, "('Prenet', '__call__', 263)": {"mod": [266, 268, 272]}, "('DecoderRNN', '__init__', 281)": {"mod": [305]}, "('DecoderRNN', '__call__', 307)": {"mod": [308]}, "('FrameProjection', '__init__', 316)": {"mod": [330]}, "('FrameProjection', '__call__', 333)": {"mod": [334, 337]}, "('StopProjection', '__call__', 364)": {"mod": [365]}, "('Postnet', '__call__', 401)": {"mod": [402]}, "(None, 'conv1d', 414)": {"mod": [415, 416, 422, 424]}}}, {"path": "synthesizer/models/tacotron.py", "status": "modified", "Loc": {"('Tacotron', 'initialize', 31)": {"mod": [86, 87, 88, 89, 90, 123, 124, 125, 135, 286]}, "('Tacotron', 'add_loss', 312)": {"mod": [334, 335, 336, 359, 360, 362, 363]}, "('Tacotron', 'add_optimizer', 427)": {"mod": [442, 451, 452, 457, 458, 460, 493]}, "('Tacotron', '_learning_rate_decay', 497)": {"mod": [513, 514, 515, 516, 517, 518]}}}, {"path": "synthesizer/tacotron2.py", "status": "modified", "Loc": {"('Tacotron2', '__init__', 12)": {"mod": [15, 16, 17, 19, 20, 21, 55, 59, 60, 62]}}}, {"path": "synthesizer/train.py", "status": "modified", "Loc": {"(None, 'add_train_stats', 35)": {"mod": [36, 38, 39, 40, 41, 44, 46, 47, 49, 50, 51, 52, 54, 56, 57, 58, 60]}, "(None, 'add_eval_stats', 63)": {"mod": [66, 67, 68, 69, 70, 71, 72, 75, 76, 77]}, "(None, 'model_train_mode', 85)": {"mod": [86]}, "(None, 'model_test_mode', 98)": {"mod": [99]}, "(None, 'train', 110)": {"mod": [139, 143, 167, 172, 177, 179, 181]}}}, {"path": "vocoder/inference.py", "status": "modified", "Loc": {"(None, 'load_model', 8)": {"mod": [9, 26, 30]}}}, {"path": "vocoder/models/fatchord_version.py", "status": "modified", "Loc": {"('WaveRNN', 'generate', 149)": {"mod": [160, 171, 172, 173]}, "('WaveRNN', 'pad_tensor', 258)": {"mod": [263]}, "('WaveRNN', 'fold_with_overlap', 270)": {"mod": [309]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["encoder/inference.py", "synthesizer/inference.py", "synthesizer/models/modules.py", "vocoder/inference.py", "synthesizer/models/tacotron.py", "synthesizer/train.py", "synthesizer/models/attention.py", "vocoder/models/fatchord_version.py", "synthesizer/feeder.py", "synthesizer/models/helpers.py", "synthesizer/tacotron2.py", "demo_cli.py", "encoder/train.py"], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}}, {"organization": "CorentinJ", "repo_name": 
"Real-Time-Voice-Cloning", "base_commit": "1b8d2e794b32039aa7ecc6367dabb64a3e5e6467", "iss_has_pr": 1, "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/89", "iss_label": "", "title": "Charmap codec can't encode", "body": "Hi,\r\n\r\nI'm running the cli demo and I got all the way to \"Write a sentence..\" and I get this error. Could you please help? I've been trying to get this to work for me since 8/8 and working through many setbacks...I'm finally close.\r\n\r\n`\r\nCreated the mel spectrogram\r\nSynthesizing the waveform:\r\nTraceback (most recent call last):\r\n File \"demo_cli.py\", line 161, in <module>\r\n generated_wav = vocoder.infer_waveform(spec)\r\n File \"C:\\Users\\selinakvle\\Real-Time-Voice-Cloning\\vocoder\\inference.py\", line 57, in infer_waveform\r\n wav = _model.generate(mel, batched, target, overlap, hp.mu_law, progress_callback)\r\n File \"C:\\Users\\selinakvle\\Real-Time-Voice-Cloning\\vocoder\\models\\fatchord_version.py\", line 219, in generate\r\n progress_callback(i, seq_len, b_size, gen_rate)\r\n File \"C:\\Users\\selinakvle\\Real-Time-Voice-Cloning\\vocoder\\models\\fatchord_version.py\", line 248, in gen_display\r\n stream(msg)\r\n File \"C:\\Users\\selinakvle\\Real-Time-Voice-Cloning\\vocoder\\display.py\", line 16, in stream\r\n sys.stdout.write(\"\\r{%s}\" % message)\r\n File \"C:\\Users\\selinakvle\\AppData\\Local\\Programs\\Python\\Python36\\lib\\encodings\\cp1252.py\", line 19, in encode\r\n return codecs.charmap_encode(input,self.errors,encoding_table)[0]\r\nUnicodeEncodeError: 'charmap' codec can't encode characters in position 4-19: character maps to <undefined>\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"demo_cli.py\", line 184, in <module>\r\n print(\"Caught exception: %s\" % repr(e))\r\n File \"C:\\Users\\selinakvle\\AppData\\Local\\Programs\\Python\\Python36\\lib\\encodings\\cp1252.py\", line 19, in encode\r\n return codecs.charmap_encode(input,self.errors,encoding_table)[0]\r\nUnicodeEncodeError: 'charmap' codec can't encode characters in position 54-69: character maps to <undefined>\r\n`", "pr_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/pull/372", "file_loc": {"base_commit": "1b8d2e794b32039aa7ecc6367dabb64a3e5e6467", "files": [{"path": "vocoder/display.py", "status": "modified", "Loc": {"(None, 'stream', 15)": {"mod": [16]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["vocoder/display.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "eaf5ec4467795344e7d9601515b017fd8c46e44b", "iss_has_pr": 1, "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/413", "iss_label": "enhancement\nhelp wanted", "title": "Updates for synthesizer training using LibriTTS", "body": "I am certain someone has done this before (such as @sberryman in #126). Would someone please share the code modifications needed to train the synthesizer on LibriTTS?\r\n\r\nIf we can improve the [training process](https://github.com/CorentinJ/Real-Time-Voice-Cloning/wiki/Training) to use LibriTTS in place of LibriSpeech, we can also generate a new set of pretrained models for better output quality.\r\n\r\nHere are some questions to get it started... 
but feel free to skip ahead and share finished code if it's already available.\r\n* Can `preprocess_librispeech` be reused for TTS? See [synthesizer/preprocess.py](https://github.com/CorentinJ/Real-Time-Voice-Cloning/blob/master/synthesizer/preprocess.py#L13)\r\n* Are LibriTTS alignments available? I see [LibriTTSLabel](https://github.com/kan-bayashi/LibriTTSLabel) and [Montreal-Forced-Aligner](https://github.com/MontrealCorpusTools/Montreal-Forced-Aligner). But I am not sure what else is needed to get it in a form that the RTVC repo can use.", "pr_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/pull/441", "file_loc": {"base_commit": "eaf5ec4467795344e7d9601515b017fd8c46e44b", "files": [{"path": "synthesizer/preprocess.py", "status": "modified", "Loc": {"(None, 'preprocess_librispeech', 13)": {"mod": [13, 14, 16, 17, 18, 33, 35]}, "(None, 'preprocess_speaker', 54)": {"mod": [54, 57, 58, 59, 60, 61, 62, 63, 64, 66, 67, 68, 69, 70, 71, 73, 74, 75, 76, 77, 78]}}}, {"path": "synthesizer_preprocess_audio.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [28], "mod": [1, 52]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["synthesizer/preprocess.py", "synthesizer_preprocess_audio.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "1b8d2e794b32039aa7ecc6367dabb64a3e5e6467", "iss_has_pr": 1, "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/235", "iss_label": "", "title": "I keep getting TypeError: Invalid file: WindowsPath", "body": "Hi guys, I keep getting this error when running python demo_toolbox.py:\r\nException: Invalid file: WindowsPath('D:/ai/LibriSpeech/train-clean-360/6157/40556/6157-40556-0111.flac')\r\n\r\nI also get this error using e.g. python synthesizer_preprocess_audio.py:\r\nTypeError: Invalid file: WindowsPath('D:/ai/LibriSpeech/train-clean-100/103/1240/103-1240-0000.flac')\r\n\r\nAny help with this would be fantastic; it may be something simple, as I have only just started with Python a few days ago. \r\n\r\nI am running Windows 10 and using Anaconda and have downloaded all the files required. I just can't seem to load any voices in the toolbox through VoxCeleb, LibriSpeech, or custom audio files in any format, but I can record my own voice in the toolbox.
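(For reference, the usual workaround for this class of error is to cast the pathlib object to a plain string before it reaches the audio loader; a minimal sketch, assuming librosa is the loader that rejects the WindowsPath:)

```python
from pathlib import Path

import librosa

fpath = Path("D:/ai/LibriSpeech/train-clean-360/6157/40556/6157-40556-0111.flac")
# Older librosa/soundfile releases only accept plain string paths,
# so casting with str(...) avoids "Invalid file: WindowsPath(...)".
wav, sampling_rate = librosa.load(str(fpath), sr=None)
```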
Thanks guys hopefully someone can help me out.\r\n\r\nCheers\r\nGlenn\r\n\r\n", "pr_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/pull/371", "file_loc": {"base_commit": "1b8d2e794b32039aa7ecc6367dabb64a3e5e6467", "files": [{"path": "demo_cli.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [138]}}}, {"path": "encoder/audio.py", "status": "modified", "Loc": {"(None, 'preprocess_wav', 13)": {"mod": [28]}}}, {"path": "synthesizer/inference.py", "status": "modified", "Loc": {"('Synthesizer', 'load_preprocess_wav', 106)": {"mod": [111]}}}, {"path": "synthesizer/preprocess.py", "status": "modified", "Loc": {"(None, 'split_on_silences', 83)": {"mod": [85]}}}, {"path": "vocoder/audio.py", "status": "modified", "Loc": {"(None, 'load_wav', 18)": {"mod": [19]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["synthesizer/inference.py", "encoder/audio.py", "synthesizer/preprocess.py", "vocoder/audio.py", "demo_cli.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "070a3c187f87136ebe92aa72766f8343772d414e", "iss_has_pr": 1, "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/375", "iss_label": "", "title": "Make webrtcvad optional for inference", "body": "> Second thing: webrtcvad. That package is hell to install on windows. There are alternatives for noise removal out there. There's also the possibility of not using it at all, but for both LibriSpeech and LibriTTS I would recommend it.\r\n\r\nPropose making webrtcvad completely optional for running demo_cli.py. This would make it a lot easier for Windows users who just want to try cloning a voice with the pretrained models. 
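A minimal sketch of the optional-import pattern this implies, along the lines of the Stack Overflow answer linked below (the function name trim_long_silences and the pass-through fallback are assumptions for illustration, not the repo's final code):

```python
try:
    import webrtcvad
except ImportError:
    webrtcvad = None  # inference can proceed without voice activity detection

def trim_long_silences(wav):
    if webrtcvad is None:
        # webrtcvad not installed: skip VAD-based trimming instead of crashing
        return wav
    ...  # normal webrtcvad-based silence removal used during preprocessing
```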
It would continue to be used when preprocessing audio files for training.\r\n\r\nAn optional import of webrtcvad could be done using something like this: [https://stackoverflow.com/a/52826085](https://stackoverflow.com/a/52826085)", "pr_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/pull/376", "file_loc": {"base_commit": "070a3c187f87136ebe92aa72766f8343772d414e", "files": [{"path": "encoder/audio.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [4, 9], "mod": [6]}, "(None, 'preprocess_wav', 13)": {"mod": [38]}}}, {"path": "encoder_preprocess.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [39, 41]}}}, {"path": "requirements.txt", "status": "modified", "Loc": {"(None, None, None)": {"mod": [8]}}}, {"path": "synthesizer_preprocess_audio.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [26, 36]}}}, {"path": "vocoder_preprocess.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [30, 39]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["vocoder_preprocess.py", "synthesizer_preprocess_audio.py", "encoder_preprocess.py", "encoder/audio.py"], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}}, {"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "6944770f678f0545ef503efd6ec87ac65db0a016", "iss_has_pr": 1, "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/395", "iss_label": "", "title": "Can't load voice in", "body": "Hey guys, whenever I try to load my voice sample, I keep getting either just\r\n`Exception:`\r\nor\r\n`Exception: expected str, bytes or os.PathLike object, not Nonetype`\r\nPlease help!\r\n", "pr_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/pull/414", "file_loc": {"base_commit": "6944770f678f0545ef503efd6ec87ac65db0a016", "files": [{"path": "README.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [65], "mod": [34, 36, 37, 39, 41, 43, 45, 48, 55, 58]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []}}, {"organization": "AUTOMATIC1111", "repo_name": "stable-diffusion-webui", "base_commit": "ae6b30907db2060962c533de79ab4bd2c6b12297", "iss_has_pr": 1, "iss_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/7021", "iss_label": "bug-report", "title": "[Bug]: Inpainting color correction", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues and checked the recent builds/commits\n\n### What happened?\n\nEven with color correction turned on in the settings, while inpainting, the final render still ends up with a bluish color on human subjects in the area that was inpainted\n\n### Steps to reproduce the problem\n\n1. Go to .... \r\n2. Press ....\r\n3.
...\r\n\n\n### What should have happened?\n\nless bluing and more of a skintone match\n\n### Commit where the problem happens\n\ne33cace2c2074ef342d027c1f31ffc4b3c3e877e\n\n### What platforms do you use to access UI ?\n\nWindows\n\n### What browsers do you use to access the UI ?\n\nMozilla Firefox\n\n### Command Line Arguments\n\n```Shell\n--xformers\n```\n\n\n### Additional information, context and logs\n\n_No response_", "pr_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12480", "file_loc": {"base_commit": "ae6b30907db2060962c533de79ab4bd2c6b12297", "files": [{"path": "modules/processing.py", "status": "modified", "Loc": {"(None, 'apply_color_correction', 47)": {"mod": [60]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["modules/processing.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "AUTOMATIC1111", "repo_name": "stable-diffusion-webui", "base_commit": "9d5becb4decb27683af749058f61e40842fe9c93", "iss_has_pr": 1, "iss_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/1364", "iss_label": "bug-report", "title": "LDSR: UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 64: invalid start byte", "body": "**Describe the bug**\r\n\r\nAfter today's refactoring commits, using LDSR upscaling produces an error:\r\n\r\n`UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 64: invalid start byte`\r\n\r\nThis is on Linux, even after a fresh download (I moved the old LDSR related models aside). It looks like an issue with encodings, as utf-8 is involved. I guess it could even possibly work on Windows but not on Linux?\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Go to extras\r\n2. Click on LDSR\r\n3. Add an image\r\n4. Click Generate\r\n\r\n**Expected behavior**\r\nLDSR should work\r\n\r\n\r\n\r\n**Desktop (please complete the following information):**\r\n - OS: Fedora Linux 37 beta\r\n - Browser: Firefox\r\n - Commit revision: 5c0c778a65c8f89a85395fb10e32d3b35ea57196\r\n\r\n**Additional context**\r\n\r\nIt works in git commit 498515e7a19bb3e8ab36aab2e628eb6be7464401 (a commit from last night, before all the refactoring). Well, \"works\". Sometimes there's a black edge with missing pixels on the right and bottom. Other times, it's fine. 
(I think it's related to resolution and/or aspect ratio?)\r\n\r\nComplete traceback:\r\n\r\n```python\r\nTraceback (most recent call last):\r\n File \"/var/home/garrett/Source/stable-diffusion/stable-diffusion-webui-auto/modules/ui.py\", line 153, in f\r\n res = list(func(*args, **kwargs))\r\n File \"/var/home/garrett/Source/stable-diffusion/stable-diffusion-webui-auto/webui.py\", line 63, in f\r\n res = func(*args, **kwargs)\r\n File \"/var/home/garrett/Source/stable-diffusion/stable-diffusion-webui-auto/modules/extras.py\", line 85, in run_extras\r\n res = upscale(image, extras_upscaler_1, upscaling_resize)\r\n File \"/var/home/garrett/Source/stable-diffusion/stable-diffusion-webui-auto/modules/extras.py\", line 79, in upscale\r\n c = upscaler.scaler.upscale(image, resize, upscaler.data_path)\r\n File \"/var/home/garrett/Source/stable-diffusion/stable-diffusion-webui-auto/modules/upscaler.py\", line 61, in upscale\r\n img = self.do_upscale(img, selected_model)\r\n File \"/var/home/garrett/Source/stable-diffusion/stable-diffusion-webui-auto/modules/ldsr_model.py\", line 45, in do_upscale\r\n return ldsr.super_resolution(img, ddim_steps, self.scale)\r\n File \"/var/home/garrett/Source/stable-diffusion/stable-diffusion-webui-auto/modules/ldsr_model_arch.py\", line 87, in super_resolution\r\n model = self.load_model_from_config(half_attention)\r\n File \"/var/home/garrett/Source/stable-diffusion/stable-diffusion-webui-auto/modules/ldsr_model_arch.py\", line 24, in load_model_from_config\r\n config = OmegaConf.load(self.yamlPath)\r\n File \"/var/home/garrett/.local/lib/python3.10/site-packages/omegaconf/omegaconf.py\", line 188, in load\r\n obj = yaml.load(f, Loader=get_yaml_loader())\r\n File \"/var/home/garrett/.local/lib/python3.10/site-packages/yaml/__init__.py\", line 79, in load\r\n loader = Loader(stream)\r\n File \"/var/home/garrett/.local/lib/python3.10/site-packages/yaml/loader.py\", line 34, in __init__\r\n Reader.__init__(self, stream)\r\n File \"/var/home/garrett/.local/lib/python3.10/site-packages/yaml/reader.py\", line 85, in __init__\r\n self.determine_encoding()\r\n File \"/var/home/garrett/.local/lib/python3.10/site-packages/yaml/reader.py\", line 124, in determine_encoding\r\n self.update_raw()\r\n File \"/var/home/garrett/.local/lib/python3.10/site-packages/yaml/reader.py\", line 178, in update_raw\r\n data = self.stream.read(size)\r\n File \"/usr/lib64/python3.10/codecs.py\", line 322, in decode\r\n (result, consumed) = self._buffer_decode(data, self.errors, final)\r\nUnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 64: invalid start byte\r\n```", "pr_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/1371", "file_loc": {"base_commit": "2b03f0bbda1229dff6e7ab6f656b28587eba8308", "files": [{"path": "modules/bsrgan_model.py", "status": "modified", "Loc": {"('UpscalerBSRGAN', 'load_model', 63)": {"mod": [72]}}}, {"path": "modules/ldsr_model.py", "status": "modified", "Loc": {"('UpscalerLDSR', None, 13)": {"add": [24]}, "('UpscalerLDSR', 'load_model', 24)": {"mod": [26]}, "('UpscalerLDSR', 'do_upscale', 38)": {"mod": [44]}}}, {"path": "modules/ldsr_model_arch.py", "status": "modified", "Loc": {"('LDSR', 'super_resolution', 86)": {"mod": [101, 103, 114]}}}, {"path": "modules/modelloader.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0]}, "(None, 'load_models', 13)": {"mod": [44]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", 
"loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["modules/bsrgan_model.py", "modules/modelloader.py", "modules/ldsr_model.py", "modules/ldsr_model_arch.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "AUTOMATIC1111", "repo_name": "stable-diffusion-webui", "base_commit": "f92d61497a426a19818625c3ccdaae9beeb82b31", "iss_has_pr": 1, "iss_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/14024", "iss_label": "enhancement", "title": "[Feature Request]: Img2Img inpainting/sketching - Non-binary/alpha weighted denoising mask", "body": "### Is there an existing issue for this?\r\n\r\n- [X] I have searched the existing issues and checked the recent builds/commits\r\n\r\n### What would your feature do ?\r\n\r\n#### Problem to solve\r\n\r\nIt appears that the denoiser only considers a binary mask (with a hard boundary) with respect to what pixels should be denoised, even with extreme blurring values. Specifically, only if the mask/sketch opacity is greater than 50% does the region under that pixel get denoised. The resulting image and the original image are simply alpha-blended together using the mask opacity values.\r\n\r\n#### Why this is a problem\r\n\r\n- When inpainting, even with a very high mask blur, a seam will appear at the 50% opacity threshold.\r\n- When inpaint-sketching, with any amount of mask blur, the colors of the sketch will bleed into regions of the image that do not recieve denoising. (Without mask blur the results are full of seams.)\r\n- Inpaint sketching with 50% mask transparency or more is pointless as nothing is inpainted.\r\n- It is difficult to inpaint objects with indefinite boundaries like dust clouds, or in any situation where some kind of gradual seamless transition in texture is needed. In these cases, the original texture is destroyed when it should be partially preserved.\r\n\r\n#### What possibilities solving it brings\r\n\r\n- Brushes with feathered edges\r\n- Compositing images with alpha channels\r\n- Depth-related effects if the mask represents a depth map\r\n\r\n#### Proposed solution\r\n\r\n**Interpret the mask opacity as a per-pixel multiplier for the denoising strength.**\r\nAFAIK there are a few ways one could achieve this effect:\r\n\r\n- Perhaps existing models support this implicitly - when any part of the pipeline (noising and denoising) considers the denoising strength parameter, have it examine a denoising value assigned to each 8x8 block of pixels (instead of a single global parameter). E.g. scale the amount of latent noise added, and scale the change to the latent block created by the denoiser at each iteration.\r\n- Modify the latent image before and after noising steps - The initial noise that is added to the latent image can be scaled according to each 8x8 block's denoising strength. Then after each step, \"pull\" each 8x8 block's latent vector back to what it was originally. The amount it gets pulled back depends on the denoising strength of that block.\r\n\r\nI believe either of these would allow inpainting objects with partial opacity or very gradual transitions, where content in a transition region is preserved.\r\n\r\n##### Alternate solution: dithering\r\n\r\nA simpler option could be to use dithering to decide whether a given pixel/block is masked. 
In other words, using some kind of dithering pattern (Bayer, blue noise, Floyd\u2013Steinberg), the mask opacity represents the probability that a given element of the image is affected by the denoiser.\r\n\r\n##### Alternate solution: adjust mask threshold\r\n\r\nAn even simpler solution could be to change the mask opacity threshold at which denoising occurs from >=50% to >0%. In other words, if the mask has opacity greater than 0, it is included in the denoising.\r\nThen, the original content could be blended over-top to completely hide the seam at the point where the mask has 0 opacity.\r\n\r\nHowever, the main drawback is that ghosting artifacts will appear where both the original and modified image are visible. (Though this is an issue with the current implementation anyway.)\r\n\r\n### Proposed workflow\r\n\r\n1. Open Img2Img -> inpaint/inpaint sketch, load an image\r\n2. Select a brush with options for opacity, force/flow and softness. (Mask blur and transparency may be made obsolete by this feature.)\r\n3. (Optional) Tweak the alpha power slider. Repeated iterations may cause partially masked latent blocks to still have strong modifications, pushing the transition zone to regions with almost no masking. Bringing the mask opacity to a power could help make the transitions more perceptually gradual.\r\n4. When ready, regenerate the image to observe no seams, gradual transitions and partial preservation of partially masked content, and no color leakage from blurred/soft sketch strokes.", "pr_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14208", "file_loc": {"base_commit": "f92d61497a426a19818625c3ccdaae9beeb82b31", "files": [{"path": "modules/images.py", "status": "modified", "Loc": {}}, {"path": "modules/processing.py", "status": "modified", "Loc": {"(None, 'process_images_inner', 750)": {"add": [869, 924], "mod": [927, 931, 941, 943, 950]}, "('StableDiffusionProcessingImg2Img', None, 1345)": {"add": [1353]}, "(None, 'apply_overlay', 65)": {"mod": [65, 66, 67, 69, 72, 73, 74, 75, 76]}, "(None, 'create_binary_mask', 84)": {"mod": [84, 86]}, "('StableDiffusionProcessing', None, 116)": {"mod": [311, 348]}, "('StableDiffusionProcessing', 'inpainting_image_conditioning', 311)": {"mod": [323, 324]}, "('StableDiffusionProcessing', 'img2img_image_conditioning', 348)": {"mod": [360]}, "('StableDiffusionProcessingImg2Img', 'init', 1388)": {"mod": [1399, 1506, 1518]}, "('StableDiffusionProcessingImg2Img', 'sample', 1520)": {"mod": [1530]}}}, {"path": "modules/scripts.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [13, 18]}, "('Script', None, 30)": {"add": [208, 215]}, "('ScriptRunner', None, 496)": {"add": [769, 777]}}}, {"path": "modules/sd_samplers_cfg_denoiser.py", "status": "modified", "Loc": {"('CFGDenoiser', '__init__', 41)": {"add": [58]}, "('CFGDenoiser', 'forward', 91)": {"add": [107, 209], "mod": [109, 211]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["modules/sd_samplers_cfg_denoiser.py", "modules/processing.py", "modules/images.py", "modules/scripts.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "AUTOMATIC1111", "repo_name": "stable-diffusion-webui", "base_commit": "09c1be96748584b08b6299024bb7b64bafb09d09", "iss_has_pr": 1, "iss_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/12139", "iss_label": "enhancement", "title":
"[Feature Request]: command-line argument to disable extensions", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues and checked the recent builds/commits\n\n### What would your feature do ?\n\nNew command-line option to disable all extensions. This would make it easier to troubleshoot during upgrades or development. It would also be quicker than starting the UI, clicking the disable extensions option within the extensions tab, and then restarting. And sometimes an extension might prevent the UI from even starting, making that impossible anyway. When this flag is set at runtime, that should override the similar feature within the Extensions tab, to indicate that it's not possible to run extensions in this mode. I would suggest graying out or otherwise indicate in the extension tab that we are running in no-extensions mode.\r\n\r\nSuggested command-line argument name: \"--disable-all-extensions\" to align with \"--update-all-extensions\".\n\n### Proposed workflow\n\n1. Add option --disable-all-extensions to launch script\r\n2. Start webui\r\n3. No extensions will be loaded\n\n### Additional information\n\n_No response_", "pr_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12294", "file_loc": {"base_commit": "09c1be96748584b08b6299024bb7b64bafb09d09", "files": [{"path": "modules/cmd_args.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [113]}}}, {"path": "modules/extensions.py", "status": "modified", "Loc": {"(None, 'list_extensions', 138)": {"add": [145], "mod": [144]}, "(None, 'active', 13)": {"mod": [14, 16]}}}, {"path": "modules/ui_extensions.py", "status": "modified", "Loc": {"(None, 'extension_table', 136)": {"mod": [167]}, "(None, 'create_ui', 520)": {"mod": [540, 541, 542, 543, 544, 545]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["modules/extensions.py", "modules/cmd_args.py", "modules/ui_extensions.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "AUTOMATIC1111", "repo_name": "stable-diffusion-webui", "base_commit": "67c884196d4627903f6598989251ec5b2c46a4ce", "iss_has_pr": 1, "iss_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/10036", "iss_label": "cannot-reproduce\nbug-report", "title": "[Bug]: LoRa's wont work", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues and checked the recent builds/commits\n\n### What happened?\n\nI have this error code when I use a LoRa, and they are not applied to the prompt\r\n\n\n### Steps to reproduce the problem\n\nUsing any lora \n\n### What should have happened?\n\nLoRa's should be used \n\n### Commit where the problem happens\n\n5ab7f21\n\n### What platforms do you use to access the UI ?\n\nWindows\n\n### What browsers do you use to access the UI ?\n\nMozilla Firefox\n\n### Command Line Arguments\n\n```Shell\n--deepdanbooru --api --no-half-vae --xformers\n```\n\n\n### List of extensions\n\n<html><body>\r\n<!--StartFragment-->\r\n\r\nDreamArtist-sd-webui-extension | https://github.com/7eu7d7/DreamArtist-sd-webui-extension.git | 12f80775 (Mon Apr 24 05:53:26 2023) | unknown\r\n-- | -- | -- | --\r\n\u00a0\r\n\r\n<!--EndFragment-->\r\n</body>\r\n</html>DreamArtist-sd-webui-extension \thttps://github.com/7eu7d7/DreamArtist-sd-webui-extension.git \t[12f80775 (Mon Apr 24 05:53:26 
2023)](https://github.com/7eu7d7/DreamArtist-sd-webui-extension.git/commit/12f8077517b11199802f8d448d36ea573debae96) \tunknown\r\na1111-sd-webui-tagcomplete \thttps://github.com/DominikDoom/a1111-sd-webui-tagcomplete \t[a2e7b6bf (Tue May 2 10:30:04 2023)](https://github.com/DominikDoom/a1111-sd-webui-tagcomplete/commit/a2e7b6bf6c8cbdff031b5b5929de150bf548c582) \tunknown\r\nmulti-subject-render \thttps://github.com/Extraltodeus/multi-subject-render.git \t[03427e26 (Mon Mar 6 14:11:30 2023)](https://github.com/Extraltodeus/multi-subject-render.git/commit/03427e26bebdc6da0ccfb749bf3c4e7e33d7458b) \tunknown\r\nopenOutpaint-webUI-extension \thttps://github.com/zero01101/openOutpaint-webUI-extension \t[5e84d6d5 (Mon Apr 10 23:01:41 2023)](https://github.com/zero01101/openOutpaint-webUI-extension/commit/5e84d6d5b1057f837eeecaa49a92a235dd589bc5) \tunknown\r\nsd-webui-ar \thttps://github.com/alemelis/sd-webui-ar.git \t[9df49dc2 (Wed Apr 12 09:23:17 2023)](https://github.com/alemelis/sd-webui-ar.git/commit/9df49dc2d7da7333ac918fbce926c2370a3b8b53) \tunknown\r\nsd-webui-controlnet \thttps://github.com/Mikubill/sd-webui-controlnet \t[a482867e (Tue May 2 23:13:18 2023)](https://github.com/Mikubill/sd-webui-controlnet/commit/a482867ee5e82b08b221c53662ff0c70c2f18d09) \tunknown\r\nsd-webui-infinite-image-browsing \thttps://github.com/zanllp/sd-webui-infinite-image-browsing.git \t[6bc7f4ca (Tue May 2 19:52:50 2023)](https://github.com/zanllp/sd-webui-infinite-image-browsing.git/commit/6bc7f4ca1e10e932e34453fb744d1bd006640b09) \tunknown\n\n### Console logs\n\n```Shell\nTraceback (most recent call last):\r\n File \"C:\\Users\\jesus\\stable-diffusion-webui\\extensions-builtin\\Lora\\lora.py\", line 215, in load_loras\r\n lora = load_lora(name, lora_on_disk.filename)\r\n File \"C:\\Users\\jesus\\stable-diffusion-webui\\extensions-builtin\\Lora\\lora.py\", line 176, in load_lora\r\n module.weight.copy_(weight)\r\nRuntimeError: output with shape [32, 320, 1, 1] doesn't match the broadcast shape [32, 320, 3, 3]\n```\n\n\n### Additional information\n\n_No response_", "pr_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/10089", "file_loc": {"base_commit": "67c884196d4627903f6598989251ec5b2c46a4ce", "files": [{"path": "extensions-builtin/Lora/lora.py", "status": "modified", "Loc": {"(None, 'load_lora', 130)": {"add": [169], "mod": [168]}, "(None, 'lora_calc_updown', 229)": {"add": [234]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["extensions-builtin/Lora/lora.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "python", "repo_name": "cpython", "base_commit": "132fd38f13e127d87dc83c065bf14bf80a0a0c30", "iss_has_pr": 1, "iss_html_url": "https://github.com/python/cpython/issues/67206", "iss_label": "docs\nstdlib\ntopic-unicode", "title": "string.printable.isprintable() returns False", "body": "BPO | [23017](https://bugs.python.org/issue23017)\n--- | :---\nNosy | @birkenfeld, @vstinner, @ezio-melotti, @stevendaprano, @bitdancer, @4kir4, @iritkatriel\nFiles | <li>[bug-string-ascii.py](https://bugs.python.org/file37391/bug-string-ascii.py \"Uploaded as text/plain at 2014-12-09.03:51:59 by planet36\"): Test case shows that string.printable has control characters</li><li>[0001-Fix-string.printable-respect-POSIX-spec.patch](https://bugs.python.org/file37398/0001-Fix-string.printable-respect-POSIX-spec.patch 
\"Uploaded as text/plain at 2014-12-09.14:42:29 by bru\")</li><li>[docs-string.printable.diff](https://bugs.python.org/file37441/docs-string.printable.diff \"Uploaded as text/plain at 2014-12-13.15:30:05 by @4kir4\")</li>\n\n<sup>*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup>\n\n<details><summary>Show more details</summary><p>\n\nGitHub fields:\n```python\nassignee = None\nclosed_at = None\ncreated_at = <Date 2014-12-09.03:52:01.009>\nlabels = ['type-bug', '3.9', '3.10', '3.11', 'library', 'expert-unicode', 'docs']\ntitle = 'string.printable.isprintable() returns False'\nupdated_at = <Date 2021-11-29.16:17:13.755>\nuser = 'https://bugs.python.org/planet36'\n```\n\nbugs.python.org fields:\n```python\nactivity = <Date 2021-11-29.16:17:13.755>\nactor = 'iritkatriel'\nassignee = 'docs@python'\nclosed = False\nclosed_date = None\ncloser = None\ncomponents = ['Documentation', 'Library (Lib)', 'Unicode']\ncreation = <Date 2014-12-09.03:52:01.009>\ncreator = 'planet36'\ndependencies = []\nfiles = ['37391', '37398', '37441']\nhgrepos = []\nissue_num = 23017\nkeywords = ['patch']\nmessage_count = 5.0\nmessages = ['232343', '232376', '232382', '232613', '407290']\nnosy_count = 10.0\nnosy_names = ['georg.brandl', 'vstinner', 'ezio.melotti', 'steven.daprano', 'r.david.murray', 'docs@python', 'akira', 'planet36', 'bru', 'iritkatriel']\npr_nums = []\npriority = 'normal'\nresolution = None\nstage = None\nstatus = 'open'\nsuperseder = None\ntype = 'behavior'\nurl = 'https://bugs.python.org/issue23017'\nversions = ['Python 3.9', 'Python 3.10', 'Python 3.11']\n```\n\n</p></details>\n\n\n<!-- gh-linked-prs -->\n### Linked PRs\n* gh-128820\n* gh-128867\n* gh-128868\n<!-- /gh-linked-prs -->\n", "pr_html_url": "https://github.com/python/cpython/pull/128820", "file_loc": {"base_commit": "eefd4a0bc764c0272c560f26dd10fb8fba0fb7d4", "files": [{"path": "Doc/library/string.rst", "status": "modified", "Loc": {"(None, None, 64)": {"mod": [64, 65, 66]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": ["Doc/library/string.rst"], "test": [], "config": [], "asset": []}}, {"organization": "python", "repo_name": "cpython", "base_commit": "0d1cbff833f761f80383f4ce5fe31f686f3f04eb", "iss_has_pr": 1, "iss_html_url": "https://github.com/python/cpython/issues/111259", "iss_label": "performance\ntopic-regex", "title": "Complementary re patterns such as [\\s\\S] or [\\w\\W] are much slower than . with DOTALL ", "body": "# Bug report\n\n### Bug description:\n\n```python\nimport re\nfrom time import perf_counter as time\n\np1 = re.compile(r\"[\\s\\S]*\")\np2 = re.compile(\".*\", re.DOTALL)\n\ns = \"a\"*10000\nfor p in (p1,p2):\n t0 = time()\n for i in range(10000): _=p.match(s)\n print(time()-t0)\n```\nRuntimes are 0.44 s vs 0.0016 s on my system. Instead of simplification, the [\\s\\S] is stepped through one after another. \\s does not match so then \\S is checked (the order [\\S\\s] is twice as fast for the string here). This is not solely an issue for larger matches. A 40 char string is processed half as fast when using [\\s\\S]. Even 10 chars take about 25% longer to process. I'm not completely sure whether this qualifies as a bug or an issue with documentation. Other languages don't have the DOTALL option and always rely on the first option. 
Plenty of posts on SO and elsewhere will thus advocate using [\\s\\S] as an all-matching regex pattern. Unsuspecting Python programmers such as @barneygale may expect [\\s\\S] to be identical to using a dot with DOTALL as seen below.\n\n@serhiy-storchaka\n\nhttps://github.com/python/cpython/blob/9bb202a1a90ef0edce20c495c9426d9766df11bb/Lib/pathlib.py#L126-L133\n\n### CPython versions tested on:\n\n3.11, 3.13\n\n### Operating systems tested on:\n\nLinux, Windows\n\n<!-- gh-linked-prs -->\n### Linked PRs\n* gh-111303\n* gh-120742\n* gh-120745\n* gh-120813\n* gh-120814\n<!-- /gh-linked-prs -->\n", "pr_html_url": "https://github.com/python/cpython/pull/111303", "file_loc": {"base_commit": "0d1cbff833f761f80383f4ce5fe31f686f3f04eb", "files": [{"path": "Lib/pathlib.py", "status": "modified", "Loc": {"(None, '_compile_pattern_lines', 105)": {"mod": [127, 130, 133]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["Lib/pathlib.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "python", "repo_name": "cpython", "base_commit": "4219074127221fdbf545f908361da4ad98437b45", "iss_has_pr": 1, "iss_html_url": "https://github.com/python/cpython/issues/103971", "iss_label": "type-bug\ninterpreter-core\n3.11\neasy\ntriaged", "title": "Incorrect locations for code following `case` blocks", "body": "# Bug report\r\n\r\nIn the following example, the debugger hits a breakpoint that is set in the `aVariable = ...` line, which is in an if-statement whose condition is `False` and which should therefore not be executed. When I run the example with coverage (under PyCharm 2023.1), that line turns green. The print statement is _not_ executed, which matches the expectation.\r\n\r\nThe assignment does not actually happen. It somehow just _hits_ the line without really executing it.\r\n\r\nMinimal reproducible example:\r\n\r\n```\r\nmatch 1:\r\n case 1:\r\n if False:\r\n print('this should not be executed')\r\n aVariable = 'somehow, we can hit a breakpoint here'\r\n```\r\n\r\nThe same happens, if the last statement in the unreachable code is a _pass_. If I replace it with e.g. a `print()` statement, then everything behaves as expected.\r\n\r\nIf we extend the example a little bit, that behavior is reproducible for an unreachable _else_ block, too:\r\n\r\n```\r\nmatch 1:\r\n case 1:\r\n if True:\r\n pass\r\n else:\r\n anotherVariable = 'somehow, we can hit a breakpoint here, too'\r\n```\r\n\r\n# Your environment\r\n\r\n```\r\npython --version\r\nPython 3.11.3\r\n```\r\n\r\n```\r\nlsb_release -a\r\nNo LSB modules are available.\r\nDistributor ID:\tUbuntu\r\nDescription:\tUbuntu 22.04.2 LTS\r\nRelease:\t22.04\r\nCodename:\tjammy\r\n```\r\n\r\nI initially encountered that behavior in a 3.10 version. 
Because I thought I could fix it with an upgrade to 3.11, I don't know the exact minor version of 3.10.\r\n\r\nI double-checked this with the first online Python debugger that I could find and it behaves the same way.\r\n\n\n<!-- gh-linked-prs -->\n### Linked PRs\n* gh-103980\n* gh-103984\n<!-- /gh-linked-prs -->\n", "pr_html_url": "https://github.com/python/cpython/pull/103980", "file_loc": {"base_commit": "4219074127221fdbf545f908361da4ad98437b45", "files": [{"path": "Lib/test/test_patma.py", "status": "modified", "Loc": {"('TestTracing', None, 3073)": {"add": [3153]}}}, {"path": "Python/compile.c", "status": "modified", "Loc": {"(None, 'compiler_match_inner', 7011)": {"add": [7059, 7083]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["Python/compile.c"], "doc": [], "test": ["Lib/test/test_patma.py"], "config": [], "asset": []}}, {"organization": "python", "repo_name": "cpython", "base_commit": "63289b9dfbc7d87e81f1517422ee91b6b6d19531", "iss_has_pr": 1, "iss_html_url": "https://github.com/python/cpython/issues/117089", "iss_label": "", "title": "Sync with importlib_metadata for Python 3.13", "body": "This issue tracks incorporating updates from importlib_metadata into CPython for Python 3.13, including:\n\n<!-- gh-linked-prs -->\n### Linked PRs\n* gh-117092\n* gh-117094\n<!-- /gh-linked-prs -->\n", "pr_html_url": "https://github.com/python/cpython/pull/117092", "file_loc": {"base_commit": "63289b9dfbc7d87e81f1517422ee91b6b6d19531", "files": [{"path": ".github/CODEOWNERS", "status": "modified", "Loc": {"(None, None, 122)": {"mod": [122]}}}, {"path": "Lib/test/test_importlib/fixtures.py", "status": "renamed", "Loc": {"(None, None, None)": {"add": [11]}, "('OnSysPath', 'setUp', 85)": {"add": [87]}, "('ZipFixtures', None, 350)": {"mod": [351]}}}, {"path": "Makefile.pre.in", "status": "modified", "Loc": {"(None, None, 2357)": {"add": [2357]}, "(None, None, 2354)": {"mod": [2354]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["Lib/test/test_importlib/fixtures.py"], "doc": [], "test": [], "config": [".github/CODEOWNERS", "Makefile.pre.in"], "asset": []}}, {"organization": "python", "repo_name": "cpython", "base_commit": "4c3b283e83459cf7224bbf353300099eba7a2c1c", "iss_has_pr": 1, "iss_html_url": "https://github.com/python/cpython/issues/87192", "iss_label": "type-bug\ndocs\n3.10\n3.9", "title": "Missing words renders meaning unclear in fcntl.html", "body": "BPO | [43026](https://bugs.python.org/issue43026)\n--- | :---\nNosy | @EzraBC\nFiles | <li>[meaning_unclear.png](https://bugs.python.org/file49766/meaning_unclear.png \"Uploaded as image/png at 2021-01-25.23:46:46 by @EzraBC\")</li>\n\n<sup>*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup>\n\n<details><summary>Show more details</summary><p>\n\nGitHub fields:\n```python\nassignee = None\nclosed_at = None\ncreated_at = <Date
2021-01-26.01:22:47.436>\nactor = 'EzraBC'\nassignee = 'docs@python'\nclosed = False\nclosed_date = None\ncloser = None\ncomponents = ['Documentation']\ncreation = <Date 2021-01-25.23:46:46.269>\ncreator = 'EzraBC'\ndependencies = []\nfiles = ['49766']\nhgrepos = []\nissue_num = 43026\nkeywords = []\nmessage_count = 1.0\nmessages = ['385680']\nnosy_count = 2.0\nnosy_names = ['docs@python', 'EzraBC']\npr_nums = []\npriority = 'normal'\nresolution = None\nstage = None\nstatus = 'open'\nsuperseder = None\ntype = 'behavior'\nurl = 'https://bugs.python.org/issue43026'\nversions = ['Python 3.9', 'Python 3.10']\n```\n\n</p></details>\n", "pr_html_url": "https://github.com/python/cpython/pull/91658", "file_loc": {"base_commit": "4c3b283e83459cf7224bbf353300099eba7a2c1c", "files": [{"path": "Doc/library/fcntl.rst", "status": "modified", "Loc": {"(None, None, None)": {"mod": [40]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": ["Doc/library/fcntl.rst"], "test": [], "config": [], "asset": []}}, {"organization": "python", "repo_name": "cpython", "base_commit": "733e15f1707ddec502a69c8c324c77e02ca11fa9", "iss_has_pr": 1, "iss_html_url": "https://github.com/python/cpython/issues/93735", "iss_label": "type-feature\ndocs\n3.11\n3.10\n3.12", "title": "Run documentation CI from pre-built Python", "body": "https://github.com/python/core-workflow/issues/459\r\n\r\nThere seemed to be general agreement.\r\n\r\nA", "pr_html_url": "https://github.com/python/cpython/pull/93736", "file_loc": {"base_commit": "733e15f1707ddec502a69c8c324c77e02ca11fa9", "files": [{"path": ".github/workflows/doc.yml", "status": "modified", "Loc": {"(None, None, None)": {"add": [25, 34], "mod": [43, 44, 45, 46, 49, 50, 51, 52, 53, 54, 55, 56]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": [".github/workflows/doc.yml"], "test": [], "config": [], "asset": []}}, {"organization": "THUDM", "repo_name": "ChatGLM-6B", "base_commit": "2873a6f452340565ff3cd130d5f7009a35c12154", "iss_has_pr": 1, "iss_html_url": "https://github.com/THUDM/ChatGLM-6B/issues/493", "iss_label": "", "title": "[BUG/Help] Running cli_demo.py reports UnicodeDecodeError: 'utf-8' codec can't decode byte ", "body": "### Is there an existing issue for this?\r\n\r\n- [X] I have searched the existing issues\r\n\r\n### Current Behavior\r\n\r\nTraceback (most recent call last):\r\n File \"cli_demo.py\", line 57, in <module>\r\n main()\r\n File \"cli_demo.py\", line 33, in main\r\n query = input(\"\\n\u7528\u6237\uff1a\")\r\nUnicodeDecodeError: 'utf-8' codec can't decode byte 0xe6 in position 6: invalid continuation byte\r\n\r\n### Expected Behavior\r\n\r\n_No response_\r\n\r\n### Steps To Reproduce\r\n\r\npython cli_demo.py", "pr_html_url": "https://github.com/THUDM/ChatGLM-6B/pull/934", "file_loc": {"base_commit": "2873a6f452340565ff3cd130d5f7009a35c12154", "files": [{"path": "cli_demo.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [4]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["cli_demo.py"], "doc": [], "test": [], 
"config": [], "asset": []}}, {"organization": "huggingface", "repo_name": "transformers", "base_commit": "b9af152efb748b1bff8f6fe0130e62ebb8e11a53", "iss_has_pr": 1, "iss_html_url": "https://github.com/huggingface/transformers/issues/21330", "iss_label": "New model\nGood First Issue", "title": "Add XLM-V", "body": "### Model description\n\n[XLM-V: Overcoming the Vocabulary Bottleneck in Multilingual Masked Language Models](https://arxiv.org/abs/2301.10472)\r\n\r\nLarge multilingual language models typically rely on a single vocabulary shared across 100+ languages. As these models have increased in parameter count and depth, vocabulary size has remained largely unchanged. This vocabulary bottleneck limits the representational capabilities of multilingual models like XLM-R. In this paper, we introduce a new approach for scaling to very large multilingual vocabularies by de-emphasizing token sharing between languages with little lexical overlap and assigning vocabulary capacity to achieve sufficient coverage for each individual language. Tokenizations using our vocabulary are typically more semantically meaningful and shorter compared to XLM-R. Leveraging this improved vocabulary, we train XLM-V, a multilingual language model with a one million token vocabulary. XLM-V outperforms XLM-R on every task we tested on ranging from natural language inference (XNLI), question answering (MLQA, XQuAD, TyDiQA), and named entity recognition (WikiAnn) to low-resource tasks (Americas NLI, MasakhaNER).\r\n\r\nShould work as [XLM-RoBERTa](https://twitter.com/LiangDavis/status/1618738467315531777?s=20&t=nObyGbBEqmBZr9rmTEAeVg)\n\n### Open source status\n\n- [X] The model implementation is available\n- [X] The model weights are available\n\n### Provide useful links for the implementation\n\n_No response_", "pr_html_url": "https://github.com/huggingface/transformers/pull/21498", "file_loc": {"base_commit": "b9af152efb748b1bff8f6fe0130e62ebb8e11a53", "files": [{"path": "README.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [444]}}}, {"path": "README_es.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [437]}}}, {"path": "README_hd.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [409]}}}, {"path": "README_ja.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [471]}}}, {"path": "README_ko.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [386]}}}, {"path": "README_zh-hans.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [410]}}}, {"path": "README_zh-hant.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [422]}}}, {"path": "docs/source/de/index.mdx", "status": "modified", "Loc": {"(None, None, None)": {"add": [184]}}}, {"path": "docs/source/en/_toctree.yml", "status": "modified", "Loc": {"(None, None, None)": {"add": [393]}}}, {"path": "docs/source/en/index.mdx", "status": "modified", "Loc": {"(None, None, None)": {"add": [223]}}}, {"path": "src/transformers/models/auto/configuration_auto.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [535]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["src/transformers/models/auto/configuration_auto.py"], "doc": ["README_hd.md", "docs/source/en/_toctree.yml", "README_zh-hans.md", "README.md", "README_es.md", "docs/source/de/index.mdx", "README_zh-hant.md", "README_ko.md", 
"README_ja.md", "docs/source/en/index.mdx"], "test": [], "config": [], "asset": []}}, {"organization": "huggingface", "repo_name": "transformers", "base_commit": "b8378b658e9846e647d15a8fd85ad1421326b1e5", "iss_has_pr": 1, "iss_html_url": "https://github.com/huggingface/transformers/issues/28007", "iss_label": "", "title": "Can't do word timestamps and beam search at the same time (whisper)", "body": "### System Info\n\nTested on python 3.8.10, transformers 4.36.0.dev0\r\n\r\n\n\n### Who can help?\n\n@ArthurZucker @sanchit-gandhi (suggested by peregilk)\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\n```\r\nfrom transformers import pipeline\r\nimport torch\r\nmodel = \"NbAiLabBeta/nb-whisper-base\"\r\ndevice = \"cuda:0\"\r\n\r\np = pipeline(\"automatic-speech-recognition\",\r\n model,\r\n torch_dtype=torch.float16,\r\n device=device,\r\n return_timestamps=\"word\")\r\nargs = {\"language\": \"norwegian\", \"task\": \"transcribe\", \"num_beams\": 3}\r\noutputs = p(audiofile,\r\n chunk_length_s=28,\r\n batch_size=6,\r\n generate_kwargs=args)\r\n```\r\n\r\nFails with:\r\n\r\n> Traceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/usr/local/lib/python3.8/dist-packages/transformers/pipelines/automatic_speech_recognition.py\", line 357, in __call__\r\n return super().__call__(inputs, **kwargs)\r\n File \"/usr/local/lib/python3.8/dist-packages/transformers/pipelines/base.py\", line 1132, in __call__\r\n return next(\r\n File \"/usr/local/lib/python3.8/dist-packages/transformers/pipelines/pt_utils.py\", line 124, in __next__\r\n item = next(self.iterator)\r\n File \"/usr/local/lib/python3.8/dist-packages/transformers/pipelines/pt_utils.py\", line 266, in __next__\r\n processed = self.infer(next(self.iterator), **self.params)\r\n File \"/usr/local/lib/python3.8/dist-packages/transformers/pipelines/base.py\", line 1046, in forward\r\n model_outputs = self._forward(model_inputs, **forward_params)\r\n File \"/usr/local/lib/python3.8/dist-packages/transformers/pipelines/automatic_speech_recognition.py\", line 552, in _forward\r\n generate_kwargs[\"num_frames\"] = stride[0] // self.feature_extractor.hop_length\r\nTypeError: unsupported operand type(s) for //: 'tuple' and 'int'\r\n\r\nIt works with *either* num_beams:1 OR return_timestamps=True/False, but not combined.\n\n### Expected behavior\n\nIt should return processed data. 
:)", "pr_html_url": "https://github.com/huggingface/transformers/pull/28114", "file_loc": {"base_commit": "b8378b658e9846e647d15a8fd85ad1421326b1e5", "files": [{"path": "src/transformers/models/whisper/modeling_whisper.py", "status": "modified", "Loc": {"('WhisperForConditionalGeneration', 'generate', 1859)": {"add": [2226]}, "('WhisperForConditionalGeneration', '_extract_token_timestamps', 2539)": {"add": [2557], "mod": [2559, 2561, 2562, 2563, 2564, 2566, 2567, 2569, 2572, 2573]}}}, {"path": "src/transformers/pipelines/automatic_speech_recognition.py", "status": "modified", "Loc": {"('AutomaticSpeechRecognitionPipeline', '_forward', 533)": {"mod": [562]}}}, {"path": "tests/models/whisper/test_modeling_whisper.py", "status": "modified", "Loc": {"('WhisperModelIntegrationTests', None, 1447)": {"add": [1852]}}}, {"path": "tests/pipelines/test_pipelines_automatic_speech_recognition.py", "status": "modified", "Loc": {"('AutomaticSpeechRecognitionPipelineTests', None, 60)": {"add": [676]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["src/transformers/models/whisper/modeling_whisper.py", "src/transformers/pipelines/automatic_speech_recognition.py"], "doc": [], "test": ["tests/models/whisper/test_modeling_whisper.py", "tests/pipelines/test_pipelines_automatic_speech_recognition.py"], "config": [], "asset": []}}, {"organization": "huggingface", "repo_name": "transformers", "base_commit": "b231a413f5d58592bb4d98304c3d3b668c5d4a42", "iss_has_pr": 1, "iss_html_url": "https://github.com/huggingface/transformers/issues/4657", "iss_label": "PyTorch", "title": "--fp causes an issue when running example scripts in distributed mode", "body": "# \ud83d\udc1b Bug\r\n\r\n## Information\r\n\r\nModel I am using (Bert, XLNet ...):\r\n`roberta-large`\r\nLanguage I am using the model on (English, Chinese ...):\r\n`English`\r\n\r\nThe problem arises when using:\r\n* the official example scripts\r\n\r\nThe tasks I am working on is:\r\n* Finetuning a LM with `run_language_modeling.py` and the SST-2 task with `run_glue.py`\r\n* my own dataset\r\n\r\n## To reproduce\r\nIf I run either of the following commands, I get the error included below. However, if I remove `--fp`, everything works normally. Also, if I add `--fp`, but run it non-distributed, everything works normally. So, it appears there is an issue with my running `-fp` in a distributed fashion. I haven't had an issue with this before; so, I'm not sure what the problem is. Any ideas? 
Thanks in advance.\r\n\r\nI installed apex in two different ways, but still get the same results.\r\n```\r\n#Install package required for fp16 computations\r\nRUN git clone https://github.com/NVIDIA/apex.git \\\r\n && cd apex \\\r\n && python3 setup.py install --cuda_ext --cpp_ext\r\n```\r\n```\r\n#Install package required for fp16 computations\r\nRUN git clone https://github.com/NVIDIA/apex.git \\\r\n && cd apex \\\r\n && pip3 install -v --no-cache-dir --global-option=\"--cpp_ext\" --global-option=\"--cuda_ext\" ./\r\n```\r\n```\r\npython3 -m torch.distributed.launch --nproc_per_node 2 run_language_modeling.py --output_dir=/ptcc/shared/lm_roberta_20200528_164228 --model_type=roberta --do_train --train_data_file=/ptcc/data/train.txt --do_eval --eval_data_file=/ptcc/data/test.txt --evaluate_during_training --per_gpu_train_batch_size=2 --per_gpu_eval_batch_size=2 --learning_rate=5e-06 --model_name_or_path=roberta-large --mlm --max_steps=120000 --warmup_steps=10000 --save_steps=12000 --seed=42 --fp16 --logging_dir=/ptcc/shared/roberta_20200528_164228_tf_logs'\r\n```\r\n```\r\npython3 -m torch.distributed.launch --nproc_per_node 2 run_glue.py --model_type roberta --task_name SST-2 --do_train --do_eval --evaluate_during_training --data_dir /ptcc/data/ --per_gpu_train_batch_size 2 --per_gpu_eval_batch_size 2 --learning_rate 1e-06 --output_dir clf_roberta_20200528_162937 --model_name_or_path /ptcc/shared/lm_roberta_20200528_113420 --num_train_epochs 2.0 --save_steps 1000 --seed 42 --fp16 --logging_dir=/ptcc/shared/roberta_20200528_162937_tf_logs\r\n```\r\n\r\n```\r\nptcc_1 | 05/28/2020 20:30:38 - INFO - transformers.trainer - Starting fine-tuning.\r\nEpoch: 0%| | 0/2 [00:00<?, ?it/s] Traceback (most recent call last):\r\nptcc_1 | File \"/ptcc/run_glue.py\", line 228, in <module>\r\nptcc_1 | main()\r\nptcc_1 | File \"/ptcc/run_glue.py\", line 160, in main\r\nptcc_1 | model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None\r\nptcc_1 | File \"/usr/local/lib/python3.6/dist-packages/transformers/trainer.py\", line 470, in train\r\nptcc_1 | tr_loss += self._training_step(model, inputs, optimizer)\r\nptcc_1 | File \"/usr/local/lib/python3.6/dist-packages/transformers/trainer.py\", line 577, in _training_step\r\nptcc_1 | scaled_loss.backward()\r\nptcc_1 | File \"/usr/lib/python3.6/contextlib.py\", line 88, in __exit__\r\nptcc_1 | next(self.gen)\r\nptcc_1 | File \"/usr/local/lib/python3.6/dist-packages/apex-0.1-py3.6-linux-x86_64.egg/apex/amp/handle.py\", line 127, in scale_loss\r\nptcc_1 | should_skip = False if delay_overflow_check else loss_scaler.update_scale()\r\nptcc_1 | File \"/usr/local/lib/python3.6/dist-packages/apex-0.1-py3.6-linux-x86_64.egg/apex/amp/scaler.py\", line 200, in update_scale\r\nptcc_1 | self._has_overflow = self._overflow_buf.item()\r\nptcc_1 | RuntimeError: CUDA error: an illegal memory access was encountered\r\nptcc_1 | /usr/local/lib/python3.6/dist-packages/torch/optim/lr_scheduler.py:114: UserWarning: Seems like `optimizer.step()` has been overridden after learning rate scheduler initialization. Please, make sure to call `optimizer.step()` before `lr_scheduler.step()`. 
See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate\r\nptcc_1 | \"https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate\", UserWarning)\r\nptcc_1 | terminate called after throwing an instance of 'c10::Error'\r\nptcc_1 | what(): CUDA error: an illegal memory access was encountered (insert_events at /pytorch/c10/cuda/CUDACachingAllocator.cpp:771)\r\nptcc_1 | frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x46 (0x7f69777f6536 in /usr/local/lib/python3.6/dist-packages/torch/lib/libc10.so)\r\nptcc_1 | frame #1: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0x7ae (0x7f6977a39fbe in /usr/local/lib/python3.6/dist-packages/torch/lib/libc10_cuda.so)\r\nptcc_1 | frame #2: c10::TensorImpl::release_resources() + 0x4d (0x7f69777e6abd in /usr/local/lib/python3.6/dist-packages/torch/lib/libc10.so)\r\nptcc_1 | frame #3: std::vector<c10d::Reducer::Bucket, std::allocator<c10d::Reducer::Bucket> >::~vector() + 0x1d9 (0x7f69c3926ef9 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)\r\nptcc_1 | frame #4: c10d::Reducer::~Reducer() + 0x23a (0x7f69c391c84a in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)\r\nptcc_1 | frame #5: std::_Sp_counted_ptr<c10d::Reducer*, (__gnu_cxx::_Lock_policy)2>::_M_dispose() + 0x12 (0x7f69c38fb7c2 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)\r\nptcc_1 | frame #6: std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release() + 0x46 (0x7f69c32be466 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)\r\nptcc_1 | frame #7: <unknown function> + 0x87146b (0x7f69c38fc46b in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)\r\nptcc_1 | frame #8: <unknown function> + 0x240500 (0x7f69c32cb500 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)\r\nptcc_1 | frame #9: <unknown function> + 0x24174e (0x7f69c32cc74e in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)\r\nptcc_1 | frame #10: /usr/bin/python3() [0x572a27]\r\nptcc_1 | frame #11: /usr/bin/python3() [0x54eef2]\r\nptcc_1 | frame #12: /usr/bin/python3() [0x588948]\r\nptcc_1 | frame #13: /usr/bin/python3() [0x5ad438]\r\nptcc_1 | frame #14: /usr/bin/python3() [0x5ad44e]\r\nptcc_1 | frame #15: /usr/bin/python3() [0x5ad44e]\r\nptcc_1 | frame #16: /usr/bin/python3() [0x56b276]\r\nptcc_1 | frame #17: PyDict_SetItemString + 0x153 (0x5709f3 in /usr/bin/python3)\r\nptcc_1 | frame #18: PyImport_Cleanup + 0x76 (0x4f2fc6 in /usr/bin/python3)\r\nptcc_1 | frame #19: Py_FinalizeEx + 0x5e (0x637e2e in /usr/bin/python3)\r\nptcc_1 | frame #20: Py_Main + 0x395 (0x638e95 in /usr/bin/python3)\r\nptcc_1 | frame #21: main + 0xe0 (0x4b0d00 in /usr/bin/python3)\r\nptcc_1 | frame #22: __libc_start_main + 0xe7 (0x7f69e4727b97 in /lib/x86_64-linux-gnu/libc.so.6)\r\nptcc_1 | frame #23: _start + 0x2a (0x5b250a in /usr/bin/python3)\r\n```\r\n\r\n## Environment info\r\n- `transformers` version: 2.10.0\r\n- Platform: Linux-5.3.0-26-generic-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.6.9\r\n- PyTorch version (GPU?): 1.5.0 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Using GPU in script?: Y, 2 Tesla V100-SXM2\r\n- Using distributed or parallel set-up in script?: Y, 2 Tesla V100-SXM2\r\n", "pr_html_url": "https://github.com/huggingface/transformers/pull/4728", "file_loc": {"base_commit": "b231a413f5d58592bb4d98304c3d3b668c5d4a42", "files": [{"path": "src/transformers/training_args.py", "status": "modified", "Loc": 
{"('TrainingArguments', '_setup_devices', 158)": {"add": [176], "mod": [169]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["src/transformers/training_args.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "huggingface", "repo_name": "transformers", "base_commit": "85a1269e19af022e04bc2aad82572cd5a9e8cdd9", "iss_has_pr": 1, "iss_html_url": "https://github.com/huggingface/transformers/issues/31778", "iss_label": "Audio", "title": "Bug in whisper word-level timestamps (`tokenizer._decode_asr`)", "body": "### System Info\n\n- `transformers` version: 4.42.3\r\n- Platform: Linux-6.1.85+-x86_64-with-glibc2.35\r\n- Python version: 3.10.12\r\n- Huggingface_hub version: 0.23.4\r\n- Safetensors version: 0.4.3\r\n- Accelerate version: not installed\r\n- Accelerate config: not found\r\n- PyTorch version (GPU?): 2.3.0+cu121 (False)\r\n- Tensorflow version (GPU?): 2.15.0 (False)\r\n- Flax version (CPU?/GPU?/TPU?): 0.8.4 (cpu)\r\n- Jax version: 0.4.26\r\n- JaxLib version: 0.4.26\r\n- Using distributed or parallel set-up in script?: no\n\n### Who can help?\n\n@sanchit-gandhi\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nMinimal reproduction:\r\n\r\n```py\r\nimport torch\r\n\r\nmodel_outputs = [\r\n {\r\n 'stride': [30, 0, 5],\r\n 'tokens': torch.tensor([[\r\n 50257, 50362, 8410, 7283, 0, 2329,\r\n 8410, 7283, 0, 2094, 470, 1309,\r\n 534, 10625, 307, 10625, 13, 34668,\r\n 11, 345, 531, 9439, 11, 523,\r\n 655, 8410, 7283, 0, 39134, 16592,\r\n 10560, 3955, 50, 0, 7102, 5446,\r\n 46, 0, 25848, 8410, 7283, 0,\r\n 2773, 661, 4320, 1943, 981, 345,\r\n 821, 8066, 7765, 510, 290, 670,\r\n 1327, 379, 340, 13, 10528, 318,\r\n 5340, 13, 50256\r\n ]]),\r\n 'token_timestamps': torch.tensor([[\r\n 0, 0, 0, 3.78, 4.22, 5.26, 6.04,\r\n 6.54, 7, 7.94, 8.58, 8.58, 8.88, 9.16,\r\n 9.54, 9.94, 10.6, 11.38, 11.88, 12.38, 12.44,\r\n 12.62, 13, 13.36, 13.64, 14.24, 14.74, 15.12,\r\n 15.4, 15.74, 16.1, 16.54, 16.54, 16.78, 17.08,\r\n 17.2, 17.36, 17.56, 18.08, 18.58, 19.38, 19.88,\r\n 22.54, 22.9, 23.24, 23.5, 24.14, 24.56, 24.7,\r\n 24.94, 24.94, 25.18, 25.54, 25.72, 26.04, 26.34,\r\n 26.46, 26.84, 27.04, 27.14, 27.54, 28.06, 29.92\r\n ]])\r\n },\r\n {\r\n 'stride': [30, 5, 5],\r\n 'tokens': torch.tensor([[\r\n 50257, 50362, 2773, 661, 4320, 1943, 981,\r\n 345, 821, 8066, 7765, 510, 290, 670,\r\n 1327, 379, 340, 13, 10528, 318, 5340,\r\n 13, 921, 815, 651, 284, 262, 966,\r\n 810, 2687, 2073, 561, 11238, 290, 345,\r\n 821, 407, 8066, 2245, 612, 13, 1400,\r\n 11, 644, 389, 345, 4953, 329, 30,\r\n 2141, 340, 0, 2329, 466, 340, 0,\r\n 3363, 11, 345, 460, 0, 2329, 466,\r\n 340, 0, 50256\r\n ]]),\r\n 'token_timestamps': torch.tensor([[\r\n 0, 0, 0, 2.92, 3.24, 3.5, 4.14,\r\n 4.56, 4.7, 4.74, 4.92, 5.18, 5.54, 5.74,\r\n 6.04, 6.34, 6.46, 6.84, 7.04, 7.18, 7.56,\r\n 8.12, 9.68, 10.7, 10.88, 11.1, 11.24, 11.48,\r\n 11.82, 12.46, 12.82, 13.2, 13.46, 13.72, 14.08,\r\n 14.28, 14.34, 14.56, 14.82, 15.16, 15.72, 16.42,\r\n 16.82, 16.86, 17, 17.1, 17.2, 17.56, 18.06,\r\n 19.28, 19.6, 20.28, 21.96, 22.64, 24.28, 24.76,\r\n 25.18, 25.56, 25.56, 25.84, 26.36, 27.12, 27.54,\r\n 27.82, 28.16, 29.48\r\n ]])\r\n },\r\n {\r\n 'stride': 
[23.7728125, 5, 0],\r\n 'tokens': torch.tensor([[\r\n 50257, 50362, 2329, 466,\r\n 340, 0, 3363, 345,\r\n 460, 0, 2329, 466,\r\n 340, 0, 1002, 534,\r\n 15867, 318, 3599, 625,\r\n 11, 2245, 3501, 510,\r\n 13, 50256\r\n ]]),\r\n 'token_timestamps': torch.tensor([[\r\n 0, 0, 0, 2.44, 4.3,\r\n 5.04, 5.06, 5.56, 5.8, 6.32,\r\n 7.12, 7.56, 7.8, 8.72, 10.04,\r\n 12.96, 13.3, 13.44, 13.72, 13.98,\r\n 14.86, 15.5, 16, 16.88, 17.76,\r\n 20.9\r\n ]])\r\n }\r\n]\r\n\r\n\r\nfrom transformers import AutoTokenizer\r\ntokenizer = AutoTokenizer.from_pretrained('onnx-community/whisper-tiny.en_timestamped')\r\ntokenizer._decode_asr(model_outputs, return_timestamps='word', return_language=False, time_precision=0.02)\r\n```\r\n\r\nproduces the following **incorrect** transcript:\r\n\r\n```py\r\n(\" DO IT! Just DO IT! Don't let your dreams be dreams. Yesterday, you said tomorrow, so just DO IT! MAKE YOUR DRIMS! CONTRO! JUST DO IT! Some people dream success while you're gonna wake up and work hard at it. Nothing is impossible. You should get to the point where anyone else would quit and you're not gonna stop there. No, what are you waiting for? Do it! Just do it! Yes, you can! Just do it! Yes you can! Just do it! If your tire is starting over, stop giving up.\",\r\n {'chunks': [{'text': ' DO', 'timestamp': (0.0, 3.78)},\r\n {'text': ' IT!', 'timestamp': (3.78, 5.26)},\r\n {'text': ' Just', 'timestamp': (5.26, 6.04)},\r\n {'text': ' DO', 'timestamp': (6.04, 6.54)},\r\n {'text': ' IT!', 'timestamp': (6.54, 7.94)},\r\n {'text': \" Don't\", 'timestamp': (7.94, 8.58)},\r\n {'text': ' let', 'timestamp': (8.58, 8.88)},\r\n {'text': ' your', 'timestamp': (8.88, 9.16)},\r\n {'text': ' dreams', 'timestamp': (9.16, 9.54)},\r\n {'text': ' be', 'timestamp': (9.54, 9.94)},\r\n {'text': ' dreams.', 'timestamp': (9.94, 11.38)},\r\n {'text': ' Yesterday,', 'timestamp': (11.38, 12.38)},\r\n {'text': ' you', 'timestamp': (12.38, 12.44)},\r\n {'text': ' said', 'timestamp': (12.44, 12.62)},\r\n {'text': ' tomorrow,', 'timestamp': (12.62, 13.36)},\r\n {'text': ' so', 'timestamp': (13.36, 13.64)},\r\n {'text': ' just', 'timestamp': (13.64, 14.24)},\r\n {'text': ' DO', 'timestamp': (14.24, 14.74)},\r\n {'text': ' IT!', 'timestamp': (14.74, 15.4)},\r\n {'text': ' MAKE', 'timestamp': (15.4, 15.74)},\r\n {'text': ' YOUR', 'timestamp': (15.74, 16.1)},\r\n {'text': ' DRIMS!', 'timestamp': (16.1, 17.08)},\r\n {'text': ' CONTRO!', 'timestamp': (17.08, 18.08)},\r\n {'text': ' JUST', 'timestamp': (18.08, 18.58)},\r\n {'text': ' DO', 'timestamp': (18.58, 19.38)},\r\n {'text': ' IT!', 'timestamp': (19.38, 22.54)},\r\n {'text': ' Some', 'timestamp': (22.54, 22.9)},\r\n {'text': ' people', 'timestamp': (22.9, 23.24)},\r\n {'text': ' dream', 'timestamp': (23.24, 23.5)},\r\n {'text': ' success', 'timestamp': (23.5, 24.14)},\r\n {'text': ' while', 'timestamp': (24.14, 24.56)},\r\n {'text': \" you're\", 'timestamp': (24.56, 24.94)},\r\n {'text': ' gonna', 'timestamp': (24.94, 24.94)},\r\n {'text': ' wake', 'timestamp': (24.94, 25.18)},\r\n {'text': ' up', 'timestamp': (25.18, 25.54)},\r\n {'text': ' and', 'timestamp': (25.54, 25.74)},\r\n {'text': ' work', 'timestamp': (25.74, 26.04)},\r\n {'text': ' hard', 'timestamp': (26.04, 26.34)},\r\n {'text': ' at', 'timestamp': (26.34, 26.46)},\r\n {'text': ' it.', 'timestamp': (26.46, 27.04)},\r\n {'text': ' Nothing', 'timestamp': (27.04, 27.18)},\r\n {'text': ' is', 'timestamp': (27.18, 27.56)},\r\n {'text': ' impossible.', 'timestamp': (27.56, 29.68)},\r\n {'text': ' You', 'timestamp': (29.68, 30.7)},\r\n 
{'text': ' should', 'timestamp': (30.7, 30.88)},\r\n {'text': ' get', 'timestamp': (30.88, 31.1)},\r\n {'text': ' to', 'timestamp': (31.1, 31.24)},\r\n {'text': ' the', 'timestamp': (31.24, 31.48)},\r\n {'text': ' point', 'timestamp': (31.48, 31.82)},\r\n {'text': ' where', 'timestamp': (31.82, 32.46)},\r\n {'text': ' anyone', 'timestamp': (32.46, 32.82)},\r\n {'text': ' else', 'timestamp': (32.82, 33.2)},\r\n {'text': ' would', 'timestamp': (33.2, 33.46)},\r\n {'text': ' quit', 'timestamp': (33.46, 33.72)},\r\n {'text': ' and', 'timestamp': (33.72, 34.08)},\r\n {'text': \" you're\", 'timestamp': (34.08, 34.34)},\r\n {'text': ' not', 'timestamp': (34.34, 34.56)},\r\n {'text': ' gonna', 'timestamp': (34.56, 34.82)},\r\n {'text': ' stop', 'timestamp': (34.82, 35.16)},\r\n {'text': ' there.', 'timestamp': (35.16, 36.42)},\r\n {'text': ' No,', 'timestamp': (36.42, 36.86)},\r\n {'text': ' what', 'timestamp': (36.86, 37.0)},\r\n {'text': ' are', 'timestamp': (37.0, 37.1)},\r\n {'text': ' you', 'timestamp': (37.1, 37.2)},\r\n {'text': ' waiting', 'timestamp': (37.2, 37.56)},\r\n {'text': ' for?', 'timestamp': (37.56, 39.28)},\r\n {'text': ' Do', 'timestamp': (39.28, 39.6)},\r\n {'text': ' it!', 'timestamp': (39.6, 41.96)},\r\n {'text': ' Just', 'timestamp': (41.96, 42.64)},\r\n {'text': ' do', 'timestamp': (42.64, 44.28)},\r\n {'text': ' it!', 'timestamp': (44.28, 45.18)},\r\n {'text': ' Yes,', 'timestamp': (45.18, 45.56)},\r\n {'text': ' you', 'timestamp': (45.56, 45.84)},\r\n {'text': ' can!', 'timestamp': (45.84, 47.12)},\r\n {'text': ' Just', 'timestamp': (47.12, 47.54)},\r\n {'text': ' do', 'timestamp': (47.54, 47.82)},\r\n {'text': ' it!', 'timestamp': (44.3, 45.06)},\r\n {'text': ' Yes', 'timestamp': (45.06, 45.56)},\r\n {'text': ' you', 'timestamp': (45.56, 45.8)},\r\n {'text': ' can!', 'timestamp': (45.8, 47.12)},\r\n {'text': ' Just', 'timestamp': (47.12, 47.56)},\r\n {'text': ' do', 'timestamp': (47.56, 47.8)},\r\n {'text': ' it!', 'timestamp': (47.8, 50.04)},\r\n {'text': ' If', 'timestamp': (50.04, 52.96)},\r\n {'text': ' your', 'timestamp': (52.96, 53.3)},\r\n {'text': ' tire', 'timestamp': (53.3, 53.44)},\r\n {'text': ' is', 'timestamp': (53.44, 53.72)},\r\n {'text': ' starting', 'timestamp': (53.72, 53.98)},\r\n {'text': ' over,', 'timestamp': (53.98, 55.5)},\r\n {'text': ' stop', 'timestamp': (55.5, 56.0)},\r\n {'text': ' giving', 'timestamp': (56.0, 56.88)},\r\n {'text': ' up.', 'timestamp': (56.88, 60.9)}]})\r\n```\r\n\r\n(Notice at ~46 seconds, it goes back in time):\r\n```py\r\n {'text': ' Yes,', 'timestamp': (45.18, 45.56)},\r\n {'text': ' you', 'timestamp': (45.56, 45.84)},\r\n {'text': ' can!', 'timestamp': (45.84, 47.12)},\r\n {'text': ' Just', 'timestamp': (47.12, 47.54)},\r\n {'text': ' do', 'timestamp': (47.54, 47.82)},\r\n {'text': ' it!', 'timestamp': (44.3, 45.06)},\r\n {'text': ' Yes', 'timestamp': (45.06, 45.56)},\r\n {'text': ' you', 'timestamp': (45.56, 45.8)},\r\n {'text': ' can!', 'timestamp': (45.8, 47.12)},\r\n {'text': ' Just', 'timestamp': (47.12, 47.56)},\r\n {'text': ' do', 'timestamp': (47.56, 47.8)},\r\n {'text': ' it!', 'timestamp': (47.8, 50.04)},\r\n```\r\n\r\nFor reference, [this](https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/whisper-timestamps-demo.mp4?download=true) is the media I am transcribing.\n\n### Expected behavior\n\n1. The transcript times should be increasing.\r\n2. 
If you watch the [video](https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/whisper-timestamps-demo.mp4?download=true), it's clear that the repeated phrasing messes something up, duplicating this in the merged output.\r\n3. Result should be something like:\r\n```diff\r\n {'text': ' Do', 'timestamp': (39.28, 39.6)},\r\n {'text': ' it!', 'timestamp': (39.6, 41.96)},\r\n {'text': ' Just', 'timestamp': (41.96, 42.64)},\r\n {'text': ' do', 'timestamp': (42.64, 44.28)},\r\n {'text': ' it!', 'timestamp': (44.28, 45.18)},\r\n- {'text': ' Yes,', 'timestamp': (45.18, 45.56)},\r\n- {'text': ' you', 'timestamp': (45.56, 45.84)},\r\n- {'text': ' can!', 'timestamp': (45.84, 47.12)},\r\n- {'text': ' Just', 'timestamp': (47.12, 47.54)},\r\n- {'text': ' do', 'timestamp': (47.54, 47.82)},\r\n- {'text': ' it!', 'timestamp': (44.3, 45.06)},\r\n- {'text': ' Yes', 'timestamp': (45.06, 45.56)},\r\n+ {'text': ' Yes', 'timestamp': (45.18, 45.56)},\r\n {'text': ' you', 'timestamp': (45.56, 45.8)},\r\n {'text': ' can!', 'timestamp': (45.8, 47.12)},\r\n {'text': ' Just', 'timestamp': (47.12, 47.56)},\r\n {'text': ' do', 'timestamp': (47.56, 47.8)},\r\n {'text': ' it!', 'timestamp': (47.8, 50.04)},\r\n```", "pr_html_url": "https://github.com/huggingface/transformers/pull/32197", "file_loc": {"base_commit": "85a1269e19af022e04bc2aad82572cd5a9e8cdd9", "files": [{"path": "src/transformers/models/whisper/tokenization_whisper.py", "status": "modified", "Loc": {"(None, '_find_longest_common_sequence', 1107)": {"mod": [1177]}}}, {"path": "tests/models/whisper/test_tokenization_whisper.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [340]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["src/transformers/models/whisper/tokenization_whisper.py"], "doc": [], "test": ["tests/models/whisper/test_tokenization_whisper.py"], "config": [], "asset": []}}, {"organization": "huggingface", "repo_name": "transformers", "base_commit": "1681a6d452b60ff3652a96f03541dfa491124192", "iss_has_pr": 1, "iss_html_url": "https://github.com/huggingface/transformers/issues/20650", "iss_label": "New model", "title": "[New Model] UDOP: Unifying Vision, Text, and Layout for Universal Document Processing", "body": "### Model description\r\n\r\nWe propose Universal Document Processing (UDOP), a foundation Document AI model which unifies text, image, and layout modalities together with varied task formats, including document understanding and generation. UDOP leverages the spatial correlation between textual content and document image to model image, text, and layout modalities with one uniform representation. With a novel Vision-Text-Layout Transformer, UDOP unifies pretraining and multi-domain downstream tasks into a prompt-based sequence generation scheme. UDOP is pretrained on both large-scale unlabeled document corpora using innovative self-supervised objectives and diverse labeled data. UDOP also learns to generate document images from text and layout modalities via masked image reconstruction. To the best of our knowledge, this is the first time in the field of document AI that one model simultaneously achieves high-quality neural document editing and content customization. 
Our method sets the state-of-the-art on 9 Document AI tasks, e.g., document understanding and QA, across diverse data domains like finance reports, academic papers, and websites. UDOP ranks first on the leaderboard of the Document Understanding Benchmark (DUE).\r\n\r\n### Open source status\r\n\r\n- [x] The model implementation is available\r\n- [x] The model weights are available\r\n\r\n### Provide useful links for the implementation\r\nUDOP Paper: https://arxiv.org/abs/2212.02623\r\nUDOP Repo: https://github.com/microsoft/UDOP\r\n\r\nUDOP Model Weights: https://huggingface.co/ZinengTang/Udop/tree/main", "pr_html_url": "https://github.com/huggingface/transformers/pull/22940", "file_loc": {"base_commit": "1681a6d452b60ff3652a96f03541dfa491124192", "files": [{"path": ".circleci/create_circleci_config.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [477, 487]}}}, {"path": "README.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [513]}}}, {"path": "README_es.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [486]}}}, {"path": "README_fr.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [507]}}}, {"path": "README_hd.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [460]}}}, {"path": "README_ja.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [520]}}}, {"path": "README_ko.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [435]}}}, {"path": "README_zh-hans.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [459]}}}, {"path": "README_zh-hant.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [471]}}}, {"path": "docs/source/en/_toctree.yml", "status": "modified", "Loc": {"(None, None, None)": {"add": [772]}}}, {"path": "docs/source/en/index.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [281]}}}, {"path": "src/transformers/__init__.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [858, 1137, 1216, 3413, 5642, 5917, 5989, 7829]}}}, {"path": "src/transformers/convert_slow_tokenizer.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [1041, 1473]}}}, {"path": "src/transformers/models/__init__.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [222]}}}, {"path": "src/transformers/models/auto/configuration_auto.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [233, 456, 717]}}}, {"path": "src/transformers/models/auto/image_processing_auto.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [110]}}}, {"path": "src/transformers/models/auto/modeling_auto.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [221]}}}, {"path": "src/transformers/models/auto/tokenization_auto.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [420]}}}, {"path": "src/transformers/utils/dummy_pt_objects.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [8343]}}}, {"path": "src/transformers/utils/dummy_sentencepiece_objects.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [221]}}}, {"path": "src/transformers/utils/dummy_tokenizers_objects.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [410]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["src/transformers/utils/dummy_pt_objects.py", "src/transformers/__init__.py", 
"src/transformers/models/auto/tokenization_auto.py", "src/transformers/models/__init__.py", "src/transformers/models/auto/configuration_auto.py", ".circleci/create_circleci_config.py", "src/transformers/utils/dummy_tokenizers_objects.py", "src/transformers/convert_slow_tokenizer.py", "src/transformers/models/auto/modeling_auto.py", "src/transformers/utils/dummy_sentencepiece_objects.py", "src/transformers/models/auto/image_processing_auto.py"], "doc": ["docs/source/en/_toctree.yml", "README_fr.md", "README_hd.md", "README_zh-hans.md", "README_zh-hant.md", "README_ja.md", "README.md", "README_es.md", "README_ko.md", "docs/source/en/index.md"], "test": [], "config": [], "asset": []}}, {"organization": "huggingface", "repo_name": "transformers", "base_commit": "4b423e607455a7aca1edc4beaa713da58e78ef0b", "iss_has_pr": 1, "iss_html_url": "https://github.com/huggingface/transformers/issues/18068", "iss_label": "bug", "title": "StoppingCriteria \"scores\" is always None", "body": "### System Info\n\nI've written a custom StoppingCriteria subclass and I'm trying to utilize the `scores` in my decision logic, but I'm finding that `scores` is always `None`. Is that intentional?\n\n### Who can help?\n\n@patrickvonplaten, @Narsil, @gante\n\n### Information\n\n- [ ] The official example scripts\n- [X] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [X] My own task or dataset (give details below)\n\n### Reproduction\n\n```\r\nclass TopPredictionOutsideTargetSetStoppingCriteria(StoppingCriteria):\r\n def __init__(self, priority_tokens_ids: list):\r\n self.priority_token_ids = priority_tokens_ids\r\n\r\n def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:\r\n print(f\"TopPred SCORES? {scores}, input_ids: {input_ids}\") # <--- \"scores\" is None but \"input_ids\" is correct\r\n top = torch.topk(scores, 1, dim=1).indices[0]\r\n if not top in self.priority_token_ids:\r\n return True\r\n return False\r\n```\n\n### Expected behavior\n\nSince the function indicates `scores` as an input, I'd expect it to be a non-null value.", "pr_html_url": "https://github.com/huggingface/transformers/pull/26863", "file_loc": {"base_commit": "4b423e607455a7aca1edc4beaa713da58e78ef0b", "files": [{"path": "src/transformers/generation/stopping_criteria.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [26]}, "('StoppingCriteria', None, 36)": {"mod": [37]}}}, {"path": "src/transformers/generation/utils.py", "status": "modified", "Loc": {"('GenerationMixin', 'generate', 1351)": {"mod": [1400]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["src/transformers/generation/utils.py", "src/transformers/generation/stopping_criteria.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "huggingface", "repo_name": "transformers", "base_commit": "88ac60f7b5f6d4b62245dc21653ea3d5db7d4935", "iss_has_pr": 1, "iss_html_url": "https://github.com/huggingface/transformers/issues/11357", "iss_label": "", "title": "possible mistake in documentation", "body": "Looking at description of the parameter \"decoder_input_ids\" in \"forward\" method of BartForConditionalGeneration/T5ForConditionalGeneration, I see following:\r\n\r\nBartForConditionalGeneration:\r\ndecoder_input_ids - ... 
For translation and summarization training, decoder_input_ids should be provided. If no decoder_input_ids is provided, the model will create this tensor by shifting the !!INPUT_IDS!! to the right for denoising pretraining following the paper.\r\n\r\nT5ForConditionalGeneration:\r\ndecoder_input_ids - ... To know more on how to prepare decoder_input_ids for pretraining take a look at T5 Training. If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_input_ids takes the value of !!INPUT_IDS!!.\r\n\r\nLooks like there should be LABELS instead of INPUT_IDS.\r\n\r\nThanks,\r\n@patrickvonplaten, @patil-suraj\r\n", "pr_html_url": "https://github.com/huggingface/transformers/pull/11466", "file_loc": {"base_commit": "88ac60f7b5f6d4b62245dc21653ea3d5db7d4935", "files": [{"path": "src/transformers/models/bart/modeling_bart.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [585]}}}, {"path": "src/transformers/models/bart/modeling_tf_bart.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [562]}}}, {"path": "src/transformers/models/blenderbot/modeling_blenderbot.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [549]}}}, {"path": "src/transformers/models/blenderbot/modeling_tf_blenderbot.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [563]}}}, {"path": "src/transformers/models/blenderbot_small/modeling_blenderbot_small.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [550]}}}, {"path": "src/transformers/models/blenderbot_small/modeling_tf_blenderbot_small.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [568]}}}, {"path": "src/transformers/models/fsmt/modeling_fsmt.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [243]}}}, {"path": "src/transformers/models/m2m_100/modeling_m2m_100.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [598]}}}, {"path": "src/transformers/models/marian/modeling_marian.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [562]}}}, {"path": "src/transformers/models/marian/modeling_tf_marian.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [597]}}}, {"path": "src/transformers/models/mbart/modeling_mbart.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [585]}}}, {"path": "src/transformers/models/mbart/modeling_tf_mbart.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [536]}}}, {"path": "src/transformers/models/pegasus/modeling_pegasus.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [561]}}}, {"path": "src/transformers/models/pegasus/modeling_tf_pegasus.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [597]}}}, {"path": "src/transformers/models/prophetnet/modeling_prophetnet.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [98]}}}, {"path": "src/transformers/models/speech_to_text/modeling_speech_to_text.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [619]}}}, {"path": "src/transformers/models/t5/modeling_t5.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1066]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["src/transformers/models/blenderbot/modeling_tf_blenderbot.py", "src/transformers/models/marian/modeling_tf_marian.py", "src/transformers/models/blenderbot/modeling_blenderbot.py", 
"src/transformers/models/blenderbot_small/modeling_blenderbot_small.py", "src/transformers/models/marian/modeling_marian.py", "src/transformers/models/pegasus/modeling_tf_pegasus.py", "src/transformers/models/bart/modeling_bart.py", "src/transformers/models/mbart/modeling_tf_mbart.py", "src/transformers/models/speech_to_text/modeling_speech_to_text.py", "src/transformers/models/m2m_100/modeling_m2m_100.py", "src/transformers/models/blenderbot_small/modeling_tf_blenderbot_small.py", "src/transformers/models/t5/modeling_t5.py", "src/transformers/models/prophetnet/modeling_prophetnet.py", "src/transformers/models/mbart/modeling_mbart.py", "src/transformers/models/bart/modeling_tf_bart.py", "src/transformers/models/pegasus/modeling_pegasus.py", "src/transformers/models/fsmt/modeling_fsmt.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "huggingface", "repo_name": "transformers", "base_commit": "8bbb53e20b7873ba7f63be70d4d798e0c3568bfa", "iss_has_pr": 1, "iss_html_url": "https://github.com/huggingface/transformers/issues/13826", "iss_label": "", "title": "Tokenizer - Raises wrong \"UserWarning: `max_length` is ignored when `padding`=`True`\"", "body": "In the newest version of transformers (4.11.2 & 4.12.0.dev0) I get the following warning:\r\n```\r\nC:\\Anaconda3\\envs\\sbert\\lib\\site-packages\\transformers\\tokenization_utils_base.py:2227: UserWarning: `max_length` is ignored when `padding`=`True`.\r\n warnings.warn(\"`max_length` is ignored when `padding`=`True`.\")\r\n```\r\n\r\n\r\nCode to re-produce:\r\n```python\r\nfrom transformers import AutoTokenizer\r\n\r\ntokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')\r\ntexts = [\"Short sentence\", \"A really really really really really long sentence to test max length\"]\r\n\r\noutput = tokenizer(texts, padding=True, truncation=True, max_length=5, return_tensors='pt')\r\nprint(output['input_ids'].shape)\r\n\r\noutput = tokenizer(texts, padding=True, truncation=True, return_tensors='pt')\r\nprint(output['input_ids'].shape)\r\n```\r\n\r\nOutput:\r\n```\r\nC:\\Anaconda3\\envs\\sbert\\lib\\site-packages\\transformers\\tokenization_utils_base.py:2227: UserWarning: `max_length` is ignored when `padding`=`True`.\r\n warnings.warn(\"`max_length` is ignored when `padding`=`True`.\")\r\ntorch.Size([2, 5])\r\ntorch.Size([2, 14])\r\n```` \r\n\r\n\r\nAs we see, max_length is not ignored when padding = True. It truncates the text as expected to a max_length of 5.\r\n\r\nI would say that the warning is incorrect and should not be raised. \r\n\r\nShould I fix it?\r\n\r\nOr is it really intended that max_length is ignored when padding=True? 
This would be horrible, I want to truncate my text to a certain max_length.", "pr_html_url": "https://github.com/huggingface/transformers/pull/13829", "file_loc": {"base_commit": "8bbb53e20b7873ba7f63be70d4d798e0c3568bfa", "files": [{"path": "src/transformers/tokenization_utils_base.py", "status": "modified", "Loc": {"('PreTrainedTokenizerBase', '_get_padding_truncation_strategies', 2183)": {"mod": [2226, 2227]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["src/transformers/tokenization_utils_base.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "huggingface", "repo_name": "transformers", "base_commit": "010e0460b22ddd7f74e31163f69ab3da2e9741ba", "iss_has_pr": 1, "iss_html_url": "https://github.com/huggingface/transformers/issues/3227", "iss_label": "Core: Pipeline\nVersion mismatch", "title": "An Error report about pipeline", "body": "# \ud83d\udc1b Bug\r\n\r\n## Information\r\n\r\nThis may be an easy question, but it has been bothering me all day.\r\n\r\nWhen I run the code: \r\nnlp = pipeline(\"question-answering\")\r\n\r\nIt always tells me: \r\nCouldn't reach server at 'https://s3.amazonaws.com/models.huggingface.co/bert/distilbert-base-cased-distilled-squad-modelcard.json' to download model card file.\r\nCreating an empty model card.\r\n\r\nIf I ignore it and continue to run the rest of the code: \r\nnlp({\r\n 'question': 'What is the name of the repository ?',\r\n 'context': 'Pipeline have been included in the huggingface/transformers repository'\r\n})\r\n\r\nThe error will appear:\r\nKeyError: 'token_type_ids'", "pr_html_url": "https://github.com/huggingface/transformers/pull/3439", "file_loc": {"base_commit": "010e0460b22ddd7f74e31163f69ab3da2e9741ba", "files": [{"path": "examples/utils_multiple_choice.py", "status": "modified", "Loc": {"(None, 'convert_examples_to_features', 294)": {"mod": [323]}}}, {"path": "src/transformers/data/processors/squad.py", "status": "modified", "Loc": {"(None, 'squad_convert_example_to_features', 86)": {"add": [141]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["src/transformers/data/processors/squad.py", "examples/utils_multiple_choice.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "huggingface", "repo_name": "transformers", "base_commit": "ba1b3db70907b975b5ca52b9957c5ed7a186a0fa", "iss_has_pr": 1, "iss_html_url": "https://github.com/huggingface/transformers/issues/12762", "iss_label": "", "title": "t5 fast tokenizer save_vocabulary fails without sentencepiece file", "body": "## Environment info\r\n\r\n- `transformers` version: 4.9.0.dev0\r\n- Platform: Linux-5.4.0-1043-gcp-x86_64-with-glibc2.29\r\n- Python version: 3.8.10\r\n- PyTorch version (GPU?): 1.9.0+cu102 (False)\r\n- Tensorflow version (GPU?): 2.5.0 (False)\r\n- Flax version (CPU?/GPU?/TPU?): 0.3.4 (tpu)\r\n- Jax version: 0.2.16\r\n- JaxLib version: 0.1.68\r\n- Using GPU in script?: no (tpu)\r\n- Using distributed or parallel set-up in script?: I guess data parallel\r\n\r\n### Who can help\r\n\r\nModels:\r\n- t5: @patrickvonplaten\r\n\r\nLibrary:\r\n- tokenizers: @LysandreJik\r\n\r\n## Information\r\n\r\nModel I am using (Bert, XLNet ...):\r\n\r\nThe problem arises when using:\r\n* [x] the official 
example scripts: (give details below)\r\n* [ ] my own modified scripts: (give details below)\r\n\r\nThe tasks I am working on is:\r\n* [x] an official GLUE/SQUaD task: (give the name)\r\n* [] my own task or dataset: (give details below)\r\n\r\nTask is summarization\r\n\r\n## To reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1. Use the [summarization example code](https://github.com/huggingface/transformers/blob/3cd15c1dd62c5c9a9202fae9f00b8eba3eb2b95d/examples/pytorch/summarization/run_summarization.py) and fine tune a pre-trained t5 tokenizer and model created according to the flax mlm example scripts and [t5 tokenizer](https://github.com/huggingface/transformers/blob/master/examples/flax/language-modeling/t5_tokenizer_model.py) -- for instance [t5-base-norwegian](https://huggingface.co/patrickvonplaten/t5-base-norwegian/tree/main)\r\n\r\nWhen the finetuning-summary-trainer saves the model, it will also attempt to save the vocabulary. This will fail with the following stack trace, because the tokenizers `self.vocab_file` is None, where it is expected to point at a sentencepiece file:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/yeb/Developer/yhavinga/t5-base-dutch-summarization/run_summarization.py\", line 620, in <module>\r\n main()\r\n File \"/home/yeb/Developer/yhavinga/t5-base-dutch-summarization/run_summarization.py\", line 545, in main\r\n trainer.save_model() # Saves the tokenizer too for easy upload\r\n File \"/home/yeb/Developer/yhavinga/t5-base-dutch-summarization/transformers/src/transformers/trainer.py\", line 1883, in save_model\r\n self._save(output_dir)\r\n File \"/home/yeb/Developer/yhavinga/t5-base-dutch-summarization/transformers/src/transformers/trainer.py\", line 1933, in _save\r\n self.tokenizer.save_pretrained(output_dir)\r\n File \"/home/yeb/Developer/yhavinga/t5-base-dutch-summarization/transformers/src/transformers/tokenization_utils_base.py\", line 1958, in save_pretrained\r\n save_files = self._save_pretrained(\r\n File \"/home/yeb/Developer/yhavinga/t5-base-dutch-summarization/transformers/src/transformers/tokenization_utils_fast.py\", line 567, in _save_pretrained\r\n vocab_files = self.save_vocabulary(save_directory, filename_prefix=filename_prefix)\r\n File \"/home/yeb/Developer/yhavinga/t5-base-dutch-summarization/transformers/src/transformers/models/t5/tokenization_t5_fast.py\", line 150, in save_vocabulary\r\n if os.path.abspath(self.vocab_file) != os.path.abspath(out_vocab_file):\r\n File \"/usr/lib/python3.8/posixpath.py\", line 374, in abspath\r\n path = os.fspath(path)\r\nTypeError: expected str, bytes or os.PathLike object, not NoneType\r\n\r\nProcess finished with exit code 1\r\n```\r\n\r\nThe following hack works around the problem:\r\n```\r\ndiff --git a/src/transformers/models/t5/tokenization_t5_fast.py b/src/transformers/models/t5/tokenization_t5_fast.py\r\nindex 3f972b006..cc238a119 100644\r\n--- a/src/transformers/models/t5/tokenization_t5_fast.py\r\n+++ b/src/transformers/models/t5/tokenization_t5_fast.py\r\n@@ -147,9 +147,10 @@ class T5TokenizerFast(PreTrainedTokenizerFast):\r\n save_directory, (filename_prefix + \"-\" if filename_prefix else \"\") + VOCAB_FILES_NAMES[\"vocab_file\"]\r\n )\r\n \r\n- if os.path.abspath(self.vocab_file) != os.path.abspath(out_vocab_file):\r\n- copyfile(self.vocab_file, out_vocab_file)\r\n- logger.info(f\"Copy vocab file to {out_vocab_file}\")\r\n+ if self.vocab_file:\r\n+ if os.path.abspath(self.vocab_file) != os.path.abspath(out_vocab_file):\r\n+ copyfile(self.vocab_file, 
out_vocab_file)\r\n+ logger.info(f\"Copy vocab file to {out_vocab_file}\")\r\n \r\n return (out_vocab_file,)\r\n ```\r\n\r\n## Expected behavior\r\n\r\nNo error.\r\n", "pr_html_url": "https://github.com/huggingface/transformers/pull/12806", "file_loc": {"base_commit": "ba1b3db70907b975b5ca52b9957c5ed7a186a0fa", "files": [{"path": "src/transformers/models/albert/tokenization_albert_fast.py", "status": "modified", "Loc": {"('AlbertTokenizerFast', '__init__', 122)": {"add": [160]}, "('AlbertTokenizerFast', None, 73)": {"add": [218]}}}, {"path": "src/transformers/models/barthez/tokenization_barthez_fast.py", "status": "modified", "Loc": {"('BarthezTokenizerFast', '__init__', 110)": {"add": [139]}, "('BarthezTokenizerFast', None, 59)": {"add": [189]}}}, {"path": "src/transformers/models/big_bird/tokenization_big_bird_fast.py", "status": "modified", "Loc": {"('BigBirdTokenizerFast', '__init__', 104)": {"add": [140]}, "('BigBirdTokenizerFast', None, 59)": {"add": [229]}}}, {"path": "src/transformers/models/camembert/tokenization_camembert_fast.py", "status": "modified", "Loc": {"('CamembertTokenizerFast', '__init__', 106)": {"add": [137]}, "('CamembertTokenizerFast', None, 54)": {"add": [188]}}}, {"path": "src/transformers/models/herbert/tokenization_herbert_fast.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [25, 26, 27, 28]}}}, {"path": "src/transformers/models/mbart50/tokenization_mbart50_fast.py", "status": "modified", "Loc": {"('MBart50TokenizerFast', '__init__', 111)": {"add": [147]}, "('MBart50TokenizerFast', None, 57)": {"add": [260]}}}, {"path": "src/transformers/models/pegasus/tokenization_pegasus_fast.py", "status": "modified", "Loc": {"('PegasusTokenizerFast', '__init__', 99)": {"add": [150]}, "('PegasusTokenizerFast', None, 52)": {"add": [194]}}}, {"path": "src/transformers/models/reformer/tokenization_reformer_fast.py", "status": "modified", "Loc": {"('ReformerTokenizerFast', '__init__', 88)": {"add": [106]}, "('ReformerTokenizerFast', None, 54)": {"add": [108]}}}, {"path": "src/transformers/models/t5/tokenization_t5_fast.py", "status": "modified", "Loc": {"('T5TokenizerFast', '__init__', 105)": {"add": [139]}, "('T5TokenizerFast', None, 63)": {"add": [142]}}}, {"path": "src/transformers/models/xlm_roberta/tokenization_xlm_roberta_fast.py", "status": "modified", "Loc": {"('XLMRobertaTokenizerFast', '__init__', 118)": {"add": [147]}, "('XLMRobertaTokenizerFast', None, 67)": {"add": [200]}}}, {"path": "src/transformers/models/xlnet/tokenization_xlnet_fast.py", "status": "modified", "Loc": {"('XLNetTokenizerFast', '__init__', 125)": {"add": [166]}, "('XLNetTokenizerFast', None, 64)": {"add": [224]}}}, {"path": "src/transformers/tokenization_utils_fast.py", "status": "modified", "Loc": {"('PreTrainedTokenizerFast', None, 76)": {"add": [89]}, "('PreTrainedTokenizerFast', '_save_pretrained', 535)": {"mod": [554]}}}, {"path": "tests/test_tokenization_common.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [40, 58, 3391]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["src/transformers/models/barthez/tokenization_barthez_fast.py", "src/transformers/models/mbart50/tokenization_mbart50_fast.py", "src/transformers/models/pegasus/tokenization_pegasus_fast.py", "src/transformers/models/big_bird/tokenization_big_bird_fast.py", "src/transformers/models/camembert/tokenization_camembert_fast.py", 
"src/transformers/models/xlm_roberta/tokenization_xlm_roberta_fast.py", "src/transformers/models/reformer/tokenization_reformer_fast.py", "src/transformers/tokenization_utils_fast.py", "src/transformers/models/herbert/tokenization_herbert_fast.py", "src/transformers/models/xlnet/tokenization_xlnet_fast.py", "src/transformers/models/albert/tokenization_albert_fast.py", "src/transformers/models/t5/tokenization_t5_fast.py"], "doc": [], "test": ["tests/test_tokenization_common.py"], "config": [], "asset": []}}, {"organization": "huggingface", "repo_name": "transformers", "base_commit": "edb314ae2ba4ac0e89d6a31d48037b8943978bff", "iss_has_pr": 1, "iss_html_url": "https://github.com/huggingface/transformers/issues/28286", "iss_label": "", "title": "`contrastive-image-text/run_clip.py` example problems", "body": "### System Info\n\n- `transformers` version: 4.37.0.dev0\r\n- Platform: Linux-5.15.0-88-generic-x86_64-with-glibc2.31\r\n- Python version: 3.11.5\r\n- Huggingface_hub version: 0.20.1\r\n- Safetensors version: 0.4.1\r\n- Accelerate version: 0.25.0\r\n- Accelerate config: not found\r\n- PyTorch version (GPU?): 2.1.2+cu121 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: Yes\r\n- Using distributed or parallel set-up in script?: No\n\n### Who can help?\n\n@amyeroberts\n\n### Information\n\n- [X] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [X] My own task or dataset (give details below)\n\n### Reproduction\n\nThe following example script has some issues: https://github.com/huggingface/transformers/blob/main/examples/pytorch/contrastive-image-text/run_clip.py\r\n\r\n#### Minor issue:\r\nWhen using `--train_file dataset.csv`, the tokenizer fails if the caption is \"None\", \"null\" or \"NA\"\r\n\r\n#### Curiosity:\r\n- There seems to be no parameter to specify the hub repository to push to.\r\n- Also, there seems to be no place to track the experiment (like wandb)\r\n\r\n#### Actual issue\r\n\r\nWith the following parameters\r\n```bash\r\n --model_name_or_path \"openai/clip-vit-base-patch32\" \\\r\n --freeze_text_model \\\r\n --train_file \"train.csv\" \\\r\n --image_column \"image_path\" \\\r\n --caption_column \"caption\" \\\r\n --remove_unused_columns=False \\\r\n --do_train \\\r\n --per_device_train_batch_size=\"64\" \\\r\n --per_device_eval_batch_size=\"64\" \\\r\n --learning_rate=\"5e-5\" --warmup_steps=\"0\" --weight_decay 0.1 \\\r\n --overwrite_output_dir \\\r\n --push_to_hub\r\n```\r\n\r\nI get the following error:\r\n```bash\r\n[INFO|trainer.py:1712] 2023-12-30 18:16:36,697 >> ***** Running training *****\r\n[INFO|trainer.py:1713] 2023-12-30 18:16:36,697 >> Num examples = 348,784\r\n[INFO|trainer.py:1714] 2023-12-30 18:16:36,697 >> Num Epochs = 3\r\n[INFO|trainer.py:1715] 2023-12-30 18:16:36,698 >> Instantaneous batch size per device = 64\r\n[INFO|trainer.py:1718] 2023-12-30 18:16:36,698 >> Total train batch size (w. 
parallel, distributed & accumulation) = 64\r\n[INFO|trainer.py:1719] 2023-12-30 18:16:36,698 >> Gradient Accumulation steps = 1\r\n[INFO|trainer.py:1720] 2023-12-30 18:16:36,698 >> Total optimization steps = 16,350\r\n[INFO|trainer.py:1721] 2023-12-30 18:16:36,698 >> Number of trainable parameters = 88,111,361\r\n 0%| | 0/16350 [00:00<?, ?it/s]Traceback (most recent call last):\r\n File \"/home/amoryo/sign-language/signwriting-clip/signwriting_clip/transformers/examples/pytorch/contrastive-image-text/run_clip.py\", line 590, in <module>\r\n main()\r\n File \"/home/amoryo/sign-language/signwriting-clip/signwriting_clip/transformers/examples/pytorch/contrastive-image-text/run_clip.py\", line 559, in main\r\n train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/data/amoryo/conda/envs/clip/lib/python3.11/site-packages/transformers/trainer.py\", line 1534, in train\r\n return inner_training_loop(\r\n ^^^^^^^^^^^^^^^^^^^^\r\n File \"/data/amoryo/conda/envs/clip/lib/python3.11/site-packages/transformers/trainer.py\", line 1860, in _inner_training_loop\r\n tr_loss_step = self.training_step(model, inputs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/data/amoryo/conda/envs/clip/lib/python3.11/site-packages/transformers/trainer.py\", line 2737, in training_step\r\n loss = self.compute_loss(model, inputs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/data/amoryo/conda/envs/clip/lib/python3.11/site-packages/transformers/trainer.py\", line 2760, in compute_loss\r\n outputs = model(**inputs)\r\n ^^^^^^^^^^^^^^^\r\n File \"/data/amoryo/conda/envs/clip/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/data/amoryo/conda/envs/clip/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/data/amoryo/conda/envs/clip/lib/python3.11/site-packages/transformers/models/clip/modeling_clip.py\", line 1108, in forward\r\n text_outputs = self.text_model(\r\n ^^^^^^^^^^^^^^^^\r\n File \"/data/amoryo/conda/envs/clip/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/data/amoryo/conda/envs/clip/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/data/amoryo/conda/envs/clip/lib/python3.11/site-packages/transformers/models/clip/modeling_clip.py\", line 691, in forward\r\n hidden_states = self.embeddings(input_ids=input_ids, position_ids=position_ids)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/data/amoryo/conda/envs/clip/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/data/amoryo/conda/envs/clip/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/data/amoryo/conda/envs/clip/lib/python3.11/site-packages/transformers/models/clip/modeling_clip.py\", line 219, in forward\r\n embeddings = inputs_embeds + position_embeddings\r\n 
~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~\r\nRuntimeError: The size of tensor a (128) must match the size of tensor b (77) at non-singleton dimension 1\r\n```\n\n### Expected behavior\n\nExample script should train, and push to hub correctly", "pr_html_url": "https://github.com/huggingface/transformers/pull/28482", "file_loc": {"base_commit": "edb314ae2ba4ac0e89d6a31d48037b8943978bff", "files": [{"path": "examples/pytorch/contrastive-image-text/run_clip.py", "status": "modified", "Loc": {"(None, 'main', 241)": {"mod": [562]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["examples/pytorch/contrastive-image-text/run_clip.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "huggingface", "repo_name": "transformers", "base_commit": "6d00033e97e1751a897f2317fdfd35dd853cee29", "iss_has_pr": 1, "iss_html_url": "https://github.com/huggingface/transformers/issues/1801", "iss_label": "wontfix", "title": "run_glue.py RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cuda:3", "body": "## \ud83d\udc1b Bug\r\n\r\n<!-- Important information -->\r\n\r\nModel I am using (Bert, XLNet....): Bert\r\n\r\nLanguage I am using the model on (English, Chinese....): English\r\n\r\nThe problem arises when using:\r\n* [ ] the official example scripts: (give details) : transformers/examples/run_glue.py\r\n* [ ] my own modified scripts: (give details)\r\n\r\nThe task I am working on is:\r\n* [ ] an official GLUE/SQUaD task: (give the name) : MRPC\r\n* [ ] my own task or dataset: (give details)\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1.\r\nI've tested using\r\npython -m pytest -sv ./transformers/tests/\r\npython -m pytest -sv ./examples/\r\nand it works fine except for a couple of tasks.\r\n\r\n2.\r\nAfter the tests, I downloaded the GLUE data files via\r\nhttps://gist.github.com/W4ngatang/60c2bdb54d156a41194446737ce03e2e\r\nand tried run_glue.py\r\n\r\npip install -r ./examples/requirements.txt\r\nexport GLUE_DIR=/path/to/glue\r\nexport TASK_NAME=MRPC\r\n\r\n\r\n3.\r\npython ./examples/run_glue.py \\\r\n --model_type bert \\\r\n --model_name_or_path bert-base-uncased \\\r\n --task_name $TASK_NAME \\\r\n --do_train \\\r\n --do_eval \\\r\n --do_lower_case \\\r\n --data_dir $GLUE_DIR/$TASK_NAME \\\r\n --max_seq_length 128 \\\r\n --per_gpu_eval_batch_size=8 \\\r\n --per_gpu_train_batch_size=8 \\\r\n --learning_rate 2e-5 \\\r\n --num_train_epochs 3.0 \\\r\n --output_dir /tmp/$TASK_NAME/\r\n\r\nand I got this error.\r\n\r\n`11/11/2019 21:10:50 - INFO - __main__ - Total optimization steps = 345\r\nEpoch: 0%| | 0/3 [00:00<?, ?it/sTraceback (most recent call last): | 0/115 [00:00<?, ?it/s]\r\n File \"./examples/run_glue.py\", line 552, in <module>\r\n main()\r\n File \"./examples/run_glue.py\", line 503, in main\r\n global_step, tr_loss = train(args, train_dataset, model, tokenizer)\r\n File \"./examples/run_glue.py\", line 146, in train\r\n outputs = model(**inputs)\r\n File \"/home/insublee/anaconda3/envs/py_torch4/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/insublee/anaconda3/envs/py_torch4/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py\", line 146, in forward\r\n \"them on device: {}\".format(self.src_device_obj, t.device))\r\nRuntimeError: 
module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cuda:3`\r\n<!-- A clear and concise description of what you expected to happen. -->\r\n\r\n## Environment\r\n\r\n* OS: ubuntu16.04LTS\r\n* Python version: 3.7.5\r\n* PyTorch version: 1.2.0\r\n* PyTorch Transformers version (or branch): 2.1.1\r\n* Using GPU? 4-way 2080ti\r\n* Distributed or parallel setup? cuda10.0 cudnn 7.6.4\r\n* Any other relevant information:\r\n\r\n## Additional context\r\nThank you.", "pr_html_url": "https://github.com/huggingface/transformers/pull/3842", "file_loc": {"base_commit": "6d00033e97e1751a897f2317fdfd35dd853cee29", "files": [{"path": "examples/hans/test_hans.py", "status": "modified", "Loc": {"(None, 'evaluate', 240)": {"mod": [258]}}}, {"path": "examples/mm-imdb/run_mmimdb.py", "status": "modified", "Loc": {"(None, 'evaluate', 265)": {"mod": [281]}}}, {"path": "examples/ner/run_ner.py", "status": "modified", "Loc": {"(None, 'evaluate', 247)": {"mod": [256]}}}, {"path": "examples/run_language_modeling.py", "status": "modified", "Loc": {"(None, 'evaluate', 407)": {"mod": [430]}}}, {"path": "examples/run_multiple_choice.py", "status": "modified", "Loc": {"(None, 'evaluate', 242)": {"mod": [259]}}}, {"path": "examples/run_xnli.py", "status": "modified", "Loc": {"(None, 'evaluate', 252)": {"mod": [269]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["examples/mm-imdb/run_mmimdb.py", "examples/run_multiple_choice.py", "examples/run_xnli.py", "examples/run_language_modeling.py", "examples/ner/run_ner.py"], "doc": [], "test": ["examples/hans/test_hans.py"], "config": [], "asset": []}}, {"organization": "huggingface", "repo_name": "transformers", "base_commit": "43b9d93875cbf6756baf402a4720ca23d8c75015", "iss_has_pr": 1, "iss_html_url": "https://github.com/huggingface/transformers/issues/6193", "iss_label": "", "title": "Some weights not initialized in pre-trained RobertaForMaskedLM", "body": "The bug is similar to #2202.\r\n\r\nI am trying to evaluate MLM perplexity (without training/finetuning) using Roberta with `run_language_modeling.py` (from the [official example](https://github.com/huggingface/transformers/tree/master/examples/language-modeling)). However, some weights seem to be reinitialized instead of being loaded from the pretrained Roberta checkpoint.\r\n\r\n## To Reproduce (~~with master branch~~):\r\n\r\n```\r\nimport logging\r\nlogging.basicConfig(level=logging.INFO)\r\nfrom transformers import RobertaForMaskedLM\r\n_ = RobertaForMaskedLM.from_pretrained('roberta-base')\r\n```\r\n\r\nIt gives the following warning message:\r\n```\r\nWARNING:transformers.modeling_utils:Some weights of RobertaForMaskedLM were not initialized from the model checkpoint at roberta-base and are newly initialized: ['roberta.embeddings.position_ids', 'lm_head.decoder.bias']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\n```\r\n\r\nThe perplexities I get on direct evaluation on Wikitext-2/103 datasets are also much higher than with the official Roberta implementation from fairseq; a quick check of the suspect bias is sketched below. 
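A minimal sketch, assuming the `RobertaLMHead` layout of transformers around this release (the attribute access is an assumption; some builds expose no decoder bias at all), to see whether `lm_head.decoder.bias` actually ended up tied to the loaded `lm_head.bias`:\r\n```python\r\nimport torch\r\nfrom transformers import RobertaForMaskedLM\r\n\r\nmodel = RobertaForMaskedLM.from_pretrained(\"roberta-base\")\r\n\r\n# If loading worked, the decoder bias should be the same tensor as (or at\r\n# least equal to) the lm_head bias; a freshly initialized bias will differ.\r\ndec_bias = getattr(model.lm_head.decoder, \"bias\", None)\r\nif dec_bias is None:\r\n    print(\"decoder has no bias parameter in this version\")\r\nelse:\r\n    print(dec_bias is model.lm_head.bias)\r\n    print(torch.equal(dec_bias, model.lm_head.bias))\r\n```\r\n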
I suspect this could be the reason.", "pr_html_url": "https://github.com/huggingface/transformers/pull/7282", "file_loc": {"base_commit": "43b9d93875cbf6756baf402a4720ca23d8c75015", "files": [{"path": "src/transformers/modeling_roberta.py", "status": "modified", "Loc": {"('RobertaForMaskedLM', None, 303)": {"add": [305]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["src/transformers/modeling_roberta.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "huggingface", "repo_name": "transformers", "base_commit": "836e88caee95eb37a860a6c82bbd2becc6b9dc7b", "iss_has_pr": 1, "iss_html_url": "https://github.com/huggingface/transformers/issues/30073", "iss_label": "Feature request\nAudio", "title": "SDPA/FA2 Attention for the Wav2Vec2 Family of Models", "body": "### Feature request\n\nAddition of [PyTorch SDPA](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html) and [Flash Attention 2](https://github.com/Dao-AILab/flash-attention) to the Wav2Vec2 modelling code.\n\n### Motivation\n\nWav2Vec2 and its derived models remain some of the most popular speech recognition and audio classification models in the library. However, only one [attention implementation](https://github.com/huggingface/transformers/blob/9b5a6450d481b0f02834684ffd8b3ba4cbbd6fe0/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L487) is available to users: the slowest and most memory-consuming \"eager\" mode. We should update the modelling code to provide two newer attention implementations: SDPA and FA2, both of which are faster and more memory efficient.\r\n\r\nSince Wav2Vec2 copies its attention from BART, and SDPA & FA2 were added for BART in [this PR](https://github.com/huggingface/transformers/pull/27203), this should be quite a straightforward PR, mostly copying out the logic from the BART PR and pasting it into Wav2Vec2. We should then be sure to add two fast tests (one for each of SDPA and FA2), e.g. in the style of the test [here](https://github.com/huggingface/transformers/blob/9b5a6450d481b0f02834684ffd8b3ba4cbbd6fe0/tests/models/whisper/test_modeling_whisper.py#L891), and two slow integration tests, e.g. 
in the style of the tests [here](https://github.com/huggingface/transformers/blob/9b5a6450d481b0f02834684ffd8b3ba4cbbd6fe0/tests/models/gemma/test_modeling_gemma.py#L657-L659).\n\n### Your contribution\n\nWant to take this one @kamilakesbi?", "pr_html_url": "https://github.com/huggingface/transformers/pull/30121", "file_loc": {"base_commit": "836e88caee95eb37a860a6c82bbd2becc6b9dc7b", "files": [{"path": "docs/source/en/model_doc/hubert.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [46]}}}, {"path": "docs/source/en/model_doc/wav2vec2.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [41]}}}, {"path": "docs/source/en/perf_infer_gpu_one.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [66, 196]}}}, {"path": "src/transformers/models/data2vec/modeling_data2vec_audio.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [22, 41, 47, 67, 480], "mod": [506]}, "('Data2VecAudioEncoder', '__init__', 543)": {"add": [550]}, "('Data2VecAudioPreTrainedModel', None, 674)": {"add": [683]}, "('Data2VecAudioEncoderLayer', '__init__', 508)": {"mod": [510]}, "('Data2VecAudioEncoder', 'forward', 552)": {"mod": [568, 569, 570, 571, 572, 573]}}}, {"path": "src/transformers/models/hubert/modeling_hubert.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [21, 33, 39, 63, 543], "mod": [569, 630]}, "('HubertEncoder', '__init__', 678)": {"add": [685]}, "('HubertEncoderStableLayerNorm', '__init__', 760)": {"add": [769]}, "('HubertPreTrainedModel', None, 844)": {"add": [853]}, "('HubertEncoderLayer', '__init__', 571)": {"mod": [573]}, "('HubertEncoderLayerStableLayerNorm', '__init__', 632)": {"mod": [634]}, "('HubertEncoder', 'forward', 687)": {"mod": [703, 704, 705, 706, 707, 708]}, "('HubertEncoderStableLayerNorm', 'forward', 771)": {"mod": [787, 788, 789, 790, 791, 792]}}}, {"path": "src/transformers/models/sew/modeling_sew.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [22, 34, 61, 538], "mod": [31, 564]}, "('SEWEncoder', '__init__', 600)": {"add": [609]}, "('SEWPreTrainedModel', None, 703)": {"add": [712]}, "('SEWEncoderLayer', '__init__', 566)": {"mod": [568]}, "('SEWEncoder', 'forward', 611)": {"mod": [623, 624, 626, 627, 628, 629, 630, 631, 632, 633, 634, 635]}}}, {"path": "src/transformers/models/unispeech/modeling_unispeech.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [23, 36, 42, 62, 579], "mod": [605, 666]}, "('UniSpeechEncoder', '__init__', 714)": {"add": [721]}, "('UniSpeechEncoderStableLayerNorm', '__init__', 796)": {"add": [805]}, "('UniSpeechPreTrainedModel', None, 950)": {"add": [959]}, "('UniSpeechEncoderLayer', '__init__', 607)": {"mod": [609]}, "('UniSpeechEncoderLayerStableLayerNorm', '__init__', 668)": {"mod": [670]}, "('UniSpeechEncoder', 'forward', 723)": {"mod": [739, 740, 741, 742, 743, 744]}, "('UniSpeechEncoderStableLayerNorm', 'forward', 807)": {"mod": [823, 824, 825, 826, 827, 828]}}}, {"path": "src/transformers/models/unispeech_sat/modeling_unispeech_sat.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [23, 43, 50, 78, 596], "mod": [622, 683]}, "('UniSpeechSatEncoder', '__init__', 731)": {"add": [738]}, "('UniSpeechSatEncoderStableLayerNorm', '__init__', 813)": {"add": [822]}, "('UniSpeechSatPreTrainedModel', None, 967)": {"add": [976]}, "('UniSpeechSatEncoderLayer', '__init__', 624)": {"mod": [626]}, "('UniSpeechSatEncoderLayerStableLayerNorm', '__init__', 685)": {"mod": [687]}, "('UniSpeechSatEncoder', 'forward', 740)": {"mod": [756, 757, 758, 759, 760, 761]}, 
"('UniSpeechSatEncoderStableLayerNorm', 'forward', 824)": {"mod": [840, 841, 842, 843, 844, 845]}}}, {"path": "src/transformers/models/wav2vec2/modeling_wav2vec2.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [23, 46, 61, 94, 644]}, "('Wav2Vec2Encoder', '__init__', 749)": {"add": [756]}, "('Wav2Vec2EncoderStableLayerNorm', '__init__', 830)": {"add": [839]}, "('Wav2Vec2PreTrainedModel', None, 1064)": {"add": [1073]}, "('Wav2Vec2ForPreTraining', 'forward', 1649)": {"add": [1744]}, "('Wav2Vec2EncoderLayer', '__init__', 670)": {"mod": [672]}, "('Wav2Vec2EncoderLayerStableLayerNorm', '__init__', 704)": {"mod": [706]}, "('Wav2Vec2Encoder', 'forward', 758)": {"mod": [774, 775, 776, 777, 778, 779]}, "('Wav2Vec2EncoderStableLayerNorm', 'forward', 841)": {"mod": [857, 858, 859, 860, 861, 862]}}}, {"path": "src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py", "status": "modified", "Loc": {"('Wav2Vec2ConformerForPreTraining', 'forward', 1422)": {"add": [1517]}}}, {"path": "tests/models/wav2vec2/test_modeling_wav2vec2.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [27, 35, 38]}, "('Wav2Vec2ModelIntegrationTest', 'test_inference_mms_1b_all', 1958)": {"add": [1997]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["src/transformers/models/hubert/modeling_hubert.py", "src/transformers/models/data2vec/modeling_data2vec_audio.py", "src/transformers/models/unispeech_sat/modeling_unispeech_sat.py", "src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py", "src/transformers/models/wav2vec2/modeling_wav2vec2.py", "src/transformers/models/unispeech/modeling_unispeech.py", "src/transformers/models/sew/modeling_sew.py"], "doc": ["docs/source/en/model_doc/wav2vec2.md", "docs/source/en/model_doc/hubert.md", "docs/source/en/perf_infer_gpu_one.md"], "test": ["tests/models/wav2vec2/test_modeling_wav2vec2.py"], "config": [], "asset": []}}, {"organization": "huggingface", "repo_name": "transformers", "base_commit": "95ffbe168690d34e385cdd16c69e9a3f8d877abf", "iss_has_pr": 1, "iss_html_url": "https://github.com/huggingface/transformers/issues/11294", "iss_label": "", "title": "serious bug with trainer.py when restarting the training from a checkpoint", "body": "## Environment info\r\n<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.\r\n Don't forget to fill out the missing fields in that output! 
-->\r\n\r\n- `transformers` version: 4.5.1\r\n- Platform: Linux\r\n- Python version: 3.8\r\n- PyTorch version (GPU?): 1.8\r\n- Tensorflow version (GPU?): - \r\n- Using GPU in script?: - \r\n- Using distributed or parallel set-up in script?: - \r\n\r\n### Who can help\r\n<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @\r\n If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.\r\n Please tag fewer than 3 people.\r\n\r\nModels:\r\n\r\n- albert, bert, xlm: @LysandreJik\r\n- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj\r\n- longformer, reformer, transfoxl, xlnet: @patrickvonplaten\r\n- fsmt: @stas00\r\n- funnel: @sgugger\r\n- gpt2: @patrickvonplaten, @LysandreJik\r\n- rag: @patrickvonplaten, @lhoestq\r\n- tensorflow: @Rocketknight1\r\n\r\nLibrary:\r\n\r\n- benchmarks: @patrickvonplaten\r\n- deepspeed: @stas00\r\n- ray/raytune: @richardliaw, @amogkam\r\n- text generation: @patrickvonplaten\r\n- tokenizers: @LysandreJik\r\n- trainer: @sgugger\r\n- pipelines: @LysandreJik\r\n\r\nDocumentation: @sgugger\r\n\r\nModel hub:\r\n\r\n- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.\r\n\r\nHF projects:\r\n\r\n- datasets: [different repo](https://github.com/huggingface/datasets)\r\n- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)\r\n\r\nExamples:\r\n\r\n- maintained examples (not research project or legacy): @sgugger, @patil-suraj\r\n- research_projects/bert-loses-patience: @JetRunner\r\n- research_projects/distillation: @VictorSanh\r\n\r\n -->\r\n\r\ntrainer: @sgugger, @patil-suraj\r\n\r\n## Information\r\n\r\nHi, I see a serious issue with the trainer.py class. Please consider the run_translation.py script [1]: after you define the model, freeze the encoder, or wrap the model in a class, i.e. modify the model after this line https://github.com/huggingface/transformers/blob/d9c62047a8d75e18d2849d345ab3394875a712ef/examples/seq2seq/run_translation.py#L331 \r\n\r\nThen, during training, stop the run and try to continue from the place it stopped. If you print the trainable parameters inside trainer.py, right before this line:\r\n\r\nhttps://github.com/huggingface/transformers/blob/d9c62047a8d75e18d2849d345ab3394875a712ef/src/transformers/trainer.py#L1062\r\n\r\nlike this\r\n```\r\nfor n, p in model.named_parameters():\r\n    if p.requires_grad:\r\n        print(n)\r\n```\r\n\r\nwhat do we see? All parameters are there, even the ones we froze. This is a serious bug: if the user modifies the model after creation, those modifications are not taken into account when restarting the training. Could you kindly have a look?\r\nThanks\r\n\r\n[1] https://github.com/huggingface/transformers/blob/master/examples/seq2seq/run_translation.py \r\n\r\n\r\n## Expected behavior\r\n\r\nThe user should be able to resume training the modified model exactly as it was modified. 
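For concreteness, a minimal sketch (not taken from the report) of the freeze-then-resume behavior described above; `t5-small` and freezing the encoder are illustrative assumptions, with a fresh `from_pretrained` load standing in for the checkpoint reload the Trainer performs on resume:\r\n```python\r\nfrom transformers import AutoModelForSeq2SeqLM\r\n\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained(\"t5-small\")\r\n\r\n# Modify the model after creation: freeze the encoder.\r\nfor param in model.get_encoder().parameters():\r\n    param.requires_grad = False\r\n\r\n# Count of trainable parameter tensors, encoder now excluded.\r\nprint(sum(p.requires_grad for p in model.parameters()))\r\n\r\n# Rebuilding the model from saved weights (as a resume does) forgets the\r\n# requires_grad flags set above:\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained(\"t5-small\")\r\nprint(sum(p.requires_grad for p in model.parameters()))  # everything trainable again\r\n```\r\n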
", "pr_html_url": "https://github.com/huggingface/transformers/pull/11318", "file_loc": {"base_commit": "95ffbe168690d34e385cdd16c69e9a3f8d877abf", "files": [{"path": "src/transformers/configuration_utils.py", "status": "modified", "Loc": {"('PretrainedConfig', '__init__', 196)": {"mod": [274]}}}, {"path": "src/transformers/trainer.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [55, 58]}, "('Trainer', 'train', 933)": {"add": [999], "mod": [1003, 1004, 1005, 1007, 1284, 1285, 1286, 1287, 1288, 1289, 1290]}}}, {"path": "tests/test_trainer.py", "status": "modified", "Loc": {"('TrainerIntegrationTest', None, 287)": {"add": [727]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["src/transformers/configuration_utils.py", "src/transformers/trainer.py"], "doc": [], "test": ["tests/test_trainer.py"], "config": [], "asset": []}}, {"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "61cbee18ae0ea0c20773f7257dc62923d9a42240", "iss_has_pr": 1, "iss_html_url": "https://github.com/geekan/MetaGPT/issues/1014", "iss_label": "bug", "title": "[IPKernelApp] WARNING | Parent appears to have exited, shutting down.", "body": "**Bug description**\r\n<!-- Clearly and directly describe the current bug -->\r\n\r\nI got the error \"[IPKernelApp] WARNING | Parent appears to have exited, shutting down.\" while I'm running the example.\r\n\r\n**Bug solved method**\r\n<!-- If you solved the bug, describe the idea or process to solve the current bug. Of course, you can also paste the URL address of your Pull Request. -->\r\n<!-- If not, provide more auxiliary information to facilitate our further positioning and investigation -->\r\n\r\n**Environment information**\r\n<!-- Environment\uff1aSystem version (like ubuntu 22.04), Python version (conda python 3.7), LLM type and model (OpenAI gpt-4-1106-preview) -->\r\n\r\n- LLM type and model name:\r\n- System version: MacOS\r\n- Python version: Python 3.11.7\r\n\r\n<!-- Dependent packagess\uff1athe packages version cause the bug(like `pydantic 1.10.8`), installation method\uff08like `pip install metagpt` or `pip install from source` or `run in docker`\uff09 -->\r\n\r\n- packages version:\r\n- installation method: \r\n\r\n**Screenshots or logs**\r\n<!-- Screenshots or logs of the bug can help us understand the problem more quickly -->\r\n```bash\r\nMacBook-Pro mytest % python solve_math_problems.py\r\n2024-03-15 22:03:02.242 | INFO | metagpt.const:get_metagpt_package_root:29 - Package root set to /Users/jason/git/github/MetaGPT/workspace/mytest\r\n-```json\r\n[\r\n {\r\n \"task_id\": \"1\",\r\n \"dependent_task_ids\": [],\r\n \"instruction\": \"Find the prime factorization of 6 and 126.\"\r\n },\r\n {\r\n \"task_id\": \"2\",\r\n \"dependent_task_ids\": [\"1\"],\r\n \"instruction\": \"Determine the values of m and n based on the prime factorization and the given conditions.\"\r\n },\r\n {\r\n \"task_id\": \"3\",\r\n \"dependent_task_ids\": [\"2\"],\r\n \"instruction\": \"Calculate the least possible value of m + n.\"\r\n }\r\n]\r\n-```\r\n2024-03-15 22:03:07.142 | INFO | metagpt.utils.cost_manager:update_cost:52 - Total running cost: $0.001 | Max budget: $10.000 | Current cost: $0.001, prompt_tokens: 265, completion_tokens: 123\r\n2024-03-15 22:03:07.142 | INFO | metagpt.roles.role:_plan_and_act:494 - ready to take on task task_id='1' dependent_task_ids=[] instruction='Find the 
prime factorization of 6 and 126.' task_type='' code='' result='' is_success=False is_finished=False\r\n2024-03-15 22:03:07.142 | INFO | metagpt.roles.di.data_interpreter:_write_code:79 - ready to WriteCodeWithoutTools\r\n2024-03-15 22:03:09.482 | INFO | metagpt.utils.cost_manager:update_cost:52 - Total running cost: $0.001 | Max budget: $10.000 | Current cost: $0.001, prompt_tokens: 557, completion_tokens: 66\r\n 1 import math\r\n 2\r\n 3 # Prime factorization of 6\r\n 4 prime_factors_6 = [2, 3]\r\n 5\r\n 6 # Prime factorization of 126\r\n 7 prime_factors_126 = [2, 3, 3, 7]\r\n0.00s - Debugger warning: It seems that frozen modules are being used, which may\r\n0.00s - make the debugger miss breakpoints. Please pass -Xfrozen_modules=off\r\n0.00s - to python to disable frozen modules.\r\n0.00s - Note: Debugging will proceed. Set PYDEVD_DISABLE_FILE_VALIDATION=1 to disable this validation.\r\n0.00s - Debugger warning: It seems that frozen modules are being used, which may\r\n0.00s - make the debugger miss breakpoints. Please pass -Xfrozen_modules=off\r\n0.00s - to python to disable frozen modules.\r\n0.00s - Note: Debugging will proceed. Set PYDEVD_DISABLE_FILE_VALIDATION=1 to disable this validation.\r\n\r\n2024-03-15 22:03:10.305 | INFO | metagpt.roles.role:_plan_and_act:494 - ready to take on task task_id='2' dependent_task_ids=['1'] instruction='Determine the values of m and n based on the prime factorization and the given conditions.' task_type='' code='' result='' is_success=False is_finished=False\r\n2024-03-15 22:03:10.306 | INFO | metagpt.roles.di.data_interpreter:_write_code:79 - ready to WriteCodeWithoutTools\r\n2024-03-15 22:03:11.936 | INFO | metagpt.utils.cost_manager:update_cost:52 - Total running cost: $0.001 | Max budget: $10.000 | Current cost: $0.001, prompt_tokens: 615, completion_tokens: 21\r\n 1 from sympy import *\r\n\r\n2024-03-15 22:03:12.123 | INFO | metagpt.roles.role:_plan_and_act:494 - ready to take on task task_id='3' dependent_task_ids=['2'] instruction='Calculate the least possible value of m + n.' 
task_type='' code='' result='' is_success=False is_finished=False\r\n2024-03-15 22:03:12.123 | INFO | metagpt.roles.di.data_interpreter:_write_code:79 - ready to WriteCodeWithoutTools\r\n2024-03-15 22:03:34.325 | INFO | metagpt.utils.cost_manager:update_cost:52 - Total running cost: $0.001 | Max budget: $10.000 | Current cost: $0.001, prompt_tokens: 612, completion_tokens: 21\r\n 1 from sympy import *\r\n\r\nMacBook-Pro mytest % [IPKernelApp] WARNING | Parent appears to have exited, shutting down.\r\n```\r\n", "pr_html_url": "https://github.com/geekan/MetaGPT/pull/1141", "file_loc": {"base_commit": "61cbee18ae0ea0c20773f7257dc62923d9a42240", "files": [{"path": "metagpt/roles/di/data_interpreter.py", "status": "modified", "Loc": {"('DataInterpreter', '_plan_and_act', 88)": {"mod": [89, 90, 91]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["metagpt/roles/di/data_interpreter.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "5e8bd105177e08848990d32b9ea636daa639be19", "iss_has_pr": 1, "iss_html_url": "https://github.com/geekan/MetaGPT/issues/1290", "iss_label": "", "title": "Validation Error ", "body": "ValidationError: 1 validation error for Config llm Field required [type=missing, input_value={'PATH': '/Users/psyb0rg/..._INIT_AT_FORK': 'FALSE'}, input_type=dict] For further information visit https://errors.pydantic.dev/2.7/v/missing", "pr_html_url": "https://github.com/geekan/MetaGPT/pull/1324", "file_loc": {"base_commit": "5e8bd105177e08848990d32b9ea636daa639be19", "files": [{"path": "metagpt/configs/llm_config.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [15]}, "('LLMConfig', 'check_llm_key', 95)": {"mod": [97]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["metagpt/configs/llm_config.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "b17846401ec7d12b73079fb21f3939ad9e9e2d70", "iss_has_pr": 1, "iss_html_url": "https://github.com/geekan/MetaGPT/issues/476", "iss_label": "", "title": "How to use FaissStore?", "body": "I saw an example listed under the examples folder, https://github.com/geekan/MetaGPT/blob/ccc4c9e04debfdb8296c342d7a3f9606f407e007/examples/search_kb.py#L14-L16\r\n\r\nbut the example.json is not provided.\r\n\r\nI have tried to make fake data whose structure follows the code.\r\n```json\r\n[\r\n {\r\n \"source\": \"Which facial cleanser is good for oily skin?\",\r\n \"output\": \"ABC cleanser is preferred by many with oily skin.\"\r\n },\r\n {\r\n \"source\": \"Which facial cleanser is good for oily skin?\",\r\n \"output\": \"For oily skin, consider using DEF facial wash.\"\r\n },\r\n {\r\n \"source\": \"Which facial cleanser is good for oily skin?\",\r\n \"output\": \"XYZ facial cleanser is suitable for oily skin.\"\r\n },\r\n {\r\n \"source\": \"Which facial cleanser is good for oily skin?\",\r\n \"output\": \"XYZ facial cleanser is suitable for oily skin.\"\r\n },\r\n {\r\n \"source\": \"Which facial cleanser is good for oily skin?\",\r\n \"output\": \"XYZ facial cleanser is suitable for oily skin.\"\r\n },\r\n {\r\n \"source\": \"Is L'Oreal good to use?\",\r\n \"output\": 
\"L'Oreal is a reputable brand and is generally considered good.\"\r\n },\r\n {\r\n \"source\": \"Is L'Oreal good to use?\",\r\n \"output\": \"L'Oreal is a popular brand with many positive reviews.\"\r\n },\r\n {\r\n \"source\": \"Is L'Oreal good to use?\",\r\n \"output\": \"Many people find L'Oreal products effective.\"\r\n },\r\n {\r\n \"source\": \"Is L'Oreal good to use?\",\r\n \"output\": \"L'Oreal is a popular brand with many positive reviews.\"\r\n },\r\n {\r\n \"source\": \"Is L'Oreal good to use?\",\r\n \"output\": \"Many people find L'Oreal products effective.\"\r\n }\r\n]\r\n```\r\nbut the console gives me the below information, it seems my fake data is irrelevant to the SearchAndSummarize.\r\n```shell\r\n(metagpt) yhtao@PC:/mnt/d/github_repo/MetaGPT$ /home/yhtao/anaconda3/envs/metagpt/bin/python /mnt/d/github_repo/MetaGPT/examples/search_kb.py\r\n2023-11-02 13:36:56.963 | INFO | metagpt.config:__init__:44 - Config loading done.\r\n2023-11-02 13:36:57.744 | INFO | __main__:search:20 - User: Which facial cleanser is good for oily skin?\r\n2023-11-02 13:36:57.745 | INFO | metagpt.roles.role:_act:167 - Xiaomei(Sales): ready to SearchAndSummarize\r\nTraceback (most recent call last):\r\n File \"/mnt/d/github_repo/MetaGPT/examples/search_kb.py\", line 26, in <module>\r\n asyncio.run(search())\r\n File \"/home/yhtao/anaconda3/envs/metagpt/lib/python3.10/asyncio/runners.py\", line 44, in run\r\n return loop.run_until_complete(main)\r\n File \"/home/yhtao/anaconda3/envs/metagpt/lib/python3.10/asyncio/base_events.py\", line 641, in run_until_complete\r\n return future.result()\r\n File \"/mnt/d/github_repo/MetaGPT/examples/search_kb.py\", line 21, in search\r\n result = await role.run(query)\r\n File \"/mnt/d/github_repo/MetaGPT/metagpt/roles/role.py\", line 240, in run\r\n rsp = await self._react()\r\n File \"/mnt/d/github_repo/MetaGPT/metagpt/roles/role.py\", line 209, in _react\r\n return await self._act()\r\n File \"/mnt/d/github_repo/MetaGPT/metagpt/roles/role.py\", line 168, in _act\r\n response = await self._rc.todo.run(self._rc.important_memory)\r\n File \"/mnt/d/github_repo/MetaGPT/metagpt/actions/search_and_summarize.py\", line 121, in run\r\n query = context[-1].content\r\nIndexError: list index out of range\r\n```", "pr_html_url": "https://github.com/geekan/MetaGPT/pull/501", "file_loc": {"base_commit": "b17846401ec7d12b73079fb21f3939ad9e9e2d70", "files": [{"path": "examples/search_kb.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [7, 11], "mod": [25]}, "(None, 'search', 14)": {"mod": [15, 18]}}}, {"path": "metagpt/actions/search_and_summarize.py", "status": "modified", "Loc": {}}, {"path": "metagpt/document_store/faiss_store.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [7], "mod": [81, 82, 83, 84, 85]}, "('FaissStore', None, 22)": {"add": [60], "mod": [23, 53]}}}, {"path": "metagpt/roles/sales.py", "status": "modified", "Loc": {"('Sales', '__init__', 14)": {"mod": [15, 16, 17, 18, 19, 20, 21, 22, 23, 24]}, "('Sales', '_set_store', 29)": {"mod": [31]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "3", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["metagpt/actions/search_and_summarize.py", "metagpt/document_store/faiss_store.py", "metagpt/roles/sales.py", "examples/search_kb.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "geekan", "repo_name": "MetaGPT", "base_commit": 
"bdba23e4225b3b77402c8725854668c2b84c5041", "iss_has_pr": 1, "iss_html_url": "https://github.com/geekan/MetaGPT/issues/1367", "iss_label": "", "title": "Dashscope service causing :ValueError: too many values to unpack (expected 11)", "body": "https://github.com/geekan/MetaGPT/blob/9f8f0a27fd3e7d6a7f6fcf40103a94829533bdc2/metagpt/provider/dashscope_api.py#L51\r\nWhen using DashScope service, in this line, the`_get_protocol_params`method returns 13 values but the unpack logic assumes that 11 values are returned, causing the ValueError: too many values to unpack (expected 11).\r\nA proper way that works for me is adding another two values in the unpacking logic:\r\n```python\r\n(\r\n api_protocol,\r\n ws_stream_mode,\r\n is_binary_input,\r\n http_method,\r\n stream,\r\n async_request,\r\n query,\r\n headers,\r\n request_timeout,\r\n form,\r\n resources,\r\n base_address,\r\n flattened_output\r\n ) = _get_protocol_params(kwargs)\r\n```\r\nThe version of dashscope package is `1.19.3`", "pr_html_url": "https://github.com/geekan/MetaGPT/pull/1496", "file_loc": {"base_commit": "bdba23e4225b3b77402c8725854668c2b84c5041", "files": [{"path": "metagpt/provider/dashscope_api.py", "status": "modified", "Loc": {"(None, 'build_api_arequest', 36)": {"add": [50], "mod": [54, 55, 57]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["metagpt/provider/dashscope_api.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "f201b2f5f32c2d48eab6632bf103e9b3a92fc999", "iss_has_pr": 1, "iss_html_url": "https://github.com/geekan/MetaGPT/issues/233", "iss_label": "", "title": "ImportError \u5f53\u8fd0\u884c startup.py", "body": "\u5f53\u6211\u5c1d\u8bd5\u8fd0\u884c `startup.py` \u65f6\uff0c\u6211\u9047\u5230\u4e86\u4e00\u4e2a ImportError\u3002\u6211\u5df2\u6309\u7167 `requirements.txt` \u4e2d\u5217\u51fa\u7684\u4f9d\u8d56\u8fdb\u884c\u4e86\u5b89\u88c5\uff0c\u5e76\u786e\u8ba4\u6211\u7684Python\u7248\u672c\u6ee1\u8db3\u9879\u76ee\u7684\u8981\u6c42\u3002\r\n\r\n\u5177\u4f53\u7684\u9519\u8bef\u6d88\u606f\u5982\u4e0b\uff1a\r\n\r\n2023-08-15 20:31:23.375 | INFO | metagpt.config:init:44 - Config loading done.\r\nTraceback (most recent call last):\r\nFile \"F:\\metaGPT\\metagpt\\startup.py\", line 7, in <module>\r\nfrom metagpt.roles import Architect, Engineer, ProductManager, ProjectManager, QaEngineer\r\nImportError: cannot import name 'ProductManager' from 'metagpt.roles' (F:\\metaGPT\\metagpt\\metagpt\\roles_init_.py)\r\n\r\n\r\n\u6211\u5df2\u7ecf\u5c1d\u8bd5\u91cd\u65b0\u5b89\u88c5\u6240\u6709\u4f9d\u8d56\u9879\uff0c\u5e76\u786e\u4fdd `pandas` \u548c\u5176\u4ed6\u5fc5\u8981\u7684\u5e93\u5df2\u6210\u529f\u5b89\u88c5\uff0c\u4f46\u95ee\u9898\u4ecd\u7136\u5b58\u5728\u3002\r\n\r\n\u8bf7\u95ee\u6709\u89e3\u51b3\u8fd9\u4e2a\u95ee\u9898\u7684\u5efa\u8bae\u5417\uff1f\r\n\r\n\u73af\u5883\u914d\u7f6e\uff1a\r\n- \u7cfb\u7edf\uff1aWindows 10\r\n- Python\u7248\u672c\uff1a3.11\r\n\r\n\u8c22\u8c22\u3002\r\n", "pr_html_url": "https://github.com/geekan/MetaGPT/pull/1253", "file_loc": {"base_commit": "f201b2f5f32c2d48eab6632bf103e9b3a92fc999", "files": [{"path": "metagpt/provider/openai_api.py", "status": "modified", "Loc": {"('OpenAILLM', '_achat_completion_stream', 89)": {"mod": [103]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", 
"iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["metagpt/provider/openai_api.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "6b70f7b0ed3c2215ffff500772e6ae4f8ce79c5a", "iss_has_pr": 1, "iss_html_url": "https://github.com/geekan/MetaGPT/issues/1257", "iss_label": "", "title": "stream field in LLMConfig does not work", "body": "**Bug description**\r\n~/.metagpt/config2.yaml\r\n```\r\nllm: \r\n stream: False\r\n```\r\ndoes not affect the way to call llm.aask.\r\n**Bug solved method**\r\n\r\n```\r\nclass BaseLLM(ABC):\r\n async def aask(\r\n self,\r\n msg: Union[str, list[dict[str, str]]],\r\n system_msgs: Optional[list[str]] = None,\r\n format_msgs: Optional[list[dict[str, str]]] = None,\r\n images: Optional[Union[str, list[str]]] = None,\r\n timeout=USE_CONFIG_TIMEOUT,\r\n stream=None,\r\n ) -> str:\r\n\r\n if stream is None:\r\n stream = config.llm.stream\r\n rsp = await self.acompletion_text(message, stream=stream, timeout=self.get_timeout(timeout))\r\n \r\n```\r\n\r\n", "pr_html_url": "https://github.com/geekan/MetaGPT/pull/1258", "file_loc": {"base_commit": "6b70f7b0ed3c2215ffff500772e6ae4f8ce79c5a", "files": [{"path": "metagpt/configs/llm_config.py", "status": "modified", "Loc": {"('LLMConfig', None, 41)": {"mod": [77]}}}, {"path": "metagpt/provider/base_llm.py", "status": "modified", "Loc": {"('BaseLLM', 'aask', 128)": {"add": [148], "mod": [135]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["metagpt/provider/base_llm.py", "metagpt/configs/llm_config.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "ddf4697381ec6a5e929669eff59e3e4953a6598e", "iss_has_pr": 1, "iss_html_url": "https://github.com/geekan/MetaGPT/issues/278", "iss_label": "", "title": "Can I use engine mode to connect azure-gpt?", "body": "<img width=\"590\" alt=\"image\" src=\"https://github.com/geekan/MetaGPT/assets/23121539/3ef72f57-b82b-4890-83ca-9c5044927ff1\">\r\n\r\n```\r\nresponse = openai.ChatCompletion.create(\r\n engine=\"gpt-4-chatbot-ui\",\r\n messages = [{\"role\":\"system\",\"content\":\"You are an AI assistant that helps people find information.\"},\r\n```\r\nOfficially supports engine mode, Can I use engine mode to connect azure-gpt?\r\n", "pr_html_url": "https://github.com/geekan/MetaGPT/pull/280", "file_loc": {"base_commit": "ddf4697381ec6a5e929669eff59e3e4953a6598e", "files": [{"path": "config/config.yaml", "status": "modified", "Loc": {"(None, None, None)": {"add": [19, 23]}}}, {"path": "metagpt/config.py", "status": "modified", "Loc": {"('Config', '__init__', 41)": {"add": [61]}}}, {"path": "metagpt/provider/openai_api.py", "status": "modified", "Loc": {"('OpenAIGPTAPI', None, 134)": {"add": [176]}, "('OpenAIGPTAPI', '_achat_completion_stream', 156)": {"mod": [165, 166, 167, 168]}, "('OpenAIGPTAPI', '_cons_kwargs', 176)": {"mod": [178, 179, 180, 181, 182, 183, 184, 185, 187, 188, 189, 190, 191, 192, 193, 194, 195]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "3", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["metagpt/provider/openai_api.py", "metagpt/config.py"], "doc": [], "test": [], "config": 
["config/config.yaml"], "asset": []}}, {"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "5e8bd105177e08848990d32b9ea636daa639be19", "iss_has_pr": 1, "iss_html_url": "https://github.com/geekan/MetaGPT/issues/1220", "iss_label": "", "title": "pydantic_core._pydantic_core.ValidationError: 1 validation error for Config", "body": "D:\\MetaGPT-main\\venv\\Scripts\\python.exe D:/MetaGPT-main/test.py\r\n2024-04-23 17:14:39.661 | INFO | metagpt.const:get_metagpt_package_root:29 - Package root set to D:\\MetaGPT-main\r\nTraceback (most recent call last):\r\n File \"D:\\MetaGPT-main\\test.py\", line 3, in <module>\r\n repo: ProjectRepo = generate_repo(\"Create a 2048 game\") # or ProjectRepo(\"<path>\")\r\n File \"D:\\MetaGPT-main\\metagpt\\software_company.py\", line 30, in generate_repo\r\n from metagpt.config2 import config\r\n File \"D:\\MetaGPT-main\\metagpt\\config2.py\", line 164, in <module>\r\n config = Config.default()\r\n File \"D:\\MetaGPT-main\\metagpt\\config2.py\", line 106, in default\r\n return Config(**final)\r\n File \"D:\\MetaGPT-main\\venv\\lib\\site-packages\\pydantic\\main.py\", line 164, in __init__\r\n __pydantic_self__.__pydantic_validator__.validate_python(data, self_instance=__pydantic_self__)\r\npydantic_core._pydantic_core.ValidationError: 1 validation error for Config\r\nllm.api_key\r\n Value error, Please set your API key in config2.yaml [type=value_error, input_value='YOUR_API_KEY', input_type=str]\r\n For further information visit https://errors.pydantic.dev/2.5/v/value_error\r\n\r\nI've written the config as to why this is still the case\uff1f\uff1f\uff1f\r\n![\u5fae\u4fe1\u56fe\u7247_20240423172525](https://github.com/geekan/MetaGPT/assets/91006305/804b3f17-98b4-4137-9487-475ead58560b)\r\n", "pr_html_url": "https://github.com/geekan/MetaGPT/pull/1324", "file_loc": {"base_commit": "5e8bd105177e08848990d32b9ea636daa639be19", "files": [{"path": "metagpt/configs/llm_config.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [15]}, "('LLMConfig', 'check_llm_key', 95)": {"mod": [97]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["metagpt/configs/llm_config.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "12948a5482bf4c6c79fb4c84f89bbad3600942e4", "iss_has_pr": 1, "iss_html_url": "https://github.com/geekan/MetaGPT/issues/1100", "iss_label": "bug", "title": "debate example fail to work with gemini", "body": "**Bug description**\r\ndebate example throws error with gemini-pro 1.5.\r\nWebsearch works with gemini-pro\r\n\r\n**Bug solved method**\r\n\r\n**Environment information**\r\nPython 3.9\r\nConda\r\n\r\n- LLM type and model name: Gemini-Pro\r\n- System version:\r\n- Python version: 3.9\r\n\r\n\r\n**Screenshots or logs**\r\npython3 debate.py \"Talk about Artificial General Intelligence\"\r\n2024-03-25 17:57:01.666 | INFO | metagpt.const:get_metagpt_package_root:29 - Package root set to /Users/samsaha2\r\n2024-03-25 17:57:03.800 | INFO | metagpt.team:invest:90 - Investment: $3.0.\r\n2024-03-25 17:57:03.801 | INFO | __main__:_act:63 - Biden(Democrat): to do SpeakAloud(SpeakAloud)\r\n2024-03-25 17:57:06.072 | WARNING | metagpt.utils.common:wrapper:572 - There is a exception in role's execution, in order to resume, we delete the newest role communication message in the role's memory.\r\n2024-03-25 
17:57:06.081 | ERROR | metagpt.utils.common:wrapper:554 - Exception occurs, start to serialize the project, exp:\r\nTraceback (most recent call last):\r\n File \"/Users/samsaha2/miniconda3/envs/metagpt/lib/python3.9/site-packages/metagpt/utils/common.py\", line 563, in wrapper\r\n return await func(self, *args, **kwargs)\r\n File \"/Users/samsaha2/miniconda3/envs/metagpt/lib/python3.9/site-packages/metagpt/roles/role.py\", line 558, in run\r\n rsp = await self.react()\r\nValueError: The `response.text` quick accessor only works for simple (single-`Part`) text responses. This response is not simple text.Use the `result.parts` accessor or the full `result.candidates[index].content.parts` lookup instead.\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/Users/samsaha2/miniconda3/envs/metagpt/lib/python3.9/site-packages/metagpt/utils/common.py\", line 549, in wrapper\r\n result = await func(self, *args, **kwargs)\r\n File \"/Users/samsaha2/miniconda3/envs/metagpt/lib/python3.9/site-packages/metagpt/team.py\", line 134, in run\r\n await self.env.run()\r\nException: Traceback (most recent call last):\r\n File \"/Users/samsaha2/miniconda3/envs/metagpt/lib/python3.9/site-packages/metagpt/utils/common.py\", line 563, in wrapper\r\n return await func(self, *args, **kwargs)\r\n File \"/Users/samsaha2/miniconda3/envs/metagpt/lib/python3.9/site-packages/metagpt/roles/role.py\", line 558, in run\r\n rsp = await self.react()\r\n File \"/Users/samsaha2/miniconda3/envs/metagpt/lib/python3.9/site-packages/metagpt/roles/role.py\", line 525, in react\r\n rsp = await self._react()\r\n File \"/Users/samsaha2/miniconda3/envs/metagpt/lib/python3.9/site-packages/metagpt/roles/role.py\", line 471, in _react\r\n rsp = await self._act()\r\n File \"/Users/samsaha2/debate.py\", line 70, in _act\r\n rsp = await todo.run(context=context, name=self.name, opponent_name=self.opponent_name)\r\n File \"/Users/samsaha2/debate.py\", line 41, in run\r\n rsp = await self._aask(prompt)\r\n File \"/Users/samsaha2/miniconda3/envs/metagpt/lib/python3.9/site-packages/metagpt/actions/action.py\", line 93, in _aask\r\n return await self.llm.aask(prompt, system_msgs)\r\n File \"/Users/samsaha2/miniconda3/envs/metagpt/lib/python3.9/site-packages/metagpt/provider/base_llm.py\", line 89, in aask\r\n rsp = await self.acompletion_text(message, stream=stream, timeout=timeout)\r\n File \"/Users/samsaha2/miniconda3/envs/metagpt/lib/python3.9/site-packages/tenacity/_asyncio.py\", line 88, in async_wrapped\r\n return await fn(*args, **kwargs)\r\n File \"/Users/samsaha2/miniconda3/envs/metagpt/lib/python3.9/site-packages/tenacity/_asyncio.py\", line 47, in __call__\r\n do = self.iter(retry_state=retry_state)\r\n File \"/Users/samsaha2/miniconda3/envs/metagpt/lib/python3.9/site-packages/tenacity/__init__.py\", line 314, in iter\r\n return fut.result()\r\n File \"/Users/samsaha2/miniconda3/envs/metagpt/lib/python3.9/concurrent/futures/_base.py\", line 439, in result\r\n return self.__get_result()\r\n File \"/Users/samsaha2/miniconda3/envs/metagpt/lib/python3.9/concurrent/futures/_base.py\", line 391, in __get_result\r\n raise self._exception\r\n File \"/Users/samsaha2/miniconda3/envs/metagpt/lib/python3.9/site-packages/tenacity/_asyncio.py\", line 50, in __call__\r\n result = await fn(*args, **kwargs)\r\n File \"/Users/samsaha2/miniconda3/envs/metagpt/lib/python3.9/site-packages/metagpt/provider/google_gemini_api.py\", line 147, in acompletion_text\r\n return await 
self._achat_completion_stream(messages)\r\n File \"/Users/samsaha2/miniconda3/envs/metagpt/lib/python3.9/site-packages/metagpt/provider/google_gemini_api.py\", line 127, in _achat_completion_stream\r\n content = chunk.text\r\n File \"/Users/samsaha2/miniconda3/envs/metagpt/lib/python3.9/site-packages/google/generativeai/types/generation_types.py\", line 328, in text\r\n raise ValueError(\r\nValueError: The `response.text` quick accessor only works for simple (single-`Part`) text responses. This response is not simple text.Use the `result.parts` accessor or the full `result.candidates[index].content.parts` lookup instead.\r\n\r\n\r\n", "pr_html_url": "https://github.com/geekan/MetaGPT/pull/1105", "file_loc": {"base_commit": "12948a5482bf4c6c79fb4c84f89bbad3600942e4", "files": [{"path": "metagpt/provider/google_gemini_api.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [4, 13], "mod": [6]}, "('GeminiLLM', '_achat_completion_stream', 138)": {"add": [152], "mod": [144]}}}, {"path": "requirements.txt", "status": "modified", "Loc": {"(None, None, None)": {"mod": [63]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["metagpt/provider/google_gemini_api.py"], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}}, {"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "02f999204009ae5cf78152a0fc47aa6ac98b4aa2", "iss_has_pr": 1, "iss_html_url": "https://github.com/geekan/MetaGPT/issues/475", "iss_label": "", "title": "tenacity.RetryError: RetryError[<Future at 0x7faeafc5ffd0 state=finished raised JSONDecodeError>]", "body": "When I set up everything and python3 startup.py \"Write a cli snake game\", I get this error.\r\n\r\n \"\"\",\r\n \"Logic Analysis\": [\r\n [\"main.py\",\"Contains the main game loop and user input handling\"],\r\n [\"game.py\",\"Contains the game logic, including the snake and food classes\"],\r\n [\"snake.py\",\"Contains the Snake class and its methods for moving and eating food\"],\r\n [\"food.py\",\"Contains the Food class and its method for generating new food\"]\r\n ],\r\n \"Task list\": [\r\n \"main.py\",\r\n \"game.py\",\r\n \"snake.py\",\r\n \"food.py\"\r\n ],\r\n \"Shared Knowledge\": \"\"\"\r\n 'game.py' contains the Game class, which manages the game state and controls the snake and food.\r\n 'snake.py' contains the Snake class, which represents the snake and its movements.\r\n 'food.py' contains the Food class, which represents the food and generates new food when eaten by the snake.\r\n \"\"\",\r\n \"Anything UNCLEAR\": \"We need to decide on the game's width and height, which will be specified in the API request when starting a new game.\"\r\n}\r\n[END]\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/user/anaconda3/envs/py11/lib/python3.11/site-packages/tenacity/_asyncio.py\", line 50, in __call__\r\n result = await fn(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/user/cx/qianwen/MetaGPT-main/metagpt/actions/action.py\", line 78, in _aask_v1\r\n parsed_data = CustomDecoder(strict=False).decode(content)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/user/cx/qianwen/MetaGPT-main/metagpt/utils/custom_decoder.py\", line 297, in decode\r\n return super().decode(s)\r\n ^^^^^^^^^^^^^^^^^\r\n File \"/home/user/anaconda3/envs/py11/lib/python3.11/json/decoder.py\", line 337, in decode\r\n obj, end = self.raw_decode(s, 
idx=_w(s, 0).end())\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/user/anaconda3/envs/py11/lib/python3.11/json/decoder.py\", line 355, in raw_decode\r\n raise JSONDecodeError(\"Expecting value\", s, err.value) from None\r\njson.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/user/cx/qianwen/MetaGPT-main/startup.py\", line 72, in <module>\r\n fire.Fire(main)\r\n File \"/home/user/anaconda3/envs/py11/lib/python3.11/site-packages/fire/core.py\", line 141, in Fire\r\n component_trace = _Fire(component, args, parsed_flag_args, context, name)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/user/anaconda3/envs/py11/lib/python3.11/site-packages/fire/core.py\", line 466, in _Fire\r\n component, remaining_args = _CallAndUpdateTrace(\r\n ^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/user/anaconda3/envs/py11/lib/python3.11/site-packages/fire/core.py\", line 681, in _CallAndUpdateTrace\r\n component = fn(*varargs, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/user/cx/qianwen/MetaGPT-main/startup.py\", line 68, in main\r\n asyncio.run(startup(idea, investment, n_round, code_review, run_tests, implement))\r\n File \"/home/user/anaconda3/envs/py11/lib/python3.11/asyncio/runners.py\", line 190, in run\r\n return runner.run(main)\r\n ^^^^^^^^^^^^^^^^\r\n File \"/home/user/anaconda3/envs/py11/lib/python3.11/asyncio/runners.py\", line 118, in run\r\n return self._loop.run_until_complete(task)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/user/anaconda3/envs/py11/lib/python3.11/asyncio/base_events.py\", line 653, in run_until_complete\r\n return future.result()\r\n ^^^^^^^^^^^^^^^\r\n File \"/home/user/cx/qianwen/MetaGPT-main/startup.py\", line 47, in startup\r\n await company.run(n_round=n_round)\r\n File \"/home/user/cx/qianwen/MetaGPT-main/metagpt/software_company.py\", line 60, in run\r\n await self.environment.run()\r\n File \"/home/user/cx/qianwen/MetaGPT-main/metagpt/environment.py\", line 67, in run\r\n await asyncio.gather(*futures)\r\n File \"/home/user/cx/qianwen/MetaGPT-main/metagpt/roles/role.py\", line 240, in run\r\n rsp = await self._react()\r\n ^^^^^^^^^^^^^^^^^^^\r\n File \"/home/user/cx/qianwen/MetaGPT-main/metagpt/roles/role.py\", line 209, in _react\r\n return await self._act()\r\n ^^^^^^^^^^^^^^^^^\r\n File \"/home/user/cx/qianwen/MetaGPT-main/metagpt/roles/role.py\", line 168, in _act\r\n response = await self._rc.todo.run(self._rc.important_memory)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/user/cx/qianwen/MetaGPT-main/metagpt/actions/project_management.py\", line 184, in run\r\n rsp = await self._aask_v1(prompt, \"task\", OUTPUT_MAPPING, format=format)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/user/anaconda3/envs/py11/lib/python3.11/site-packages/tenacity/_asyncio.py\", line 88, in async_wrapped\r\n return await fn(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/user/anaconda3/envs/py11/lib/python3.11/site-packages/tenacity/_asyncio.py\", line 47, in __call__\r\n do = self.iter(retry_state=retry_state)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/user/anaconda3/envs/py11/lib/python3.11/site-packages/tenacity/__init__.py\", line 326, in iter\r\n raise retry_exc from fut.exception()\r\ntenacity.RetryError: RetryError[<Future at 0x7faeafc5ffd0 state=finished raised 
JSONDecodeError>]\r\n", "pr_html_url": "https://github.com/geekan/MetaGPT/pull/500", "file_loc": {"base_commit": "02f999204009ae5cf78152a0fc47aa6ac98b4aa2", "files": [{"path": "config/config.yaml", "status": "modified", "Loc": {"(None, None, None)": {"add": [36, 96]}}}, {"path": "metagpt/actions/action.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [8, 18]}, "('Action', None, 21)": {"mod": [52]}, "('Action', '_aask_v1', 53)": {"mod": [66, 70, 71, 73, 74, 75, 76, 78, 83]}}}, {"path": "metagpt/config.py", "status": "modified", "Loc": {"('Config', '__init__', 41)": {"add": [48, 71, 95], "mod": [51, 52]}}}, {"path": "metagpt/llm.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [14], "mod": [9]}, "(None, 'LLM', 18)": {"add": [28], "mod": [23, 24]}}}, {"path": "metagpt/roles/role.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [21]}, "('Role', '_think', 185)": {"add": [195]}, "('RoleContext', None, 77)": {"mod": [82, 86]}, "('Role', '_init_actions', 123)": {"mod": [130, 131]}}}, {"path": "tests/metagpt/utils/test_custom_decoder.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [8, 39, 56]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["metagpt/llm.py", "metagpt/actions/action.py", "metagpt/config.py", "metagpt/roles/role.py"], "doc": [], "test": ["tests/metagpt/utils/test_custom_decoder.py"], "config": ["config/config.yaml"], "asset": []}}, {"organization": "langflow-ai", "repo_name": "langflow", "base_commit": "0d1c2914a4a601217fb59316e4bfd600b57fd655", "iss_has_pr": 1, "iss_html_url": "https://github.com/langflow-ai/langflow/issues/5438", "iss_label": "bug", "title": "Tweaks not passing into Text Input Components via API", "body": "### Bug Description\n\nFor an example flow with multiple text inputs, passing tweaks via the API doesn't pass the variables through correctly to the components and the flow.\r\nFlow Json: [Multi-input flow test.json](https://github.com/user-attachments/files/18242439/Multi-input.flow.test.json)\r\nAPI Request Body:\r\n{\r\n \"output_type\": \"text\",\r\n \"input_type\": \"text\",\r\n \"tweaks\": {\r\n \"TextInput-fDJCN\": {\r\n \"input_value\": \"Elon Musk\"\r\n },\r\n \"TextInput-GHYWO\": {\r\n \"input_value\": \"June 28, 1971\"\r\n }\r\n}}\r\nResponse: [tweaks-api-response.json](https://github.com/user-attachments/files/18242442/tweaks-api-response.json)\r\n\r\nFails only on dev/local; it seems to work on DataStax Langflow/cloud (https://astra.datastax.com/langflow/) with the same flow JSON and API request body.\r\n\n\n### Reproduction\n\n1. Clone/Fork latest code from https://github.com/langflow-ai/langflow or via cli (uv pip install langflow)\r\n2. Run langflow with make init from codebase or cli (uv run langflow run)\r\n3. Create new flow and import Flow Json above which has 2 text input components - make sure to set OpenAI API key so flow runs without failure\r\n4. 
Invoke flow via API and use tweaks, setting the following tweaks for the text input components input_value: \"TextInput-ABC\": {\r\n \"input_value\": \"Elon Musk\"\r\n },\r\n \"TextInput-XYZ\": {\r\n \"input_value\": \"June 28, 1971\"\r\n }\r\n\r\nExpected Results:\r\nAI Response provides a Christmas greeting for Elon and mentions his Birthday\r\n\r\nActual Response:\r\nPrompt asks to provide the name and birthday or uses placeholders\n\n### Expected behavior\n\nAI Response provides a Christmas greeting for Elon and mentions his Birthday\r\n\n\n### Who can help?\n\n@italojohnny , @oga\n\n### Operating System\n\nMac OS 15.2\n\n### Langflow Version\n\n1.1.1\n\n### Python Version\n\n3.11\n\n### Screenshot\n\nLocal/Dev API Response:\r\n<img width=\"872\" alt=\"Screenshot 2024-12-24 at 5 19 39\u202fPM\" src=\"https://github.com/user-attachments/assets/33766f09-5a72-43d4-9962-f83ead9ad303\" />\r\n\r\nDataStax Langflow / Cloud API Response:\r\n<img width=\"876\" alt=\"Screenshot 2024-12-24 at 5 19 48\u202fPM\" src=\"https://github.com/user-attachments/assets/a23c07f7-57de-42c4-a5c9-eb0d47381f83\" />\r\n\n\n### Flow File\n\n[Multi-input flow test.json](https://github.com/user-attachments/files/18242478/Multi-input.flow.test.json)\r\n", "pr_html_url": "https://github.com/langflow-ai/langflow/pull/5656", "file_loc": {"base_commit": "0d1c2914a4a601217fb59316e4bfd600b57fd655", "files": [{"path": "src/backend/base/langflow/api/v1/endpoints.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [91], "mod": [11, 30, 45, 47]}, "(None, 'validate_input_and_tweaks', 70)": {"mod": [75, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89]}, "(None, 'simple_run_flow', 93)": {"mod": [101, 102, 113, 114, 115, 116, 117, 118, 119]}}}, {"path": "src/backend/tests/unit/test_endpoints.py", "status": "modified", "Loc": {"(None, 'test_successful_run_no_payload', 275)": {"mod": [290]}, "(None, 'test_successful_run_with_output_type_text', 303)": {"mod": [321]}, "(None, 'test_successful_run_with_output_type_any', 334)": {"mod": [353]}, "(None, 'test_successful_run_with_output_type_debug', 366)": {"mod": [386]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["src/backend/base/langflow/api/v1/endpoints.py"], "doc": [], "test": ["src/backend/tests/unit/test_endpoints.py"], "config": [], "asset": []}}, {"organization": "langflow-ai", "repo_name": "langflow", "base_commit": "b2e40ec92f236043684ac542b9be1c77faa664fe", "iss_has_pr": 1, "iss_html_url": "https://github.com/langflow-ai/langflow/issues/2520", "iss_label": "bug", "title": "Langflow not loading all required Environment variable mentioned in LANGFLOW_VARIABLES_TO_GET_FROM_ENVIRONMENT", "body": "**Describe the bug**\r\nTrying to load 3 global variables from Environment. only the first variable is loaded and not the second\r\n\r\n**Browser and Version**\r\n - Browser: Edge\r\n - Version: 126.0.2592.81\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. my .env file: includes the lines:\r\n\r\n```python\r\n\r\nLANGFLOW_STORE_ENVIRONMENT_VARIABLES = true\r\nLANGFLOW_VARIABLES_TO_GET_FROM_ENVIRONMENT='[\"default_gcp_project\", \"default_gcp_location\", \"default_gcp_dataset\"]'\r\n```\r\n2. launch Langflow: `python -m langflow run --components-path /src_backend_platform/ --env-file /env/.env`\r\n3. 
Only the first variable is loaded in Langflow: `default_gcp_project`\r\n\r\n**Screenshots**\r\n\r\n![image](https://github.com/langflow-ai/langflow/assets/36305975/cd106870-c6e1-4e92-bb20-be1394c14621)\r\n", "pr_html_url": "https://github.com/langflow-ai/langflow/pull/2971", "file_loc": {"base_commit": "b2e40ec92f236043684ac542b9be1c77faa664fe", "files": [{"path": "src/backend/base/langflow/__main__.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [7, 12]}, "(None, 'run', 78)": {"mod": [130, 134, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 153, 154, 155, 156]}}}, {"path": "src/backend/base/langflow/services/settings/factory.py", "status": "modified", "Loc": {"('SettingsServiceFactory', None, 5)": {"add": [5]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["src/backend/base/langflow/__main__.py", "src/backend/base/langflow/services/settings/factory.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "langflow-ai", "repo_name": "langflow", "base_commit": "395c2d7372dffcf1d4f9577f623a2966183595d9", "iss_has_pr": 1, "iss_html_url": "https://github.com/langflow-ai/langflow/issues/1995", "iss_label": "bug", "title": "Unable to upload file to folder", "body": "**Describe the bug**\r\nUnable to upload a file to a folder due to a key error when selecting `folder_name` from the flow. I see that a recent commit has made it such that each flow will have a default folder name, if none exists, but this is not backwards-compatible with existing flows made in <=v1.0.0a38. Also - I would have expected the payload to send _my_ folder that I want the flow created in, not for it to create a folder in my store. \r\n\r\n(EDIT: I see that the recent change was just for making sure the flow was _in_ a folder, but does not export that folder name in the json, so that's an invalid concern. Seems we just may need to pass the selected folder to the /folders/upload endpoint)\r\n\r\nhttps://github.com/langflow-ai/langflow/blob/543e8d52afbb8e64ae22255909f4453484b2bb07/src/backend/base/langflow/api/v1/folders.py#L194\r\n\r\n**Browser and Version**\r\n - Browser [e.g. chrome, safari] firefox\r\n - Version [e.g. 22] v1.0.0a38\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. pip install \r\n2. run\r\n3. create folder\r\n4. upload a flow\r\n5. 
see logs\r\n\r\n**logs** \r\n\r\n```\r\n \u2502 langflow/.venv/lib/python3.11/site-packages/langflow/api/v1/folders.py:209 in upload_file \u2502 \r\n \u2502 \u2502 \r\n \u2502 206 \u2502 \u2502 \r\n \u2502 207 \u2502 folder_results = session.exec( \u2502 \r\n \u2502 208 \u2502 \u2502 select(Folder).where( \u2502 \r\n \u2502 \u2771 209 \u2502 \u2502 \u2502 Folder.name == data[\"folder_name\"], \u2502 \r\n \u2502 210 \u2502 \u2502 \u2502 Folder.user_id == current_user.id, \u2502 \r\n \u2502 211 \u2502 \u2502 ) \u2502 \r\n \u2502 212 \u2502 ) \u2502 \r\n \u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 \r\n KeyError: 'folder_name' \r\n```\r\n\r\n\r\n**Fix(?)** \r\n* Pass currently selected folder to payload when uploading\r\n* Use that as the folder name in `/folders/upload` \r\n* Add test for `/folders/upload`", "pr_html_url": "https://github.com/langflow-ai/langflow/pull/2125", "file_loc": {"base_commit": "395c2d7372dffcf1d4f9577f623a2966183595d9", "files": [{"path": "src/frontend/src/components/sidebarComponent/components/sideBarFolderButtons/index.tsx", "status": "modified", "Loc": {"(None, None, None)": {"mod": [126]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["src/frontend/src/components/sidebarComponent/components/sideBarFolderButtons/index.tsx"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "langflow-ai", "repo_name": "langflow", "base_commit": "9c69d134c9a4a34865d44e6f37a2c513c3a49969", "iss_has_pr": 1, "iss_html_url": "https://github.com/langflow-ai/langflow/issues/6880", "iss_label": "bug", "title": "Traces not being captured anymore", "body": "### Bug Description\n\nHello there,\n\nThe recent changes on the build process caused `end_all_traces` to no longer being called, as a result the final call to send the traces for langwatch, langfuse etc are not being called:\n\nhttps://github.com/langflow-ai/langflow/pull/5940#issuecomment-2685617499\n\n### Reproduction\n\n1. Set up LANGWATCH_API_KEY\n2. Traces are not arriving to langwatch\n3. Add print statements on `end_all_traces`\n4. 
Verify it's not being called\n\n### Expected behavior\n\n`end_all_traces` should be called during executions on the langflow canvas, playground and api\n\n### Who can help?\n\n@edwinjosechittilappilly @ogabrielluiz @italojohnny \n\n### Operating System\n\nMac OSX\n\n### Langflow Version\n\n15.3\n\n### Python Version\n\n3.11\n\n### Screenshot\n\n_No response_\n\n### Flow File\n\n_No response_", "pr_html_url": "https://github.com/langflow-ai/langflow/pull/6991", "file_loc": {"base_commit": "9c69d134c9a4a34865d44e6f37a2c513c3a49969", "files": [{"path": "src/backend/base/langflow/api/build.py", "status": "modified", "Loc": {"(None, 'generate_flow_events', 145)": {"add": [427]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["src/backend/base/langflow/api/build.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "langflow-ai", "repo_name": "langflow", "base_commit": "b1a552fa9ed7d4c4eabb90642f4b81f24775f676", "iss_has_pr": 1, "iss_html_url": "https://github.com/langflow-ai/langflow/issues/4212", "iss_label": "bug", "title": "Langflow traces are not well reflected in Langfuse", "body": "### Bug Description\r\n\r\nNot able to see Langflow components inputs and outputs in Langfuse tracing. \r\nFor instance, for a given component, you will see the component python code instead of the input data. And the Output is completely empty. \r\n\r\nSee screenshot below.\r\n\r\n### Reproduction\r\n\r\n1. create a simple flow with chat input, prompt, and a chat output! flow example [reproduce _error_langfuse (1).json](https://github.com/user-attachments/files/17446337/reproduce._error_langfuse.1.json)\r\n2. connect to Langfuse, and see trace.\r\n3. notice that components input are tracking the python code of the component and not the data inputted\r\n4. 
notice that the component output is empty\r\n\r\nsee screenshot:\r\n![image](https://github.com/user-attachments/assets/9461c3f1-cf90-430a-8d38-54ab61418e99)\r\n\r\n\r\n### Expected behavior\r\n\r\nI'm expecting to see input data (not the python code) and output data of each component.\r\n\r\n### Who can help?\r\n\r\n@italojohnny \r\n\r\n### Operating System\r\n\r\nUbuntu\r\n\r\n### Langflow Version\r\n\r\n1.0.19\r\n\r\n### Python Version\r\n\r\n3.10\r\n\r\n### Screenshot\r\n\r\n![image](https://github.com/user-attachments/assets/9461c3f1-cf90-430a-8d38-54ab61418e99)\r\n\r\n### Flow File\r\n\r\n[reproduce _error_langfuse (1).json](https://github.com/user-attachments/files/17446337/reproduce._error_langfuse.1.json)\r\n", "pr_html_url": "https://github.com/langflow-ai/langflow/pull/4669", "file_loc": {"base_commit": "b1a552fa9ed7d4c4eabb90642f4b81f24775f676", "files": [{"path": "src/backend/base/langflow/api/utils.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [144], "mod": [20]}, "(None, 'build_graph_from_data', 145)": {"add": [146]}}}, {"path": "src/backend/base/langflow/api/v1/chat.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [13], "mod": [45]}, "(None, 'build_graph_and_get_order', 161)": {"mod": [169]}}}, {"path": "src/backend/base/langflow/custom/custom_component/component.py", "status": "modified", "Loc": {"('Component', 'get_trace_as_inputs', 798)": {"mod": [804, 805, 806]}}}, {"path": "src/backend/base/langflow/services/tracing/langwatch.py", "status": "modified", "Loc": {"('LangWatchTracer', '__init__', 26)": {"add": [43]}}}, {"path": "src/backend/base/langflow/services/tracing/service.py", "status": "modified", "Loc": {"('TracingService', '_end_traces', 174)": {"add": [186]}, "('TracingService', '_end_all_traces', 188)": {"add": [194]}, "('TracingService', 'end', 196)": {"mod": [198]}, "('TracingService', '_end_and_reset', 235)": {"mod": [239]}}}, {"path": "src/backend/tests/unit/events/test_event_manager.py", "status": "modified", "Loc": {"('TestEventManager', None, 11)": {"mod": [39, 40, 41, 42, 44, 45, 46, 47, 48, 49, 50, 51, 52]}, "('TestEventManager', 'test_handling_large_number_of_events', 72)": {"mod": [73]}, "('TestEventManager', 'test_performance_impact_frequent_registrations', 136)": {"mod": [137]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["src/backend/base/langflow/api/v1/chat.py", "src/backend/base/langflow/api/utils.py", "src/backend/base/langflow/services/tracing/langwatch.py", "src/backend/base/langflow/custom/custom_component/component.py", "src/backend/base/langflow/services/tracing/service.py"], "doc": [], "test": ["src/backend/tests/unit/events/test_event_manager.py"], "config": [], "asset": []}}, {"organization": "langflow-ai", "repo_name": "langflow", "base_commit": "06ea6c408bd5da392aa3650f9d04be0804742525", "iss_has_pr": 1, "iss_html_url": "https://github.com/langflow-ai/langflow/issues/1890", "iss_label": "bug", "title": "getting a column size error when using MySQL.", "body": "**Describe the bug**\r\nI'm getting a column size error when using MySQL.\r\nIt seems to be executed during the initial setup of starter_projects\r\n\r\n\r\n```\r\nsqlalchemy.exc.DataError: (raised as a result of Query-invoked autoflush; consider using a session.no_autoflush block if this flush is occurring prematurely)\r\n (pymysql.err.DataError) (1406, \"Data too long for column 
'description' at row 1\")\r\n [SQL: INSERT INTO flow (name, description, icon, icon_bg_color, is_component, updated_at, folder, id, data, user_id) VALUES (%(name)s, %(description)s, %(icon)s,\r\n %(icon_bg_color)s, %(is_component)s, %(updated_at)s, %(folder)s, %(id)s, %(data)s, %(user_id)s)]\r\n [parameters: {'name': 'Basic Prompting (Hello, World)', 'description': 'This flow will get you experimenting with the basics of the UI, the Chat and the Prompt component. \r\n \\n\\nTry changing the Template in it to see how the ... (26 characters truncated) ... change it to this and a Text Input into the `type_of_person` variable : \"Answer the \r\n user as if you were a pirate.\\n\\nUser: {user_input}\\n\\nAnswer: \" ', 'icon': '', 'icon_bg_color': None, 'is_component': 0, 'updated_at': datetime.datetime(2024, 5, 14, 9,\r\n 36, 35, 63929, tzinfo=datetime.timezone.utc), 'folder': 'Starter Projects', 'id': '5d216c873f0f4c5a98ec1c85438a90f4', 'data': '{\"nodes\": [{\"id\": \"Prompt-uxBqP\", \"type\": \r\n \"genericNode\", \"position\": {\"x\": 53.588791333410654, \"y\": -107.07318910019967}, \"data\": {\"type\": \"Prompt\", ... (24099 characters truncated) ... \r\n 153Text\\\\u0153],\\\\u0153type\\\\u0153:\\\\u0153str\\\\u0153}\"}], \"viewport\": {\"x\": 260.58251815500563, \"y\": 318.2261172111936, \"zoom\": 0.43514115784696294}}', 'user_id': None}]\r\n (Background on this error at: https://sqlalche.me/e/20/9h9h)\r\n```\r\n**Screenshots**\r\n\r\n\r\n**Additional context**\r\nplz help\r\n", "pr_html_url": "https://github.com/langflow-ai/langflow/pull/3431", "file_loc": {"base_commit": "06ea6c408bd5da392aa3650f9d04be0804742525", "files": [{"path": "src/backend/base/langflow/services/database/models/flow/model.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [13]}, "('FlowBase', None, 26)": {"mod": [28]}}}, {"path": "src/backend/base/langflow/services/database/models/folder/model.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [4, 5]}, "('FolderBase', None, 14)": {"mod": [16]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["src/backend/base/langflow/services/database/models/flow/model.py", "src/backend/base/langflow/services/database/models/folder/model.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "Significant-Gravitas", "repo_name": "AutoGPT", "base_commit": "ee8276c6b9ac8abcf847a12bc5e6cb5e66079115", "iss_has_pr": 1, "iss_html_url": "https://github.com/Significant-Gravitas/AutoGPT/issues/655", "iss_label": "", "title": "Trivial focused PRs", "body": "### Duplicates\r\n\r\n- [X] I have searched the existing issues\r\n\r\n### Summary \ud83d\udca1\r\n\r\nThere are many trivial PRs which are harmless and focused enough, so it's good to make them merged in a batch to reduce the backlog. These are ALL minor mergeable fixes at the moment (2023-04-10T10:57Z)\r\n\r\n\r\n### Examples \ud83c\udf08\r\n\r\n- #33 - The oldest open PR. Now made focused, only adds documentation strings, with very minor extra whitespace\r\n- #115 - Better error messages if OpenAI keys are missing from the config. A single check at the startup. Safe and focused.\r\n- #126 - Just a typo in Readme. 3 chars diff, 1 line\r\n- #179 - Windows beginners don't like $ in the commandline example. 2 chars 1 line diff.\r\n- #226 - Just a single COPY Dockerfile instruction people repeat in PRs over and over. 
1 line added diff\r\n- #317 - A trivial improvement of the Readme.\r\n- #338 - A trivial safety warning in the Readme. 1 line diff\r\n- #378 - A trivial typo (and the file ending change is ok, some editors correctly keep fixing it)\r\n- #421 - Documentation for 2 classes. Minor extra whitespace fixes, can be merged IMO\r\n- #457 - A pretty small addition of the debug option\r\n- #579 - A trivial link mistake. 5 char diff\r\n- #590 - The whitespace fixes everybody keeps repeating.\r\n- #611 - A single line addition to .gitignore everybody keeps repeating\r\n- #615 - A single line 7 character fix to Windows setx invocation in the Readme\r\n- #649 - A minor fixup to already merged #575. 10 chars on one line.\r\n\r\n\r\n### Motivation \ud83d\udd26\r\n\r\nWe need to improve the backlog by merging as many PRs a day as possible. Many PRs were bad but thanks to our efforts are pinpointed now. Also, big changes tend to include these small fixes, so by applying the small ones we will improve the big ones too.", "pr_html_url": "https://github.com/Significant-Gravitas/AutoGPT/pull/33", "file_loc": {"base_commit": "da6c0240de37725780f59eb8da7c36a9e810ae5c", "files": [{"path": "scripts/agent_manager.py", "status": "modified", "Loc": {"(None, 'create_agent', 9)": {"add": [9]}, "(None, 'message_agent', 34)": {"add": [34]}, "(None, 'list_agents', 54)": {"add": [54]}, "(None, 'delete_agent', 61)": {"add": [61]}}}, {"path": "scripts/ai_config.py", "status": "modified", "Loc": {"('AIConfig', None, 5)": {"add": [5, 6, 29, 34]}, "('AIConfig', 'load', 15)": {"mod": [16]}}}, {"path": "scripts/ai_functions.py", "status": "modified", "Loc": {"(None, 'evaluate_code', 10)": {"add": [10]}, "(None, 'improve_code', 22)": {"add": [22]}, "(None, 'write_tests', 36)": {"add": [36]}}}, {"path": "scripts/browse.py", "status": "modified", "Loc": {"(None, 'scrape_text', 8)": {"add": [8]}, "(None, 'extract_hyperlinks', 35)": {"add": [35]}, "(None, 'format_hyperlinks', 42)": {"add": [42]}, "(None, 'scrape_links', 49)": {"add": [49]}, "(None, 'split_text', 66)": {"add": [66]}, "(None, 'create_message', 84)": {"add": [84]}, "(None, 'summarize_text', 90)": {"add": [90]}}}, {"path": "scripts/call_ai_function.py", "status": "modified", "Loc": {"(None, 'call_ai_function', 8)": {"add": [8]}}}, {"path": "scripts/chat.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [10], "mod": [7]}, "(None, 'chat_with_ai', 44)": {"add": [50]}}}, {"path": "scripts/commands.py", "status": "modified", "Loc": {"(None, 'get_command', 27)": {"add": [27]}, "(None, 'execute_command', 55)": {"add": [55]}, "(None, 'get_datetime', 120)": {"add": [120]}, "(None, 'google_search', 125)": {"add": [125]}, "(None, 'google_official_search', 132)": {"add": [132]}, "(None, 'browse_website', 167)": {"add": [167]}, "(None, 'get_text_summary', 180)": {"add": [180]}, "(None, 'get_hyperlinks', 186)": {"add": [186]}, "(None, 'commit_memory', 191)": {"add": [191]}, "(None, 'delete_memory', 197)": {"add": [197]}, "(None, 'overwrite_memory', 208)": {"add": [208]}, "(None, 'shutdown', 234)": {"add": [234]}, "(None, 'start_agent', 239)": {"add": [239]}, "(None, 'message_agent', 262)": {"add": [262]}, "(None, 'list_agents', 280)": {"add": [280]}, "(None, 'delete_agent', 284)": {"add": [284]}}}, {"path": "scripts/config.py", "status": "modified", "Loc": {"('Singleton', None, 9)": {"add": [16]}, "('Config', None, 28)": {"add": [33, 79, 82, 85, 88, 91, 94, 97, 100, 103, 106, 109, 112, 115]}}}, {"path": "scripts/data.py", "status": "modified", "Loc": {"(None, 
'load_prompt', 4)": {"add": [4]}}}, {"path": "scripts/execute_code.py", "status": "modified", "Loc": {"(None, 'execute_python_file', 5)": {"add": [5]}}}, {"path": "scripts/file_operations.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [6]}, "(None, 'safe_join', 11)": {"add": [11]}, "(None, 'read_file', 21)": {"add": [21]}, "(None, 'write_to_file', 31)": {"add": [31]}, "(None, 'append_to_file', 44)": {"add": [44]}, "(None, 'delete_file', 54)": {"add": [54]}}}, {"path": "scripts/json_parser.py", "status": "modified", "Loc": {"(None, 'fix_and_parse_json', 29)": {"add": [32]}, "(None, 'fix_json', 74)": {"add": [74]}}}, {"path": "scripts/llm_utils.py", "status": "modified", "Loc": {"(None, 'create_chat_completion', 8)": {"add": [8]}}}, {"path": "scripts/main.py", "status": "modified", "Loc": {"(None, 'print_to_console', 21)": {"add": [27]}, "(None, 'print_assistant_thoughts', 48)": {"add": [48]}, "(None, 'construct_prompt', 161)": {"add": [161]}, "(None, 'prompt_user', 189)": {"add": [189]}, "(None, 'parse_arguments', 241)": {"add": [241]}, "(None, 'load_variables', 107)": {"mod": [108]}}}, {"path": "scripts/speak.py", "status": "modified", "Loc": {"(None, 'eleven_labs_speech', 17)": {"add": [17]}}}, {"path": "scripts/spinner.py", "status": "modified", "Loc": {"('Spinner', None, 7)": {"add": [7, 8, 15, 22, 27]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["scripts/llm_utils.py", "scripts/file_operations.py", "scripts/commands.py", "scripts/data.py", "scripts/speak.py", "scripts/json_parser.py", "scripts/config.py", "scripts/main.py", "scripts/call_ai_function.py", "scripts/ai_functions.py", "scripts/chat.py", "scripts/spinner.py", "scripts/execute_code.py", "scripts/ai_config.py", "scripts/agent_manager.py", "scripts/browse.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "Significant-Gravitas", "repo_name": "AutoGPT", "base_commit": "9150f32f8b8602395534795ddd2d930a1684e419", "iss_has_pr": 1, "iss_html_url": "https://github.com/Significant-Gravitas/AutoGPT/issues/4634", "iss_label": "", "title": "Ver 0.4, and still get the error of \"This model's maximum context length is 4097 tokens\"", "body": "### \u26a0\ufe0f Search for existing issues first \u26a0\ufe0f\n\n- [X] I have searched the existing issues, and there is no existing issue for my problem\n\n### Which Operating System are you using?\n\nWindows\n\n### Which version of Auto-GPT are you using?\n\nLatest Release\n\n### Do you use OpenAI GPT-3 or GPT-4?\n\nGPT-3.5\n\n### Which area covers your issue best?\n\nCommands\n\n### Describe your issue.\n\nAutoGPT crashes in the same scenario as v0.3.1 when working with \"large\" local files.\n\n### Upload Activity Log Content\n\n_No response_\n\n### Upload Error Log Content\n\nTraceback (most recent call last): File \"/usr/local/lib/python3.10/runpy.py\", line 196, in _run_module_as_main return _run_code(code, main_globals, None, File \"/usr/local/lib/python3.10/runpy.py\", line 86, in _run_code exec(code, run_globals) File \"/workspace/Auto-GPT/autogpt/__main__.py\", line 5, in <module> autogpt.cli.main() File \"/home/vscode/.local/lib/python3.10/site-packages/click/core.py\", line 1130, in __call__ return self.main(*args, **kwargs) File \"/home/vscode/.local/lib/python3.10/site-packages/click/core.py\", line 1055, in main rv = self.invoke(ctx) File 
\"/home/vscode/.local/lib/python3.10/site-packages/click/core.py\", line 1635, in invoke rv = super().invoke(ctx) File \"/home/vscode/.local/lib/python3.10/site-packages/click/core.py\", line 1404, in invoke return ctx.invoke(self.callback, **ctx.params) File \"/home/vscode/.local/lib/python3.10/site-packages/click/core.py\", line 760, in invoke return __callback(*args, **kwargs) File \"/home/vscode/.local/lib/python3.10/site-packages/click/decorators.py\", line 26, in new_func return f(get_current_context(), *args, **kwargs) File \"/workspace/Auto-GPT/autogpt/cli.py\", line 96, in main run_auto_gpt( File \"/workspace/Auto-GPT/autogpt/main.py\", line 197, in run_auto_gpt agent.start_interaction_loop() File \"/workspace/Auto-GPT/autogpt/agent/agent.py\", line 130, in start_interaction_loop assistant_reply = chat_with_ai( File \"/workspace/Auto-GPT/autogpt/llm/chat.py\", line 112, in chat_with_ai new_summary_message, trimmed_messages = agent.history.trim_messages( File \"/workspace/Auto-GPT/autogpt/memory/message_history.py\", line 79, in trim_messages new_summary_message = self.update_running_summary( File \"/workspace/Auto-GPT/autogpt/memory/message_history.py\", line 194, in update_running_summary self.summary = create_chat_completion(prompt) File \"/workspace/Auto-GPT/autogpt/llm/utils/__init__.py\", line 53, in metered_func return func(*args, **kwargs) File \"/workspace/Auto-GPT/autogpt/llm/utils/__init__.py\", line 87, in _wrapped return func(*args, **kwargs) File \"/workspace/Auto-GPT/autogpt/llm/utils/__init__.py\", line 235, in create_chat_completion response = api_manager.create_chat_completion( File \"/workspace/Auto-GPT/autogpt/llm/api_manager.py\", line 61, in create_chat_completion response = openai.ChatCompletion.create( File \"/home/vscode/.local/lib/python3.10/site-packages/openai/api_resources/chat_completion.py\", line 25, in create return super().create(*args, **kwargs) File \"/home/vscode/.local/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py\", line 153, in create response, _, api_key = requestor.request( File \"/home/vscode/.local/lib/python3.10/site-packages/openai/api_requestor.py\", line 226, in request resp, got_stream = self._interpret_response(result, stream) File \"/home/vscode/.local/lib/python3.10/site-packages/openai/api_requestor.py\", line 619, in _interpret_response self._interpret_response_line( File \"/home/vscode/.local/lib/python3.10/site-packages/openai/api_requestor.py\", line 682, in _interpret_response_line raise self.handle_error_response( openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 4904 tokens. Please reduce the length of the messages. 
Press any key to continue...", "pr_html_url": "https://github.com/Significant-Gravitas/AutoGPT/pull/4652", "file_loc": {"base_commit": "9150f32f8b8602395534795ddd2d930a1684e419", "files": [{"path": "autogpt/memory/message_history.py", "status": "modified", "Loc": {"('MessageHistory', 'update_running_summary', 123)": {"add": [169], "mod": [172, 179, 180, 181, 182, 183, 204]}, "(None, None, None)": {"mod": [17]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["autogpt/memory/message_history.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "Significant-Gravitas", "repo_name": "AutoGPT", "base_commit": "a0ecb969589ac5f5172fb543190ca7ecf4803059", "iss_has_pr": 1, "iss_html_url": "https://github.com/Significant-Gravitas/AutoGPT/issues/8887", "iss_label": "good first issue", "title": "Move to a single source of truth for docs \u2014 Remove duplicate info from readme", "body": "autogpt_platform\\\\backend\\\\README.advanced.md and autogpt_platform\\\\backend\\\\README.md. We should just point people to the docs directory (docs/platform/advanced_setup|getting-started) in these. Check the content is all in that file, and the normal getting started, then remove these two files and replace with a link to the docs site \u2014 the dev-docs.agpt.co and docs.agpt.co. Call out both and their branch match for released master vs dev branch", "pr_html_url": "https://github.com/Significant-Gravitas/AutoGPT/pull/9580", "file_loc": {"base_commit": "a0ecb969589ac5f5172fb543190ca7ecf4803059", "files": [{"path": "autogpt_platform/backend/README.advanced.md", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1, 3, 5, 7, 9, 10, 11, 12, 14, 15, 16, 17, 19, 21, 22, 23, 25, 27, 28, 29, 31, 33, 34, 35, 37, 39, 40, 41, 44, 45, 46, 47, 48, 49, 50, 51, 53, 55, 56, 57, 58, 60, 62, 63, 64, 65, 67, 69, 71, 73, 74, 75]}}}, {"path": "autogpt_platform/backend/README.md", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1, 3, 4, 6, 8, 10, 12, 14, 15, 16, 17, 19, 20, 21, 22, 24, 26, 27, 28, 30, 32, 33, 34, 36, 38, 39, 40, 42, 44, 45, 46, 49, 50, 51, 52, 53, 54, 55, 56, 58, 60, 61, 62, 63, 65, 67, 69, 71, 72, 73, 75, 77, 78, 79, 80, 81, 83, 85, 87, 88, 89, 91, 93, 94, 95, 97, 99, 100, 101, 103, 105, 106, 107, 109, 111, 112, 113, 115, 117, 119, 120, 121, 123, 125, 126, 128, 129, 130, 131, 133, 134, 135, 136, 138, 139, 140, 141, 143, 145, 147, 149, 151, 153, 154, 155, 156, 157, 158, 159, 161, 163, 164, 165, 166, 168, 170, 171, 172, 174, 176, 177, 178, 179, 181, 183, 185, 186, 187, 189, 190, 192, 195, 197, 198, 199, 201, 203, 204, 205, 206, 207, 208, 209, 210]}}}, {"path": "docs/content/platform/advanced_setup.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [66]}}}, {"path": "docs/content/platform/getting-started.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [63, 148], "mod": [26, 45, 132]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": ["autogpt_platform/backend/README.md", "docs/content/platform/getting-started.md", "autogpt_platform/backend/README.advanced.md", "docs/content/platform/advanced_setup.md"], "test": [], "config": [], "asset": []}}, {"organization": "Significant-Gravitas", "repo_name": 
"AutoGPT", "base_commit": "f9d8f728fa3c60dba80cc1c69dfef8bf748eaec4", "iss_has_pr": 1, "iss_html_url": "https://github.com/Significant-Gravitas/AutoGPT/issues/711", "iss_label": "", "title": "Not creating pinecone index", "body": "### Duplicates\n\n- [X] I have searched the existing issues\n\n### Steps to reproduce \ud83d\udd79\n\nI by mistake deleted my pinecone index. When i fire up auto-gpt it does not make me a new one but functions correctly. I have tried starting fresh, but same issue. Works fine but is not triggering pinecone.\n\n### Current behavior \ud83d\ude2f\n\ni download a fresh copy and fill in the .env and it starts but does not create a pinecone index.\n\n### Expected behavior \ud83e\udd14\n\nEvery other time ive loaded it it creates the index and then communicates with it.\n\n### Your prompt \ud83d\udcdd\n\n```yaml\r\n# Paste your prompt here\r\n```", "pr_html_url": "https://github.com/Significant-Gravitas/AutoGPT/pull/794", "file_loc": {"base_commit": "f9d8f728fa3c60dba80cc1c69dfef8bf748eaec4", "files": [{"path": ".env.template", "status": "modified", "Loc": {"(None, None, None)": {"add": [19]}}}, {"path": "README.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [232]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [".env.template"], "asset": []}}, {"organization": "Significant-Gravitas", "repo_name": "AutoGPT", "base_commit": "c6d90227fecec8acc1481c486a91337b07e8a820", "iss_has_pr": 1, "iss_html_url": "https://github.com/Significant-Gravitas/AutoGPT/issues/402", "iss_label": "", "title": "Pinecone Error", "body": "### Duplicates\n\n- [X] I have searched the existing issues\n\n### Steps to reproduce \ud83d\udd79\n\n`\r\nTraceback (most recent call last):\r\n File \"scripts/main.py\", line 286, in <module>\r\n memory = PineconeMemory()\r\n File \"/Users/areebpasha/Desktop/Auto GPT copy/Auto-GPT/scripts/config.py\", line 17, in __call__\r\n cls._instances[cls] = super(\r\n File \"/Users/areebpasha/Desktop/Auto GPT copy/Auto-GPT/scripts/memory.py\", line 30, in __init__\r\n if table_name not in pinecone.list_indexes():\r\n File \"/opt/anaconda3/lib/python3.8/site-packages/pinecone/manage.py\", line 185, in list_indexes\r\n response = api_instance.list_indexes()\r\n File \"/opt/anaconda3/lib/python3.8/site-packages/pinecone/core/client/api_client.py\", line 776, in __call__\r\n return self.callable(self, *args, **kwargs)\r\n File \"/opt/anaconda3/lib/python3.8/site-packages/pinecone/core/client/api/index_operations_api.py\", line 1132, in __list_indexes\r\n return self.call_with_http_info(**kwargs)\r\n File \"/opt/anaconda3/lib/python3.8/site-packages/pinecone/core/client/api_client.py\", line 838, in call_with_http_info\r\n return self.api_client.call_api(\r\n File \"/opt/anaconda3/lib/python3.8/site-packages/pinecone/core/client/api_client.py\", line 413, in call_api\r\n return self.__call_api(resource_path, method,\r\n File \"/opt/anaconda3/lib/python3.8/site-packages/pinecone/core/client/api_client.py\", line 200, in __call_api\r\n response_data = self.request(\r\n File \"/opt/anaconda3/lib/python3.8/site-packages/pinecone/core/client/api_client.py\", line 439, in request\r\n return self.rest_client.GET(url,\r\n File \"/opt/anaconda3/lib/python3.8/site-packages/pinecone/core/client/rest.py\", line 236, in GET\r\n return self.request(\"GET\", 
url,\r\n File \"/opt/anaconda3/lib/python3.8/site-packages/pinecone/core/client/rest.py\", line 202, in request\r\n r = self.pool_manager.request(method, url,\r\n File \"/opt/anaconda3/lib/python3.8/site-packages/urllib3/request.py\", line 74, in request\r\n return self.request_encode_url(\r\n File \"/opt/anaconda3/lib/python3.8/site-packages/urllib3/request.py\", line 96, in request_encode_url\r\n return self.urlopen(method, url, **extra_kw)\r\n File \"/opt/anaconda3/lib/python3.8/site-packages/urllib3/poolmanager.py\", line 376, in urlopen\r\n response = conn.urlopen(method, u.request_uri, **kw)\r\n File \"/opt/anaconda3/lib/python3.8/site-packages/urllib3/connectionpool.py\", line 703, in urlopen\r\n httplib_response = self._make_request(\r\n File \"/opt/anaconda3/lib/python3.8/site-packages/urllib3/connectionpool.py\", line 398, in _make_request\r\n conn.request(method, url, **httplib_request_kw)\r\n File \"/opt/anaconda3/lib/python3.8/site-packages/urllib3/connection.py\", line 239, in request\r\n super(HTTPConnection, self).request(method, url, body=body, headers=headers)\r\n File \"/opt/anaconda3/lib/python3.8/http/client.py\", line 1255, in request\r\n self._send_request(method, url, body, headers, encode_chunked)\r\n File \"/opt/anaconda3/lib/python3.8/http/client.py\", line 1296, in _send_request\r\n self.putheader(hdr, value)\r\n File \"/opt/anaconda3/lib/python3.8/site-packages/urllib3/connection.py\", line 224, in putheader\r\n _HTTPConnection.putheader(self, header, *values)\r\n File \"/opt/anaconda3/lib/python3.8/http/client.py\", line 1232, in putheader\r\n if _is_illegal_header_value(values[i]):\r\nTypeError: expected string or bytes-like object\r\n`\n\n### Current behavior \ud83d\ude2f\n\nDoes not produce output. \n\n### Expected behavior \ud83e\udd14\n\nShould work as shown in the demo. Any assistance is greatly appreciated.\n\n### Your prompt \ud83d\udcdd\n\n```yaml\r\n# Paste your prompt here\r\n```", "pr_html_url": "https://github.com/Significant-Gravitas/AutoGPT/pull/440", "file_loc": {"base_commit": "c6d90227fecec8acc1481c486a91337b07e8a820", "files": [{"path": "README.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [61]}}}, {"path": "scripts/main.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [283]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["scripts/main.py"], "doc": ["README.md"], "test": [], "config": [], "asset": []}}, {"organization": "Significant-Gravitas", "repo_name": "AutoGPT", "base_commit": "ad7cefa10c0647feee85114d58559fcf83ba6743", "iss_has_pr": 1, "iss_html_url": "https://github.com/Significant-Gravitas/AutoGPT/issues/1821", "iss_label": "", "title": "SYSTEM: Command browse_website returned: Error: Message: Service /home/appuser/.wdm/drivers/chromedriver/linux64/112.0.5615.49/chromedriver unexpectedly exited. Status code was: 127", "body": "### Duplicates\n\n- [X] I have searched the existing issues\n\n### Steps to reproduce \ud83d\udd79\n\n1. Build a dockerized version of auto-gpt\r\n2. Give it any instruction that leads to it needing to browse_website\n\n### Current behavior \ud83d\ude2f\n\nSYSTEM: Command browse_website returned: Error: Message: Service /home/appuser/.wdm/drivers/chromedriver/linux64/112.0.5615.49/chromedriver unexpectedly exited. 
Status code was: 127\n\n### Expected behavior \ud83e\udd14\n\nShould be able to browse websites properly\n\n### Your prompt \ud83d\udcdd\n\n```yaml\r\n# It really could be any prompt\r\n```\r\n", "pr_html_url": "https://github.com/Significant-Gravitas/AutoGPT/pull/1857", "file_loc": {"base_commit": "ad7cefa10c0647feee85114d58559fcf83ba6743", "files": [{"path": "Dockerfile", "status": "modified", "Loc": {"(None, None, None)": {"mod": [6]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": [], "test": [], "config": ["Dockerfile"], "asset": []}}, {"organization": "fastapi", "repo_name": "fastapi", "base_commit": "04494991884d1eee3e111349cff5d98f37830522", "iss_has_pr": 1, "iss_html_url": "https://github.com/fastapi/fastapi/issues/119", "iss_label": "question\nquestion-migrate", "title": "Enriching the auto-generated OpenAPI-Spec", "body": "**Description**\r\nHi there, new to FastAPI so sorry if this question has been asked elsewhere:\r\n\r\nWhat is the best way to enrich the auto-generated OpenAPI-Spec generated by FastAPI? It currently seems to support only a few things (like changing the title or description), but if I wanted to add tags to group my endpoints or do something more fancy like adding a logo (which is supported by ReDoc via x-logo), it seems to me that I would want to use FastAPI to generate the base specification and then add some scripts that will enrich this spec in a structured way.\r\n\r\nHas anyone encountered this issue before?\r\n\r\n**Additional context**\r\nI already looked into the code a little bit and it seems that the ReDoc that gets shown when serving the API is generated on the fly (as is the OpenAPI schema) => this seems to make custom processing of the specification to make use of cool ReDoc features harder\r\n", "pr_html_url": "https://github.com/fastapi/fastapi/pull/126", "file_loc": {"base_commit": "04494991884d1eee3e111349cff5d98f37830522", "files": [{"path": "mkdocs.yml", "status": "modified", "Loc": {"(None, None, None)": {"add": [69]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "3", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": ["mkdocs.yml"], "test": [], "config": [], "asset": []}}, {"organization": "fastapi", "repo_name": "fastapi", "base_commit": "42a4ed7a1804f631f971d05f3302d54361ebe10e", "iss_has_pr": 1, "iss_html_url": "https://github.com/fastapi/fastapi/issues/3910", "iss_label": "question\nreviewed\nquestion-migrate", "title": "Would be nice to be able to route request using header's Accept field (or generic header's field)", "body": "### First Check\n\n- [X] I added a very descriptive title to this issue.\n- [X] I used the GitHub search to find a similar issue and didn't find it.\n- [X] I searched the FastAPI documentation, with the integrated search.\n- [X] I already searched in Google \"How to X in FastAPI\" and didn't find any information.\n- [X] I already read and followed all the tutorial in the docs and didn't find an answer.\n- [X] I already checked if it is not related to FastAPI but to [Pydantic](https://github.com/samuelcolvin/pydantic).\n- [X] I already checked if it is not related to FastAPI but to [Swagger UI](https://github.com/swagger-api/swagger-ui).\n- [X] I already checked if it is not related to FastAPI but to 
[ReDoc](https://github.com/Redocly/redoc).\n\n### Commit to Help\n\n- [X] I commit to help with one of those options \ud83d\udc46\n\n### Example Code\n\n```python\nN.A.\n```\n\n\n### Description\n\nIn some cases would be nice to specify header's field as routing rules.\r\n\r\nOne important example is to support API versioning based on Header's Accept field\n\n### Wanted Solution\n\nAbility to specify some header's fields in the `.get()`, `.post()` .... decorators\n\n### Wanted Code\n\n```python\nfrom fastapi import FastAPI\r\n\r\napp = FastAPI()\r\n\r\n\r\n@app.get(\"/\", accept=\"application/json;version=1.0\")\r\nasync def root():\r\n return {\"message\": \"Hello World v1.0\"}\r\n\r\n@app.get(\"/\", accept=\"application/json;version=1.1\")\r\nasync def root():\r\n return {\"message\": \"Hello World v1.1\"}\n```\n\n\n### Alternatives\n\nfrom fastapi import FastAPI\r\n\r\napp = FastAPI()\r\n\r\n\r\n@app.get(\"/\", headers={\"accept\": \"application/json;version=1.0\"})\r\nasync def root():\r\n return {\"message\": \"Hello World v1.0\"}\r\n\r\n@app.get(\"/\", headers={\"accept\": \"application/json;version=1.1\"})\r\nasync def root():\r\n return {\"message\": \"Hello World v1.1\"}\r\n\n\n### Operating System\n\nmacOS\n\n### Operating System Details\n\n_No response_\n\n### FastAPI Version\n\npython -c \"import fastapi; print(fastapi.__version__)\"\n\n### Python Version\n\nPython 3.9.7\n\n### Additional Context\n\n_No response_", "pr_html_url": "https://github.com/fastapi/fastapi/pull/4727", "file_loc": {"base_commit": "42a4ed7a1804f631f971d05f3302d54361ebe10e", "files": [{"path": "fastapi/openapi/utils.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [2], "mod": [5]}, "(None, 'get_openapi', 393)": {"add": [448], "mod": [434]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "3", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["fastapi/openapi/utils.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "fastapi", "repo_name": "fastapi", "base_commit": "ea8d7f689efcb0ddf28f4686fa3be90c2154503b", "iss_has_pr": 1, "iss_html_url": "https://github.com/fastapi/fastapi/issues/1628", "iss_label": "feature\nanswered\nreviewed", "title": "pass method return value as positional parameters to Response", "body": "Recently I've noticed that `status_code` parameter of the next statement is useless:\r\n```python3\r\n@router.get('/callback/{provider}', status_code=303,\r\n response_description='Redirect to the application login',\r\n response_class=RedirectResponse)\r\n```\r\nBecause I always have to create `RedirectResponse` objects manually:\r\n```python3\r\n return RedirectResponse(target.include_query_params(error=f'{provider}_{error_reason}'),\r\n status_code=303)\r\n```\r\n\r\nI've tried to play with return values and noticed that return value is always passed to `response_class` instance as a `content` parameter:\r\n```python3\r\n@app.get('/test', status_code=303, response_class=RedirectResponse)\r\nasync def test():\r\n return '/'\r\n```\r\nthis snippet produces the next exception:\r\n```\r\nINFO: 127.0.0.1:43878 - \"GET /test HTTP/1.1\" 500 Internal Server Error\r\nERROR: Exception in ASGI application\r\nTraceback (most recent call last):\r\n File \"/home/dmig/.pyenv/versions/fastapi-sandbox/lib/python3.8/site-packages/uvicorn/protocols/http/httptools_impl.py\", line 385, in run_asgi\r\n result = await app(self.scope, self.receive, self.send)\r\n File 
\"/home/dmig/.pyenv/versions/fastapi-sandbox/lib/python3.8/site-packages/uvicorn/middleware/proxy_headers.py\", line 45, in __call__\r\n return await self.app(scope, receive, send)\r\n File \"/home/dmig/.pyenv/versions/fastapi-sandbox/lib/python3.8/site-packages/fastapi/applications.py\", line 171, in __call__\r\n await super().__call__(scope, receive, send)\r\n File \"/home/dmig/.pyenv/versions/fastapi-sandbox/lib/python3.8/site-packages/starlette/applications.py\", line 102, in __call__\r\n await self.middleware_stack(scope, receive, send)\r\n File \"/home/dmig/.pyenv/versions/fastapi-sandbox/lib/python3.8/site-packages/starlette/middleware/errors.py\", line 181, in __call__\r\n raise exc from None\r\n File \"/home/dmig/.pyenv/versions/fastapi-sandbox/lib/python3.8/site-packages/starlette/middleware/errors.py\", line 159, in __call__\r\n await self.app(scope, receive, _send)\r\n File \"/home/dmig/.pyenv/versions/fastapi-sandbox/lib/python3.8/site-packages/starlette/exceptions.py\", line 82, in __call__\r\n raise exc from None\r\n File \"/home/dmig/.pyenv/versions/fastapi-sandbox/lib/python3.8/site-packages/starlette/exceptions.py\", line 71, in __call__\r\n await self.app(scope, receive, sender)\r\n File \"/home/dmig/.pyenv/versions/fastapi-sandbox/lib/python3.8/site-packages/starlette/routing.py\", line 550, in __call__\r\n await route.handle(scope, receive, send)\r\n File \"/home/dmig/.pyenv/versions/fastapi-sandbox/lib/python3.8/site-packages/starlette/routing.py\", line 227, in handle\r\n await self.app(scope, receive, send)\r\n File \"/home/dmig/.pyenv/versions/fastapi-sandbox/lib/python3.8/site-packages/starlette/routing.py\", line 41, in app\r\n response = await func(request)\r\n File \"/home/dmig/.pyenv/versions/fastapi-sandbox/lib/python3.8/site-packages/fastapi/routing.py\", line 217, in app\r\n response = response_class(\r\nTypeError: __init__() got an unexpected keyword argument 'content'\r\n```\r\n\r\nI didn't dive into the code yet, so this is not a PR but a request. 
But the idea is simple:\r\n```python3\r\n# use runtime types in isinstance(), and wrap a scalar result in a 1-tuple\r\n# (tuple(endpoint_result) would iterate a string character by character):\r\nresponse = response_class(*(endpoint_result if isinstance(endpoint_result, (tuple, list)) else (endpoint_result,)))\r\n```\r\nOr maybe even more complex logic: pass `**endpoint_result` if `endpoint_result` is a `Mapping`, pass `*endpoint_result` if it is an `Iterable` or else pass it as `*(endpoint_result,)`", "pr_html_url": "https://github.com/fastapi/fastapi/pull/3457", "file_loc": {"base_commit": "ea8d7f689efcb0ddf28f4686fa3be90c2154503b", "files": [{"path": "docs/en/docs/advanced/custom-response.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [163, 167, 205]}}}, {"path": "docs_src/custom_response/tutorial006.py", "status": "modified", "Loc": {"(None, 'read_typer', 8)": {"mod": [8]}}}, {"path": "fastapi/applications.py", "status": "modified", "Loc": {"('FastAPI', 'add_api_route', 203)": {"mod": [209]}, "('FastAPI', 'api_route', 256)": {"mod": [261]}, "('FastAPI', 'get', 349)": {"mod": [354]}, "('FastAPI', 'put', 398)": {"mod": [403]}, "('FastAPI', 'post', 447)": {"mod": [452]}, "('FastAPI', 'delete', 496)": {"mod": [501]}, "('FastAPI', 'options', 545)": {"mod": [550]}, "('FastAPI', 'head', 594)": {"mod": [599]}, "('FastAPI', 'patch', 643)": {"mod": [648]}, "('FastAPI', 'trace', 692)": {"mod": [697]}}}, {"path": "fastapi/openapi/utils.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [1]}, "(None, 'get_openapi_path', 168)": {"mod": [221]}}}, {"path": "fastapi/routing.py", "status": "modified", "Loc": {"(None, 'get_request_handler', 154)": {"mod": [157]}, "(None, 'app', 176)": {"mod": [235, 236, 237, 238, 239]}, "('APIRoute', '__init__', 290)": {"mod": [296]}, "('APIRouter', 'add_api_route', 466)": {"mod": [472]}, "('APIRouter', 'api_route', 539)": {"mod": [544]}, "('APIRouter', 'get', 717)": {"mod": [722]}, "('APIRouter', 'put', 767)": {"mod": [772]}, "('APIRouter', 'post', 817)": {"mod": [822]}, "('APIRouter', 'delete', 867)": {"mod": [872]}, "('APIRouter', 'options', 917)": {"mod": [922]}, "('APIRouter', 'head', 967)": {"mod": [972]}, "('APIRouter', 'patch', 1017)": {"mod": [1022]}, "('APIRouter', 'trace', 1067)": {"mod": [1072]}}}, {"path": "tests/test_tutorial/test_custom_response/test_tutorial006.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [7]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["docs_src/custom_response/tutorial006.py", "fastapi/openapi/utils.py", "fastapi/routing.py", "fastapi/applications.py"], "doc": ["docs/en/docs/advanced/custom-response.md"], "test": ["tests/test_tutorial/test_custom_response/test_tutorial006.py"], "config": [], "asset": []}}, {"organization": "fastapi", "repo_name": "fastapi", "base_commit": "4638b2c64e259b90bef6a44748e00e405825a111", "iss_has_pr": 1, "iss_html_url": "https://github.com/fastapi/fastapi/issues/5646", "iss_label": "question\nquestion-migrate", "title": "Bad encoding in query parameters with new TestClient using httpx.Client", "body": "### First Check\n\n- [X] I added a very descriptive title to this issue.\n- [X] I used the GitHub search to find a similar issue and didn't find it.\n- [X] I searched the FastAPI documentation, with the integrated search.\n- [X] I already searched in Google \"How to X in FastAPI\" and didn't find any information.\n- [X] I already read and followed all the tutorial in the docs and didn't find an answer.\n- [X] I already checked if it 
is not related to FastAPI but to [Pydantic](https://github.com/samuelcolvin/pydantic).\n- [X] I already checked if it is not related to FastAPI but to [Swagger UI](https://github.com/swagger-api/swagger-ui).\n- [X] I already checked if it is not related to FastAPI but to [ReDoc](https://github.com/Redocly/redoc).\n\n### Commit to Help\n\n- [X] I commit to help with one of those options \ud83d\udc46\n\n### Example Code\n\n```python\nimport logging\r\n\r\nfrom fastapi import FastAPI\r\n\r\napp = FastAPI()\r\n\r\n\r\n@app.get(\"/example\")\r\nasync def _show_encoding_error(look_for: str):\r\n return {\"found\": look_for}\r\n\r\n\r\nif __name__ == '__main__':\r\n from fastapi.testclient import TestClient\r\n\r\n with TestClient(app) as client:\r\n params = {\"look_for\": \"plain text\"}\r\n resp = client.get(\"/example\", params=params).json()\r\n logging.warning(resp)\r\n assert resp[\"found\"] == \"plain text\"\r\n\r\n params = {\"look_for\": \"Espa\u00f1a\"}\r\n resp = client.get(\"/example\", params=params).json()\r\n logging.warning(resp)\r\n assert resp[\"found\"] == \"Espa\u00f1a\", resp[\"found\"]\n```\n\n\n### Description\n\nAfter the change to `httpx` for the `TestClient` in **v0.87.0**, the query parameters are not properly encoded? when sending requests with it, and strings are corrupted when received in the endpoints.\r\n\r\nThe example app works as expected if called from the SwaggerUI or from another python process using a _plain_ `httpx.Client`, so it appears something broke with the new wrapping for `TestClient` \ud83e\udd72\r\n\r\n```python\r\nimport httpx\r\n\r\nparams = {\"look_for\": \"Espa\u00f1a\"}\r\nwith httpx.Client(base_url=\"http://localhost:8000/\") as client:\r\n resp = client.get(\"/example\", params=params).json()\r\n assert resp[\"found\"] == \"Espa\u00f1a\"\r\n```\n\n### Operating System\n\nmacOS\n\n### Operating System Details\n\nM1, running arm64 arch\n\n### FastAPI Version\n\n0.87.0\n\n### Python Version\n\nPython 3.10.5\n\n### Additional Context\n\nstarlette-0.21\r\nhttpx-0.23.0\r\n\r\nDiscovered when trying to migrate the test suite for a ~big project previously using fastapi-0.85.1 + starlette-0.20.4. 
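A minimal sketch of the suspected mechanism (an assumption, not something confirmed by the report: the "FormalizaciÃ³n"-style symptom matches UTF-8 bytes being re-decoded as Latin-1 somewhere in the client wrapping; all names below are illustrative):

```python
# Sketch: reproduce the observed mojibake pattern outside of FastAPI/httpx.
text = "España"
utf8_bytes = text.encode("utf-8")        # b'Espa\xc3\xb1a'
mojibake = utf8_bytes.decode("latin-1")  # the corrupted form the endpoint receives
assert mojibake == "EspaÃ±a"
# The corruption is reversible, pointing to a decode mismatch, not data loss:
assert mojibake.encode("latin-1").decode("utf-8") == text
```

If this assumption holds, re-encoding a received value as Latin-1 and decoding it as UTF-8 recovers the original string, which is a quick way to confirm the diagnosis inside a failing test.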
\r\n\r\nAll minor syntax changes from old `requests` to new `httpx` were under control, but in one unit test, **a string with an accent** was making a search fail without results (the test sends \"Formalizaci\u00f3n\" but the endpoint receives **\"Formalizaci\u00c3\u00b3n\"** \ud83d\ude31), and it was driving me crazy \ud83d\ude05", "pr_html_url": "https://github.com/fastapi/fastapi/pull/5659", "file_loc": {"base_commit": "4638b2c64e259b90bef6a44748e00e405825a111", "files": [{"path": "pyproject.toml", "status": "modified", "Loc": {"(None, None, None)": {"mod": [42]}}}, {"path": "tests/test_starlette_urlconvertors.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [21, 47], "mod": [1]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": [], "test": ["tests/test_starlette_urlconvertors.py"], "config": ["pyproject.toml"], "asset": []}}, {"organization": "fastapi", "repo_name": "fastapi", "base_commit": "3127bc4e05b72e39d1681735ec1ee49844b7dc88", "iss_has_pr": 1, "iss_html_url": "https://github.com/fastapi/fastapi/issues/1972", "iss_label": "feature\nlang-all\nlang-fr", "title": "French translations", "body": "# Bonjour \ud83c\uddeb\ud83c\uddf7\r\n\r\nWelcome to the issue that coordinates the French translation effort.\r\n\r\n## Purpose \r\n\r\n- Avoiding several people working on the same document at the same time. \r\n\r\nThe first person who declares that he/she is working on a translation gets the responsibility to carry it out. If a PR seems to be stalled, we can discuss a transfer of responsibility here.\r\n\r\n- Enforcing best practices\r\n\r\nBest practices are listed later in this description. You can propose your practice at any time, ideally with a supporting source and an example. \r\n\r\nDefining and sharing best practices will help to avoid common mistakes and will allow faster and easier reviews.\r\n\r\n- Help and build the community\r\n\r\nDo not hesitate to ask any questions regarding the French translation effort here. The stronger the community, the more effective we will be and the more we will enjoy it. \r\n\r\n- Provide a French translation for this awesome library (last but not least)\r\n\r\nIf you are here, you probably like **FastAPI**, and maybe you even speak French. Giving more people the opportunity to get started using the documentation in their native language will encourage adoption. In that spirit, let's contribute to the magic of open source in this way.\r\n\r\n## How to contribute\r\n\r\n### Review \r\n\r\nKeep in mind that the easiest way to participate is to review the PRs. We need to avoid accumulating PRs waiting for review.\r\n\r\n### Translate\r\n\r\nIf you are not familiar with contributing to open source projects, have a look at https://github.com/firstcontributions/first-contributions.\r\n\r\nIn any case, take a look at the documentation section related to the [contribution](https://fastapi.tiangolo.com/contributing/#development-contributing) and more precisely the part about the [documentation](https://fastapi.tiangolo.com/contributing/#docs).\r\n\r\nOnce you have decided to translate a document, make yourself known by leaving a message here (eg. https://github.com/tiangolo/fastapi/issues/1972#issuecomment-702956335).\r\n\r\n### Organize\r\n\r\nIf you wish, your energy is welcome to help with the organization. 
Bringing together motivated people and helping them get the job done is essential. Moreover, we can surely learn a lot from the translation work of other languages that are much more advanced, and we can have a significant impact if we put good processes in place that help the whole community. \r\n\r\n## Good practices\r\n\r\n- technical terms \r\n\r\nTechnical terms do not need to be translated. It is also a question of common sense; in certain conditions English can be preferred because the French version is not in use.\r\nSee: https://github.com/tiangolo/fastapi/issues/1972#issuecomment-715500921\r\n\r\n- punctuation and typography\r\n\r\nFor example, missing whitespace before/after punctuation. You can rely on [this page](https://leconjugueur.lefigaro.fr/ukponctuationtypographie.php) to help you.\r\nsee: https://github.com/tiangolo/fastapi/pull/1973#issuecomment-1186304199\r\n\r\n- structure the PR by commit\r\n\r\nSplitting the PR into commits will ease the review and helps to track changes to the original documentation while the PR is open.\r\n(see: [example](https://github.com/tiangolo/fastapi/pull/2234/commits)).\r\n\r\nThe first commit should only contain the copy of the English version of the document to the French one, with the exact same content (eg. 30f1dd6966ceedd9e8bea2d7aac7bbded9bbc568).\r\n\r\nThe second one is dedicated to the index update (eg. 8ff5f7a6d4510819f95d570ac6a1d3279e2595ed)\r\n\r\nStarting from this point, you can begin the translation. Notice that, thanks to this structure, we can directly compare the two languages (eg. 3729f5b1c2bc858b15266aa4eae21bce07eb04c0).\r\n\r\nAlso, if the English document got updated, we just have to update the first commit and the conflicts will reveal the updated parts of the document \ud83e\ude84 \r\n\r\n## Recommended tools\r\n\r\n- https://www.deepl.com\r\n- https://www.linguee.fr\r\n- https://www.wordreference.com\r\n- https://french.stackexchange.com", "pr_html_url": "https://github.com/fastapi/fastapi/pull/3103", "file_loc": {"base_commit": "3127bc4e05b72e39d1681735ec1ee49844b7dc88", "files": [{"path": "docs/fr/mkdocs.yml", "status": "modified", "Loc": {"(None, None, None)": {"add": [58]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": ["docs/fr/mkdocs.yml"], "test": [], "config": [], "asset": []}}, {"organization": "fastapi", "repo_name": "fastapi", "base_commit": "3ff504f03fb5ba852def5a0a41653c6bed7efb1b", "iss_has_pr": 1, "iss_html_url": "https://github.com/fastapi/fastapi/issues/713", "iss_label": "feature\nconfirmed\nanswered\nreviewed", "title": "Support body in GET and other methods with undefined behavior", "body": "**Describe the bug**\r\nIn the new version of the REST API specification, GET methods can have a body, but FastAPI does not add it to the Swagger spec.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\nCreate a GET method with some body parameters.\r\n\r\n\r\n**Expected behavior**\r\nBody parameters are present in the specification.\r\n\r\n**Environment:**\r\n - OS: Windows\r\n - FastAPI Version: 0.42.0\r\n\r\n\r\n- Python version, get it with: 3.8", "pr_html_url": "https://github.com/fastapi/fastapi/pull/1626", "file_loc": {"base_commit": "3ff504f03fb5ba852def5a0a41653c6bed7efb1b", "files": [{"path": "docs/en/docs/tutorial/body.md", "status": "modified", "Loc": {"(None, None, None)": {"mod": [12, 14]}}}, {"path": "fastapi/dependencies/utils.py", 
"status": "modified", "Loc": {"(None, 'get_typed_annotation', 246)": {"mod": [249]}}}, {"path": "fastapi/openapi/constants.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["fastapi/openapi/constants.py", "fastapi/dependencies/utils.py"], "doc": ["docs/en/docs/tutorial/body.md"], "test": [], "config": [], "asset": []}}, {"organization": "fastapi", "repo_name": "fastapi", "base_commit": "d03678dfbbdee0018252af3f5899716e824d6e87", "iss_has_pr": 1, "iss_html_url": "https://github.com/fastapi/fastapi/issues/110", "iss_label": "question\nanswered\nreviewed\nquestion-migrate", "title": "be able to host statics myself", "body": "**Is your feature request related to a problem? Please describe.**\r\nI am using FastAPI on my private network and can't connect to the internet. but the `/docs` use cdn. I cannot visit that page. \r\n\r\nAlthough I can rewrite /docs router to replace the html template , and add a /staitc route , but I feel a bit ugly. I wish there is some config for it.\r\n\r\n**Describe the solution you'd like**\r\n\r\nadd two variable for FastAPI\r\n```\r\napi = FastAPI(static_prefix=\"/statics\", static_url=\"/data/swagger-dist/\")\r\n```\r\nwith static_prefix=\"/statics\", the /docs html may look like:\r\n```html\r\n<link type=\"text/css\" rel=\"stylesheet\" href=\"/statics/swagger-ui.css\">\r\n```\r\n\r\nwith static_url=\"/data/swagger-dist/\" , FastAPI will add router for static_prefix automatically, like \r\n```python\r\nself.router.get(self.static_prefix, response_description=PlainTextResponse, ...)\r\n```\r\n\r\n**Describe alternatives you've considered**\r\nIf you don't want to handle statics, just ignore ``static_url`` , I can create it myself, or put it behind nginx.\r\n\r\nIf do so, you may need to write in the document how to download all static files .\r\n\r\n**Additional context**\r\n\r\n", "pr_html_url": "https://github.com/fastapi/fastapi/pull/112", "file_loc": {"base_commit": "d03678dfbbdee0018252af3f5899716e824d6e87", "files": [{"path": "fastapi/openapi/docs.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [43]}, "(None, 'get_swagger_ui_html', 4)": {"mod": [4, 5, 6, 10, 11, 12, 13, 14, 15, 16, 21, 25, 26, 27, 28, 34, 35, 37, 42]}, "(None, 'get_redoc_html', 45)": {"mod": [45, 46, 47, 49, 50, 51, 52, 53, 54, 55, 60, 66, 69, 71, 72, 73, 74, 75, 76, 77, 78, 80]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["fastapi/openapi/docs.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "fastapi", "repo_name": "fastapi", "base_commit": "4fdcdf341c106d345e6d0c349091cfb208f9c792", "iss_has_pr": 1, "iss_html_url": "https://github.com/fastapi/fastapi/issues/2237", "iss_label": "question\nquestion-migrate", "title": "Add __all__ to __init__.py files to silence mypy(strict) errors", "body": "Strict mypy mode gives such errors:\r\n```\r\nbase/api/users/controllers.py:4: error: Module 'fastapi' has no attribute 'Depends'\r\nbase/api/users/controllers.py:4: error: Module 'fastapi' has no attribute 'HTTPException'\r\n```\r\n\r\non such import statement:\r\n```\r\nfrom fastapi import Depends, HTTPException\r\n```\r\n\r\nTried using \r\n\r\n```\r\nfrom 
fastapi import Depends as Depends\r\nfrom fastapi import HTTPException as HTTPException\r\n```\r\nas per recommendations in https://github.com/tiangolo/typer/issues/112 discussion. But the errors remain.\r\n\r\nIt seems that adding __all__ to the __init__.py files for the stuff that's reexported is a way to go (as per https://github.com/python/mypy/issues/7042 discussion).\r\n\r\nThanks for considering this!\r\n\r\n\r\n\r\n", "pr_html_url": "https://github.com/fastapi/fastapi/pull/2547", "file_loc": {"base_commit": "4fdcdf341c106d345e6d0c349091cfb208f9c792", "files": [{"path": "docs_src/openapi_callbacks/tutorial001.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [29]}}}, {"path": "fastapi/__init__.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [5, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25]}}}, {"path": "fastapi/applications.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [19], "mod": [1, 27]}, "('FastAPI', '__init__', 31)": {"mod": [47, 50, 52, 53, 58, 59, 63, 64, 77, 85, 86, 109]}, "('FastAPI', None, 30)": {"mod": [119, 306, 307]}, "('FastAPI', 'add_api_route', 194)": {"mod": [197]}, "('FastAPI', 'api_route', 247)": {"mod": [271, 272]}, "('FastAPI', 'add_api_websocket_route', 301)": {"mod": [302]}, "('FastAPI', 'include_router', 313)": {"mod": [321, 324]}, "('FastAPI', 'get', 338)": {"mod": [361, 362]}, "('FastAPI', 'put', 387)": {"mod": [410, 411]}, "('FastAPI', 'post', 436)": {"mod": [459, 460]}, "('FastAPI', 'delete', 485)": {"mod": [508, 509]}, "('FastAPI', 'options', 534)": {"mod": [557, 558]}, "('FastAPI', 'head', 583)": {"mod": [606, 607]}, "('FastAPI', 'patch', 632)": {"mod": [655, 656]}, "('FastAPI', 'trace', 681)": {"mod": [704, 705]}}}, {"path": "fastapi/background.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1]}}}, {"path": "fastapi/concurrency.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [3, 4, 5, 22, 25, 30, 33, 38]}, "(None, '_fake_asynccontextmanager', 14)": {"mod": [14]}}}, {"path": "fastapi/datastructures.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [2]}, "('UploadFile', None, 6)": {"mod": [8]}}}, {"path": "fastapi/dependencies/models.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1]}, "('Dependant', '__init__', 16)": {"mod": [27]}}}, {"path": "fastapi/dependencies/utils.py", "status": "modified", "Loc": {"(None, 'check_file_field', 88)": {"mod": [93, 98]}, "(None, 'get_sub_dependant', 133)": {"mod": [136]}, "(None, None, None)": {"mod": [166]}, "(None, 'get_typed_signature', 243)": {"mod": [243]}, "(None, 'get_typed_annotation', 259)": {"mod": [262, 263, 264]}, "(None, 'get_dependant', 281)": {"mod": [284]}, "(None, 'is_coroutine_callable', 426)": {"mod": [426]}, "(None, 'is_async_gen_callable', 435)": {"mod": [435]}, "(None, 'is_gen_callable', 442)": {"mod": [442]}, "(None, 'solve_generator', 449)": {"mod": [450]}, "(None, 'solve_dependencies', 467)": {"mod": [475, 481, 488, 489, 490, 495, 497]}}}, {"path": "fastapi/encoders.py", "status": "modified", "Loc": {"(None, 'generate_encoders_by_class_tuples', 14)": {"mod": [15, 16, 17]}, "(None, 'jsonable_encoder', 26)": {"mod": [34, 46, 47]}}}, {"path": "fastapi/middleware/__init__.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1]}}}, {"path": "fastapi/middleware/cors.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1]}}}, {"path": "fastapi/middleware/gzip.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1]}}}, 
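For the `__all__` question in the record above, the pattern that mypy --strict accepts is explicit re-export. A generic sketch (the package and module names here are hypothetical, not FastAPI's actual layout):

```python
# mypkg/__init__.py -- re-export names so `from mypkg import Depends`
# type-checks under mypy --strict; both the `X as X` form and __all__
# mark the names as intentionally re-exported rather than incidental.
from .core import Depends as Depends
from .core import HTTPException as HTTPException

__all__ = ["Depends", "HTTPException"]
```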
{"path": "fastapi/middleware/httpsredirect.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1]}}}, {"path": "fastapi/middleware/trustedhost.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1]}}}, {"path": "fastapi/middleware/wsgi.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1]}}}, {"path": "fastapi/openapi/docs.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [2]}, "(None, 'get_swagger_ui_html', 8)": {"mod": [16]}}}, {"path": "fastapi/openapi/models.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [8]}, "('EmailStr', None, 14)": {"mod": [16]}}}, {"path": "fastapi/openapi/utils.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [16]}, "(None, 'get_openapi_security_definitions', 67)": {"mod": [67]}, "(None, 'get_openapi_operation_parameters', 82)": {"mod": [91, 97]}, "(None, 'get_openapi_operation_request_body', 108)": {"mod": [112, 116, 118]}, "(None, 'get_openapi_operation_metadata', 143)": {"mod": [143]}, "(None, 'get_openapi_path', 156)": {"mod": [157, 158, 164, 172, 199, 200, 201, 202]}, "(None, 'get_openapi', 326)": {"mod": [335, 342, 343, 345, 346, 347, 349, 371]}}}, {"path": "fastapi/param_functions.py", "status": "modified", "Loc": {"(None, 'Depends', 241)": {"mod": [242]}, "(None, 'Security', 247)": {"mod": [248]}}}, {"path": "fastapi/params.py", "status": "modified", "Loc": {"('Depends', '__init__', 317)": {"mod": [318]}, "('Security', '__init__', 330)": {"mod": [332]}}}, {"path": "fastapi/responses.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [3, 4, 5, 6, 7, 8, 9, 10]}}}, {"path": "fastapi/routing.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [18], "mod": [5, 33]}, "('APIRouter', 'include_router', 585)": {"add": [665], "mod": [594, 595, 669]}, "(None, 'get_request_handler', 140)": {"mod": [153]}, "(None, 'app', 162)": {"mod": [210]}, "(None, 'get_websocket_app', 220)": {"mod": [222]}, "('APIWebSocketRoute', '__init__', 240)": {"mod": [243]}, "('APIRoute', '__init__', 262)": {"mod": [265, 290, 301]}, "('APIRoute', None, 261)": {"mod": [378]}, "('APIRouter', '__init__', 396)": {"mod": [404, 410, 411, 412, 416, 418, 419, 420]}, "('APIRouter', 'add_api_route', 438)": {"mod": [441, 466]}, "('APIRouter', 'api_route', 511)": {"mod": [535, 536, 537]}, "('APIRouter', 'add_api_websocket_route', 567)": {"mod": [568]}, "('APIRouter', None, 395)": {"mod": [578, 579]}, "('APIRouter', 'get', 686)": {"mod": [709, 710]}, "('APIRouter', 'put', 736)": {"mod": [759, 760]}, "('APIRouter', 'post', 786)": {"mod": [809, 810]}, "('APIRouter', 'delete', 836)": {"mod": [859, 860]}, "('APIRouter', 'options', 886)": {"mod": [909, 910]}, "('APIRouter', 'head', 936)": {"mod": [959, 960]}, "('APIRouter', 'patch', 986)": {"mod": [1009, 1010]}, "('APIRouter', 'trace', 1036)": {"mod": [1059, 1060]}}}, {"path": "fastapi/security/__init__.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17]}}}, {"path": "fastapi/security/oauth2.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1]}, "('OAuth2', '__init__', 116)": {"mod": [119]}, "('OAuth2PasswordBearer', '__init__', 140)": {"mod": [144]}, "('OAuth2AuthorizationCodeBearer', '__init__', 168)": {"mod": [174]}}}, {"path": "fastapi/staticfiles.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1]}}}, {"path": "fastapi/templating.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1]}}}, {"path": "fastapi/testclient.py", 
"status": "modified", "Loc": {"(None, None, None)": {"mod": [1]}}}, {"path": "fastapi/utils.py", "status": "modified", "Loc": {"(None, 'get_model_definitions', 17)": {"mod": [22, 24, 26]}, "(None, 'create_cloned_field', 73)": {"mod": [83]}, "(None, 'deep_dict_update', 130)": {"mod": [130]}}}, {"path": "fastapi/websockets.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1, 2]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["fastapi/websockets.py", "fastapi/middleware/httpsredirect.py", "fastapi/middleware/cors.py", "fastapi/routing.py", "fastapi/templating.py", "fastapi/middleware/__init__.py", "fastapi/middleware/wsgi.py", "fastapi/params.py", "fastapi/middleware/gzip.py", "fastapi/openapi/models.py", "fastapi/security/__init__.py", "fastapi/responses.py", "fastapi/middleware/trustedhost.py", "fastapi/datastructures.py", "fastapi/openapi/utils.py", "fastapi/dependencies/utils.py", "fastapi/security/oauth2.py", "fastapi/dependencies/models.py", "fastapi/openapi/docs.py", "fastapi/applications.py", "fastapi/__init__.py", "fastapi/staticfiles.py", "docs_src/openapi_callbacks/tutorial001.py", "fastapi/encoders.py", "fastapi/param_functions.py", "fastapi/utils.py", "fastapi/concurrency.py", "fastapi/background.py"], "doc": [], "test": ["fastapi/testclient.py"], "config": [], "asset": []}}, {"organization": "fastapi", "repo_name": "fastapi", "base_commit": "c09e950bd2efb81f82931469bee6856c72e54357", "iss_has_pr": 1, "iss_html_url": "https://github.com/fastapi/fastapi/issues/2996", "iss_label": "question\nquestion-migrate", "title": "Please support latest SQLAlchemy or pin it", "body": "Hi @tiangolo, fastapi tests are currently failing and therefore causing pydantic tests to fail.\r\n\r\nSee https://github.com/samuelcolvin/pydantic/pull/2584, fastapi is not compatible with the v1.4 of SQLAlchemy which was released earlier in March, I've had to pin to `SQLAlchemy==1.3.23`\r\n\r\nPlease could we fix fastapi (the incompatibility looks like it might be trivial) or pin the dependency?\r\n\r\nOnce master of fastapi is fixed, we'll need to remember to also remove the hack from pydantic.", "pr_html_url": "https://github.com/fastapi/fastapi/pull/3001", "file_loc": {"base_commit": "c09e950bd2efb81f82931469bee6856c72e54357", "files": [{"path": "pyproject.toml", "status": "modified", "Loc": {"(None, None, None)": {"mod": [56]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "3", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": [], "test": [], "config": ["pyproject.toml"], "asset": []}}, {"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "d6bd71db7f3200c2b1ef46123c07374848aed86a", "iss_has_pr": 1, "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/5533", "iss_label": "", "title": "Insecure argument passed to cURL", "body": "## Expected Behavior\r\na secure download of the Miniconda install script\r\n\r\n## Current Behavior\r\ninstall script is cURL'd with `-k` resulting in an insecure transfer and possible arbitrary code execution on my machine.\r\n\r\n## Steps to Reproduce\r\n1. run any of these without miniconda installed:\r\n- `wsl.sh`\r\n- `start_windows.bat`\r\n- `start_macos.sh`\r\n- `start_linux.sh`\r\n\r\n2. 
pwned\r\n\r\n## Possible Solution\r\ndon't curl executable things insecurely\r\n\r\n## Context\r\nBelow is a copy of the text I had put in a security advisory on both this repository and ParisNeo/lollms-webui as of early December. The response from the maintainers has been radio silence, so I am publishing the text here so that people can know/protect themselves.\r\n\r\n# Security Advisory\r\n\r\n### Summary\r\nAn unsafe command line argument being passed to cURL allows the Miniconda installer download to be MITM'd.\r\n\r\nThis downloaded script is subsequently run, potentially resulting in arbitrary code execution on user machines. \r\n\r\n### Details\r\nHere's an example from `start_linux.sh`\r\n```bash\r\n mkdir -p \"$INSTALL_DIR\"\r\n curl -Lk \"$MINICONDA_DOWNLOAD_URL\" > \"$INSTALL_DIR/miniconda_installer.sh\"\r\n```\r\nIt passes the `-k` argument to cURL.\r\n\r\ncURL man page documentation for `-k`:\r\n```\r\n -k, --insecure\r\n (TLS SFTP SCP) By default, every secure connection curl makes is\r\n verified to be secure before the transfer takes place. This\r\n option makes curl skip the verification step and proceed without\r\n checking.\r\n\r\n When this option is not used for protocols using TLS, curl\r\n verifies the server's TLS certificate before it continues: that\r\n the certificate contains the right name which matches the host\r\n name used in the URL and that the certificate has been signed by\r\n a CA certificate present in the cert store. See this online\r\n resource for further details:\r\n https://curl.se/docs/sslcerts.html\r\n\r\n For SFTP and SCP, this option makes curl skip the known_hosts\r\n verification. known_hosts is a file normally stored in the\r\n user's home directory in the \".ssh\" subdirectory, which contains\r\n host names and their public keys.\r\n\r\n WARNING: using this option makes the transfer insecure.\r\n```\r\nThe operative line is at the end:\r\n\r\n**` WARNING: using this option makes the transfer insecure.`**\r\n\r\n### Impact\r\nAll users of the following installer scripts are affected:\r\n\r\n- `wsl.sh`\r\n- `start_windows.bat`\r\n- `start_macos.sh`\r\n- `start_linux.sh`\r\n", "pr_html_url": "https://github.com/oobabooga/text-generation-webui/pull/5535", "file_loc": {"base_commit": "d6bd71db7f3200c2b1ef46123c07374848aed86a", "files": [{"path": "start_linux.sh", "status": "modified", "Loc": {"(None, None, None)": {"mod": [34]}}}, {"path": "start_macos.sh", "status": "modified", "Loc": {"(None, None, None)": {"mod": [34]}}}, {"path": "start_windows.bat", "status": "modified", "Loc": {"(None, None, None)": {"mod": [40]}}}, {"path": "wsl.sh", "status": "modified", "Loc": {"(None, None, None)": {"mod": [61]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["start_windows.bat", "start_linux.sh", "wsl.sh", "start_macos.sh"]}}, {"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "8f6405d2fa1c704edbcd2f4371ac21c3491d162b", "iss_has_pr": 1, "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/4015", "iss_label": "enhancement\nstale", "title": "Adding flash attention to one click installer", "body": "**Description**\r\n\r\nAdding flash attention to one click installer, for usage with exllamaV2 \r\n\r\n**Additional Context**\r\n\r\nMe and others not so tech savvy people are having issues 
installing it manually on windows", "pr_html_url": "https://github.com/oobabooga/text-generation-webui/pull/4235", "file_loc": {"base_commit": "8f6405d2fa1c704edbcd2f4371ac21c3491d162b", "files": [{"path": "README.md", "status": "modified", "Loc": {"(None, None, None)": {"mod": [61, 81, 89, 93, 101, 143, 145, 147, 148]}}}, {"path": "docker/Dockerfile", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1, 24]}}}, {"path": "modules/exllamav2.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [1, 26]}}}, {"path": "one_click.py", "status": "modified", "Loc": {"(None, 'install_webui', 146)": {"add": [173], "mod": [175, 192]}, "(None, 'update_requirements', 198)": {"mod": [239, 241, 272]}}}, {"path": "requirements.txt", "status": "modified", "Loc": {"(None, None, None)": {"mod": [40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84]}}}, {"path": "requirements_noavx2.txt", "status": "modified", "Loc": {"(None, None, None)": {"mod": [40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84]}}}, {"path": "start_linux.sh", "status": "modified", "Loc": {"(None, None, None)": {"mod": [46]}}}, {"path": "start_macos.sh", "status": "modified", "Loc": {"(None, None, None)": {"mod": [46]}}}, {"path": "start_windows.bat", "status": "modified", "Loc": {"(None, None, None)": {"mod": [53]}}}, {"path": "wsl.sh", "status": "modified", "Loc": {"(None, None, None)": {"mod": [73]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["modules/exllamav2.py", "one_click.py"], "doc": ["docker/Dockerfile", "README.md"], "test": [], "config": ["requirements.txt", "requirements_noavx2.txt"], "asset": ["wsl.sh", "start_windows.bat", "start_macos.sh", "start_linux.sh"]}}, {"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "c8a59d79befd208bc341491d79eb4a2f8d25bb74", "iss_has_pr": 1, "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/3043", "iss_label": "bug", "title": "always \"llama_tokenize: too many tokens\" (even 1 char input) in latest commit(b6643e5039ae210dbc54ae6aa0f4dcf90b2269a8)", "body": "### Describe the bug\n\nLoad model(vicuna-chinese) OK.\r\nChat error(console): llama_tokenize: too many tokens\r\nReduce input to 1 char: llama_tokenize: too many tokens\r\nCaculate Token: llama_tokenize: too many tokens\n\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Reproduction\n\nInstall.(last commit)\r\nRun server.\r\nLoad model(vicuna-chinese) OK.\r\nChat Is Good.\r\n...\r\ngit pull -> latest(b6643e5039ae210dbc54ae6aa0f4dcf90b2269a8)\r\nLoad model(vicuna-chinese) OK.\r\nChat error(console): llama_tokenize: too many tokens\r\nReduce input to 1 char: llama_tokenize: too many tokens\r\nCaculate Token: llama_tokenize: too many tokens\n\n### Screenshot\n\n_No response_\n\n### Logs\n\n```shell\n# Server Log\r\n2023-07-08 03:06:08 INFO:Loaded the model in 1.56 seconds.\r\n\r\nllama_tokenize: too many tokens\r\nllama_tokenize: too many tokens\r\nllama_tokenize: too many tokens\r\nOutput generated in 0.19 seconds (0.00 tokens/s, 0 tokens, context 58, seed 2000)\r\nllama_tokenize: too many 
tokens\r\nllama_tokenize: too many tokens\r\nllama_tokenize: too many tokens\r\nOutput generated in 0.19 seconds (0.00 tokens/s, 0 tokens, context 49, seed 2000)\r\nllama_tokenize: too many tokens\r\nllama_tokenize: too many tokens\r\nllama_tokenize: too many tokens\r\nOutput generated in 0.20 seconds (0.00 tokens/s, 0 tokens, context 47, seed 2000)\r\nllama_tokenize: too many tokens\r\nllama_tokenize: too many tokens\r\nllama_tokenize: too many tokens\r\nOutput generated in 0.20 seconds (0.00 tokens/s, 0 tokens, context 2, seed 2000)\r\nllama_tokenize: too many tokens\r\nllama_tokenize: too many tokens\r\nllama_tokenize: too many tokens\r\nOutput generated in 0.20 seconds (0.00 tokens/s, 0 tokens, context 2, seed 2000)\r\nllama_tokenize: too many tokens\r\nllama_tokenize: too many tokens\r\nllama_tokenize: too many tokens\r\nllama_tokenize: too many tokens\r\nllama_tokenize: too many tokens\r\nllama_tokenize: too many tokens\r\nllama_tokenize: too many tokens\r\nllama_tokenize: too many tokens\r\nllama_tokenize: too many tokens\r\n\r\n\r\n# Git log\r\ncommit b6643e5039ae210dbc54ae6aa0f4dcf90b2269a8 (HEAD -> main, origin/main, origin/HEAD)\r\nAuthor: oobabooga <112222186+oobabooga@users.noreply.github.com>\r\nDate: Fri Jul 7 09:11:30 2023 -0700\r\n\r\n Add decode functions to llama.cpp/exllama\r\n\r\ncommit 1ba2e88551f968cd74478fd02218a62869336ac5\r\nAuthor: oobabooga <112222186+oobabooga@users.noreply.github.com>\r\nDate: Fri Jul 7 09:09:23 2023 -0700\r\n\r\n Add truncation to exllama\n```\n\n\n### System Info\n\n```shell\nMac m2 / macOS 13.4.1\n```\n", "pr_html_url": "https://github.com/oobabooga/text-generation-webui/pull/3400", "file_loc": {"base_commit": "c8a59d79befd208bc341491d79eb4a2f8d25bb74", "files": [{"path": "modules/llamacpp_model.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [8]}, "('LlamaCppModel', 'generate', 76)": {"add": [77]}}}, {"path": "modules/text_generation.py", "status": "modified", "Loc": {"(None, 'encode', 38)": {"mod": [42]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["modules/llamacpp_model.py", "modules/text_generation.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "d87ca8f2af2458e8b57b1ec9915c72a4ca5ca19f", "iss_has_pr": 1, "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/1578", "iss_label": "bug", "title": "New precise prompts break eachadea_ggml-vicuna-13b-1.1", "body": "### Describe the bug\r\n\r\nThese changes seem to break a \"default\" install of eachadea/ggml-vicuna-13b-1.1-q4, acquired via the UI, both _0 and _2. I now receive blank responses in cai-chat, chat, and instruct modes using both the vicuna and vicuna v0 templates. \r\n\r\nConfirmed it was caused in commit a777c05 by testing commit a840942 which resolves the issue.\r\n\r\n### Is there an existing issue for this?\r\n\r\n- [X] I have searched the existing issues\r\n\r\n### Reproduction\r\n\r\nPull commit a777c05 or later. 
Start the application with \r\n\r\n`python server.py --chat --model eachadea_ggml-vicuna-13b-1.1 --auto-devices --gpu-memory 8`\r\n\r\nUse the chat window.\r\n\r\n### Screenshot\r\n\r\n![image](https://user-images.githubusercontent.com/4226491/234644445-c23f2d33-5d13-42c4-8ad1-b0c2a0005b51.png)\r\n\r\n\r\n### Logs\r\n\r\n```shell\r\nNone.\r\n```\r\n\r\n\r\n### System Info\r\n\r\n```shell\r\nWindows 11 using WSL Ubuntu 22.04\r\nRyzen 3700x\r\n32gb ram\r\nnvidia 2060 Super 8gb\r\n```\r\n", "pr_html_url": "https://github.com/oobabooga/text-generation-webui/pull/1579", "file_loc": {"base_commit": "d87ca8f2af2458e8b57b1ec9915c72a4ca5ca19f", "files": [{"path": "characters/instruction-following/Vicuna-v0.yaml", "status": "modified", "Loc": {"(None, None, None)": {"mod": [4]}}}, {"path": "characters/instruction-following/Vicuna.yaml", "status": "modified", "Loc": {"(None, None, None)": {"mod": [4]}}}, {"path": "models/config.yaml", "status": "modified", "Loc": {"(None, None, None)": {"add": [29]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": [], "test": [], "config": ["models/config.yaml", "characters/instruction-following/Vicuna.yaml", "characters/instruction-following/Vicuna-v0.yaml"], "asset": []}}, {"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "1d7e893fa199d6e0f868c383782aba9dada7d911", "iss_has_pr": 1, "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/177", "iss_label": "enhancement", "title": "GPTQ quantization(3 or 4 bit quantization) support for LLaMa", "body": "[GPTQ](https://arxiv.org/abs/2210.17323) is currently the SOTA one shot quantization method for LLMs.\r\nGPTQ supports amazingly low 3-bit and 4-bit weight quantization. 
And it can be applied to LLaMa.\r\nI've actually confirmed that this works well in LLaMa 7b.\r\nI haven't tested the memory usage (n-bit CUDA kernel), but I think it should work.\r\n\r\n| Model([LLaMa-7B](https://arxiv.org/abs/2302.13971)) | Bits | group-size | Wikitext2 | PTB | C4 |\r\n| --------- | ---- | ---------- | --------- | --------- | ------- |\r\n| FP16 | 16 | - | 5.67 | 8.79 | 7.05 | \r\n| RTN | 4 | - | 6.28 | 9.68 | 7.70 | \r\n| [GPTQ](https://arxiv.org/abs/2210.17323) | 4 | 64 | **6.16** | **9.66** | **7.52** | \r\n| RTN | 3 | - | 25.66 | 61.25 | 28.19 | \r\n| [GPTQ](https://arxiv.org/abs/2210.17323) | 3 | 64 | **12.24** | **16.77** | **9.55** | \r\n\r\ncode: https://github.com/qwopqwop200/GPTQ-for-LLaMa", "pr_html_url": "https://github.com/oobabooga/text-generation-webui/pull/219", "file_loc": {"base_commit": "1d7e893fa199d6e0f868c383782aba9dada7d911", "files": [{"path": "modules/models.py", "status": "modified", "Loc": {"(None, 'load_model', 38)": {"mod": [113]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["modules/models.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "6627f7feb9afe106df89e0b290adde21b1f8c914", "iss_has_pr": 1, "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/2390", "iss_label": "bug\nstale", "title": "Cannot download a Hugging Face model due to authorization.", "body": "### Describe the bug\n\nI tried to download a new model which is visible on Hugging Face: bigcode/starcoder\r\n\r\nBut it failed with \"Unauthorized\". I have an access token from Hugging Face; how can I add it to download-model.py? \r\n\r\n\r\nFile \u201c/home/ahnlab/GPT/text-generation-webui/download-model.py\u201d, line 102, in get_download_links_from_huggingface r.raise_for_status() File \u201c/home/ahnlab/miniconda3/envs/vicuna/lib/python3.11/site-packages/requests/models.py\u201d, line 1021, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: [https://huggingface.co/api/models/ bigcode/starcoder/tree/main](https://huggingface.co/api/models/%20bigcode/starcoder/tree/main)\n\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Reproduction\n\npython download-model.py bigcode/starcoder\n\n### Screenshot\n\n_No response_\n\n### Logs\n\n```shell\nFile \u201c/home/ahnlab/GPT/text-generation-webui/download-model.py\u201d, line 102, in get_download_links_from_huggingface r.raise_for_status() File \u201c/home/ahnlab/miniconda3/envs/vicuna/lib/python3.11/site-packages/requests/models.py\u201d, line 1021, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: [https://huggingface.co/api/models/ bigcode/starcoder/tree/main](https://huggingface.co/api/models/%20bigcode/starcoder/tree/main)\n```\n\n\n### System Info\n\n```shell\nUbuntu\n```\n", "pr_html_url": "https://github.com/oobabooga/text-generation-webui/pull/2408", "file_loc": {"base_commit": "6627f7feb9afe106df89e0b290adde21b1f8c914", "files": [{"path": "README.md", "status": "modified", "Loc": {"(None, None, None)": {"mod": [159]}}}, {"path": "download-model.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [14, 258], "mod": [261, 
267, 270, 274, 277]}, "(None, 'sanitize_model_and_branch_names', 73)": {"mod": [73, 74, 75, 76, 77, 78, 79, 80, 81, 83, 86, 87, 88, 89, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 105, 106, 107, 109, 110, 111, 112, 114, 115, 116, 117, 118, 119, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 143, 144, 145, 147, 148, 149, 150, 151, 153, 156, 157, 158, 160, 161, 162, 163, 164, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 183, 184, 185, 186, 187, 188, 189, 190, 193, 194, 197, 198, 199, 200, 201, 202, 203, 204, 205]}, "(None, 'download_model_files', 197)": {"mod": [207, 208, 209, 211, 212, 213, 216, 217, 218, 219, 220, 222, 223, 224, 225, 227, 228, 229, 230, 231]}, "(None, 'check_model_files', 216)": {"mod": [233, 234, 236, 237, 238, 239]}}}, {"path": "server.py", "status": "modified", "Loc": {"(None, 'download_model_wrapper', 185)": {"mod": [187]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["server.py", "download-model.py"], "doc": ["README.md"], "test": [], "config": [], "asset": []}}, {"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "02db4b0d06e9573de9e399b49006f882b996571b", "iss_has_pr": 1, "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/6395", "iss_label": "bug", "title": "Backslashes are written doubled in monospaced blocks", "body": "### Describe the bug\n\nWithout monospaced markdown there is a single backslash, but if it is written with either a single backtick or triple backticks on multiline blocks, it gets doubled.\r\n\r\nIf I write:\r\n```\r\n'\\'\r\n```\r\nor\r\n```\r\n'''\r\n\\\r\n'''\r\n```\r\n(I replaced the backticks in the example with apostrophes because I couldn't figure out how to get them escaped correctly here)\r\n\r\nOn the web UI it is written doubled, and it is not just visual: if you click the copy button it gets copied doubled. Like this:\r\n\r\n`\\\\`\r\nand\r\n```\r\n\\\\\r\n```\r\n\r\nBut looking at the console it is not internally seen as doubled; so I don't think it's a tokenizer issue. 
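A plausible mechanism for the doubling described above, sketched in Python (this is the general shape of such a bug, not the webui's actual convert_to_markdown logic): if a markdown pipeline escapes backslashes globally before rendering, text inside code spans gets doubled too, so the usual fix keeps fenced and inline code out of the escaping pass.

```python
import re

def escape_backslashes_outside_code(text: str) -> str:
    # Split on fenced blocks and inline code spans; the capture group
    # keeps the code segments in the result so we can reassemble.
    parts = re.split(r"(```.*?```|`[^`]*`)", text, flags=re.DOTALL)
    return "".join(
        part if part.startswith("`")  # leave code spans untouched
        else part.replace("\\", "\\\\")
        for part in parts
    )

sample = "inline `\\` stays single, but this \\ outside gets doubled"
print(escape_backslashes_outside_code(sample))
```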
\n\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Reproduction\n\nAlready described in the description.\n\n### Screenshot\n\n_No response_\n\n### Logs\n\n```shell\n(if any), well, there are no errors, so it shouldn't be a required field; that red asterisk is annoying.\n```\n\n\n### System Info\n\n```shell\nSince it's an issue with the HTML itself, I suspect the system specs aren't relevant; let me know if it somehow makes any difference.\n```\n", "pr_html_url": "https://github.com/oobabooga/text-generation-webui/pull/6648", "file_loc": {"base_commit": "02db4b0d06e9573de9e399b49006f882b996571b", "files": [{"path": "modules/html_generator.py", "status": "modified", "Loc": {"(None, 'convert_to_markdown', 149)": {"add": [241]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["modules/html_generator.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "hacksider", "repo_name": "Deep-Live-Cam", "base_commit": "ed7a21687c4de9f32659c30a17571ce568c30b47", "iss_has_pr": 1, "iss_html_url": "https://github.com/hacksider/Deep-Live-Cam/issues/845", "iss_label": "", "title": "Indentation Error", "body": "Just updated after the changes to the face enhancer and I'm getting this error whenever I try to turn it on.\r\nI'm not smart enough to know what this is but I'm hoping this helps someone figure it out!\r\n```IndentationError: unindent does not match any outer indentation level\r\nException in Tkinter callback\r\nTraceback (most recent call last):\r\n File \"C:\\Python310\\lib\\tkinter\\__init__.py\", line 1921, in __call__\r\n return self.func(*args)\r\n File \"C:\\Python310\\lib\\site-packages\\customtkinter\\windows\\widgets\\ctk_switch.py\", line 413, in toggle\r\n self._command()\r\n File \"D:\\Software\\Deep-Live-Cam\\modules\\ui.py\", line 205, in <lambda>\r\n update_tumbler(\"face_enhancer\", enhancer_value.get()),\r\n File \"D:\\Software\\Deep-Live-Cam\\modules\\ui.py\", line 561, in update_tumbler\r\n frame_processors = get_frame_processors_modules(\r\n File \"D:\\Software\\Deep-Live-Cam\\modules\\processors\\frame\\core.py\", line 40, in get_frame_processors_modules\r\n set_frame_processors_modules_from_ui(frame_processors)\r\n File \"D:\\Software\\Deep-Live-Cam\\modules\\processors\\frame\\core.py\", line 47, in set_frame_processors_modules_from_ui\r\n frame_processor_module = load_frame_processor_module(frame_processor)\r\n File \"D:\\Software\\Deep-Live-Cam\\modules\\processors\\frame\\core.py\", line 23, in load_frame_processor_module\r\n frame_processor_module = importlib.import_module(f'modules.processors.frame.{frame_processor}')\r\n File \"C:\\Python310\\lib\\importlib\\__init__.py\", line 126, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"<frozen importlib._bootstrap>\", line 1050, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 1027, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 1006, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 688, in _load_unlocked\r\n File \"<frozen importlib._bootstrap_external>\", line 879, in exec_module\r\n File \"<frozen importlib._bootstrap_external>\", line 1017, in get_code\r\n File \"<frozen importlib._bootstrap_external>\", line 947, in source_to_code\r\n File \"<frozen importlib._bootstrap>\", line 241, in 
_call_with_frames_removed\r\n File \"D:\\Software\\Deep-Live-Cam\\modules\\processors\\frame\\face_enhancer.py\", line 61\r\n FACE_ENHANCER = gfpgan.GFPGANer(model_path=model_path, upscale=1, device=mps_device) # type: ignore[attr-defined]\r\n```", "pr_html_url": "https://github.com/hacksider/Deep-Live-Cam/pull/846", "file_loc": {"base_commit": "ed7a21687c4de9f32659c30a17571ce568c30b47", "files": [{"path": "modules/processors/frame/face_enhancer.py", "status": "modified", "Loc": {"(None, 'get_face_enhancer', 51)": {"mod": [57, 58, 59, 60, 61, 62, 63]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["modules/processors/frame/face_enhancer.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "hacksider", "repo_name": "Deep-Live-Cam", "base_commit": "87081e78d0175c79bab4f1b50d41a9741920e1c4", "iss_has_pr": 1, "iss_html_url": "https://github.com/hacksider/Deep-Live-Cam/issues/916", "iss_label": "", "title": "Issues with pip install -r requirements.txt", "body": "INFO: pip is looking at multiple versions of opencv-python to determine which version is compatible with other requirements. This could take a while.\nERROR: Cannot install -r requirements.txt (line 14) and torch==2.5.1 because these package versions have conflicting dependencies.\n\nThe conflict is caused by:\nThe user requested torch==2.5.1\ntorchvision 0.20.1+cu121 depends on torch==2.5.1+cu121\n\nTo fix this you could try to:\n\nloosen the range of package versions you've specified\nremove package versions to allow pip attempt to solve the dependency conflict\nERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/user_guide/#fixing-conflicting-dependencies\n\nPLEASE THIS IS MY OUTPUT, PLEASE HELP", "pr_html_url": "https://github.com/hacksider/Deep-Live-Cam/pull/917", "file_loc": {"base_commit": "87081e78d0175c79bab4f1b50d41a9741920e1c4", "files": [{"path": "requirements.txt", "status": "modified", "Loc": {"(None, None, None)": {"mod": [13]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}},
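For the torch/torchvision conflict in the record above, pip's own hint points at the fix: the two pins must carry matching local-version tags. A hedged sketch of an aligned requirements fragment, using the versions pip printed; whether the repo targets cu121 at all, and the exact surrounding lines, are assumptions:

```
# requirements.txt fragment (illustrative): keep the +cu121 tags in sync
--extra-index-url https://download.pytorch.org/whl/cu121
torch==2.5.1+cu121
torchvision==0.20.1+cu121
```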
\"face_swapper\"]\r\n>\r\n>if __name__ == '__main__':\r\n> print(\" ===============> \u5f00\u59cbrun \")\r\n> core.run()\r\n>\u201d\r\n# ------------------------ issues ------------------------\r\n\u5982\u679c--frame-processor\u9009\u62e9\u201csys.argv += [\"--frame-processor\", \"face_swapper\"]\u201d\u662f\u53ef\u4ee5\u6b63\u5e38\u6267\u884c\u3002\r\n\u5982\u679c--frame-processor\u53d8\u6210\u6a21\u5f0fsys.argv += [\"--frame-processor\", \"face_enhancer\"]\u3002\u5c31\u4f1a\u62a5\u9519\uff01\r\n\r\n\r\n\u4f46\u662f\u5982\u679c\u4f7f\u7528ui\u754c\u9762\u6765\u5904\u7406\uff0c\u6253\u5f00face_enhancer\u6a21\u5f0f\u5c31\u80fd\u6b63\u5e38\u8fd0\u884c\u3002\u60f3\u95ee\u662f\u4ec0\u4e48\u539f\u56e0\u5462\uff1f", "pr_html_url": "https://github.com/hacksider/Deep-Live-Cam/pull/773", "file_loc": {"base_commit": "48742826420e786266593252179a6ad94c3b7d48", "files": [{"path": "modules/processors/frame/face_enhancer.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [23], "mod": [14]}, "(None, 'pre_check', 25)": {"mod": [26]}, "(None, 'get_face_enhancer', 45)": {"mod": [50, 51, 52, 53, 54]}}}, {"path": "modules/processors/frame/face_swapper.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [17, 22], "mod": [13]}, "(None, 'pre_check', 24)": {"mod": [25]}, "(None, 'get_face_swapper', 52)": {"mod": [57]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["modules/processors/frame/face_enhancer.py", "modules/processors/frame/face_swapper.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "Textualize", "repo_name": "rich", "base_commit": "cefafdc12e0220d139c704522979a0dc9b3f889b", "iss_has_pr": 1, "iss_html_url": "https://github.com/Textualize/rich/issues/178", "iss_label": "bug\naccepted", "title": "[BUG] One trailing newline ignored by rich.print in some cases", "body": "Hi, thanks for this great library.\r\n\r\nNot sure if this is expected behavior or a bug. In certain cases, `rich.print` handles newlines in a slightly different manner than the `print` built-in.\r\n\r\nExample:\r\n```\r\n>>> for i in range(3): print('Hey' + '\\n' * i)\r\n... \r\nHey\r\nHey\r\n\r\nHey\r\n\r\n\r\n>>> from rich import print\r\n>>> for i in range(3): print('Hey' + '\\n' * i)\r\n... \r\nHey\r\nHey\r\nHey\r\n\r\n>>> \r\n```\r\nApparently, when the printed string contains at least one trailing newline (i.e. 
when `i == 1` or `i == 2`), one newline is ignored by `rich.print`.\r\n\r\nA screenshot of a similar example:\r\n![rich](https://user-images.githubusercontent.com/43098013/88477785-33616200-cf43-11ea-9b6a-647ad0080f8b.png)\r\n\r\n(rich 3.3.2, Python 3.8.0, GNOME Terminal 3.18.3 on Linux Mint 18.2 64-bit)\r\n", "pr_html_url": "https://github.com/Textualize/rich/pull/180", "file_loc": {"base_commit": "cefafdc12e0220d139c704522979a0dc9b3f889b", "files": [{"path": ".github/workflows/pythonpackage.yml", "status": "modified", "Loc": {"(None, None, None)": {"mod": [3, 9, 11]}}}, {"path": "CHANGELOG.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [7]}}}, {"path": "docs/source/reference/emoji.rst", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1, 2]}}}, {"path": "pyproject.toml", "status": "modified", "Loc": {"(None, None, None)": {"mod": [5]}}}, {"path": "rich/__main__.py", "status": "modified", "Loc": {"(None, 'make_test_card', 34)": {"mod": [78]}}}, {"path": "rich/_palettes.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [4], "mod": [8, 9, 10, 11, 12, 13]}}}, {"path": "rich/color.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0], "mod": [5, 6, 8]}, "('Color', 'get_ansi_codes', 384)": {"add": [394], "mod": [390]}, "('Color', 'get_truecolor', 289)": {"mod": [300, 301, 302, 303]}, "('Color', 'parse', 337)": {"mod": [350, 365]}, "('Color', 'downgrade', 405)": {"mod": [456]}}}, {"path": "rich/console.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [75]}, "('Console', '_detect_color_system', 392)": {"mod": [409, 410, 411, 412, 413]}, "('Console', 'export_text', 1044)": {"mod": [1049, 1067]}}}, {"path": "rich/markdown.py", "status": "modified", "Loc": {"('Heading', '__rich_console__', 146)": {"mod": [159]}, "('CodeBlock', '__rich_console__', 178)": {"mod": [183]}}}, {"path": "rich/palette.py", "status": "modified", "Loc": {"('Palette', 'match', 19)": {"mod": [31, 44, 45, 46]}, "('Palette', 'get_color_distance', 31)": {"mod": [33]}}}, {"path": "rich/progress.py", "status": "modified", "Loc": {"(None, 'iter_track', 57)": {"mod": [58, 74, 90]}, "('Progress', 'stop', 631)": {"mod": [652]}, "('Progress', 'track', 663)": {"mod": [710]}}}, {"path": "rich/syntax.py", "status": "modified", "Loc": {"('Syntax', '__rich_console__', 220)": {"add": [231]}}}, {"path": "rich/text.py", "status": "modified", "Loc": {"('Text', 'split', 761)": {"add": [765], "mod": [775]}, "('Text', None, 104)": {"mod": [761]}, "('Text', 'wrap', 860)": {"mod": [889, 890]}}}, {"path": "tests/_card_render.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1]}}}, {"path": "tests/_markdown.py", "status": "removed", "Loc": {}}, {"path": "tests/test_card.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0], "mod": [3, 5, 6, 7, 8]}}}, {"path": "tests/test_color.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [29, 139]}, "(None, 'test_truecolor', 30)": {"mod": [40]}, "(None, 'test_parse_success', 44)": {"mod": [47, 48, 49]}, "(None, 'test_get_ansi_codes', 85)": {"mod": [90, 91]}, "(None, 'test_downgrade', 96)": {"mod": [98, 121]}}}, {"path": "tests/test_console.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [2, 26], "mod": [7]}}}, {"path": "tests/test_log.py", "status": "modified", "Loc": {"(None, 'test_log', 29)": {"mod": [30]}}}, {"path": "tests/test_markdown.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [10], "mod": [3, 5, 7]}, "(None, 'test_markdown_render', 11)": {"mod": 
[14]}}}, {"path": "tests/test_markdown_no_hyperlinks.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [10], "mod": [3, 5, 7]}, "(None, 'test_markdown_render', 11)": {"mod": [14]}}}, {"path": "tests/test_progress.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [13, 45, 187]}}}, {"path": "tests/test_rich_print.py", "status": "modified", "Loc": {"(None, 'test_rich_print', 12)": {"add": [12], "mod": [19]}}}, {"path": "tests/test_rule.py", "status": "modified", "Loc": {"(None, 'test_rule', 10)": {"mod": [18, 19]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["rich/text.py", "rich/console.py", "rich/palette.py", "rich/__main__.py", "rich/_palettes.py", "tests/_card_render.py", "rich/syntax.py", "rich/color.py", "rich/progress.py", "rich/markdown.py", "tests/_markdown.py"], "doc": ["CHANGELOG.md", "docs/source/reference/emoji.rst"], "test": ["tests/test_console.py", "tests/test_color.py", "tests/test_log.py", "tests/test_progress.py", "tests/test_card.py", "tests/test_markdown_no_hyperlinks.py", "tests/test_rule.py", "tests/test_rich_print.py", "tests/test_markdown.py"], "config": [".github/workflows/pythonpackage.yml", "pyproject.toml"], "asset": []}}, {"organization": "Textualize", "repo_name": "rich", "base_commit": "489fafc63e4ab85cacde60ade1a15099d6c08ca8", "iss_has_pr": 1, "iss_html_url": "https://github.com/Textualize/rich/issues/2150", "iss_label": "Needs triage", "title": "[BUG] ImportError OrderedDict", "body": "You may find a solution to your problem in the [docs](https://rich.readthedocs.io/en/latest/introduction.html) or [issues](https://github.com/willmcgugan/rich/issues).\r\n\r\nFirst: Thanks for your awesome project and the work you take to make devs lifes better :)!\r\n**Describe the bug**\r\n\r\nSome updates seem to have mixed up dependencies.\r\n```bash\r\n> rich README.md \r\nTraceback (most recent call last):\r\n File \"/home/graeter/.local/bin/rich\", line 5, in <module>\r\n from rich_cli.__main__ import run\r\n File \"/home/graeter/.local/lib/python3.8/site-packages/rich_cli/__main__.py\", line 7, in <module>\r\n from rich.console import Console, RenderableType\r\n File \"/home/graeter/.local/lib/python3.8/site-packages/rich/console.py\", line 46, in <module>\r\n from ._log_render import FormatTimeCallable, LogRender\r\n File \"/home/graeter/.local/lib/python3.8/site-packages/rich/_log_render.py\", line 5, in <module>\r\n from .text import Text, TextType\r\n File \"/home/graeter/.local/lib/python3.8/site-packages/rich/text.py\", line 5, in <module>\r\n from rich.emoji import EmojiVariant\r\n File \"/home/graeter/.local/lib/python3.8/site-packages/rich/emoji.py\", line 4, in <module>\r\n from .jupyter import JupyterMixin\r\n File \"/home/graeter/.local/lib/python3.8/site-packages/rich/jupyter.py\", line 4, in <module>\r\n from .segment import Segment\r\n File \"/home/graeter/.local/lib/python3.8/site-packages/rich/segment.py\", line 19, in <module>\r\n from .cells import (\r\n File \"/home/graeter/.local/lib/python3.8/site-packages/rich/cells.py\", line 6, in <module>\r\n from ._lru_cache import LRUCache\r\n File \"/home/graeter/.local/lib/python3.8/site-packages/rich/_lru_cache.py\", line 8, in <module>\r\n from typing_extensions import OrderedDict\r\nImportError: cannot import name 'OrderedDict' from 'typing_extensions' 
(/home/graeter/.local/lib/python3.8/site-packages/typing_extensions.py)\r\n```\r\nCan you point me to a working configuration?\r\nUp to now I used rich very happily and would miss it a lot ;)\r\n\r\n**Platform**\r\n- Ubuntu 20.04\r\n- python 3.8.10\r\n- pip 22.0.4\r\n- zsh with starship\r\n\r\n<details>\r\n```\r\npython -m rich.diagnose\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.8/runpy.py\", line 194, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"/usr/lib/python3.8/runpy.py\", line 87, in _run_code\r\n exec(code, run_globals)\r\n File \"/home/graeter/.local/lib/python3.8/site-packages/rich/diagnose.py\", line 5, in <module>\r\n from rich.console import Console, get_windows_console_features\r\n File \"/home/graeter/.local/lib/python3.8/site-packages/rich/console.py\", line 46, in <module>\r\n from ._log_render import FormatTimeCallable, LogRender\r\n File \"/home/graeter/.local/lib/python3.8/site-packages/rich/_log_render.py\", line 5, in <module>\r\n from .text import Text, TextType\r\n File \"/home/graeter/.local/lib/python3.8/site-packages/rich/text.py\", line 5, in <module>\r\n from rich.emoji import EmojiVariant\r\n File \"/home/graeter/.local/lib/python3.8/site-packages/rich/emoji.py\", line 4, in <module>\r\n from .jupyter import JupyterMixin\r\n File \"/home/graeter/.local/lib/python3.8/site-packages/rich/jupyter.py\", line 4, in <module>\r\n from .segment import Segment\r\n File \"/home/graeter/.local/lib/python3.8/site-packages/rich/segment.py\", line 19, in <module>\r\n from .cells import (\r\n File \"/home/graeter/.local/lib/python3.8/site-packages/rich/cells.py\", line 6, in <module>\r\n from ._lru_cache import LRUCache\r\n File \"/home/graeter/.local/lib/python3.8/site-packages/rich/_lru_cache.py\", line 8, in <module>\r\n from typing_extensions import OrderedDict\r\nImportError: cannot import name 'OrderedDict' from 'typing_extensions' (/home/graeter/.local/lib/python3.8/site-packages/typing_extensions.py)\r\n\r\npip freeze | grep rich\r\nrich==12.1.0\r\nrich-cli==1.6.1\r\nrich-rst==1.1.7\r\n```\r\n\r\n</details>\r\n", "pr_html_url": "https://github.com/Textualize/rich/pull/2157", "file_loc": {"base_commit": "489fafc63e4ab85cacde60ade1a15099d6c08ca8", "files": [{"path": "poetry.lock", "status": "modified", "Loc": {"(None, None, None)": {"mod": [588, 1068, 1468, 1469]}}}, {"path": "pyproject.toml", "status": "modified", "Loc": {"(None, None, None)": {"mod": [30]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": [], "test": [], "config": ["poetry.lock", "pyproject.toml"], "asset": []}}, {"organization": "Textualize", "repo_name": "rich", "base_commit": "42988b834f9c76b63145dd8d8142a94243e71375", "iss_has_pr": 1, "iss_html_url": "https://github.com/Textualize/rich/issues/2566", "iss_label": "Needs triage", "title": "Drop Python 3.6 as a supported version of Python", "body": "Time to drop Python 3.6 as a version of Python supported by Rich. 
The reasons for doing so include:\r\n\r\n- Python 3.6 [reached end-of-life on 2021-12-03](https://devguide.python.org/versions/)\r\n- The Poetry installer used for our GitHub actions [recently dropped 3.6](https://github.com/python-poetry/install.python-poetry.org).\r\n\r\nActions to take include:\r\n\r\n- [x] Consider the significance of [this TODO in `_null_file.py`](https://github.com/Textualize/rich/blob/84e628655a2981ee90413ca3f35001ec3954161d/rich/_null_file.py#L7).\r\n- [x] Drop `dataclasses` as a dependency.\r\n- [x] Look at dropping [the special-casing of `isascii` in `rule.py`](https://github.com/Textualize/rich/blob/84e628655a2981ee90413ca3f35001ec3954161d/rich/rule.py#L54).\r\n- [x] Drop Python 3.6 from `pythonpackage.yml`.\r\n- [x] Drop mention of Python 3.6 in `pyproject.toml` -> `[tool.poetry]` -> `classifiers`.\r\n- [x] Bump the major version of Rich.\r\n- [x] Update the `Compatibility` section of `README.md` (and all translations).\r\n", "pr_html_url": "https://github.com/Textualize/rich/pull/2567", "file_loc": {"base_commit": "42988b834f9c76b63145dd8d8142a94243e71375", "files": [{"path": ".github/workflows/pythonpackage.yml", "status": "modified", "Loc": {"(None, None, None)": {"mod": [11]}}}, {"path": "CHANGELOG.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [7]}}}, {"path": "README.md", "status": "modified", "Loc": {"(None, None, None)": {"mod": [41]}}}, {"path": "pyproject.toml", "status": "modified", "Loc": {"(None, None, None)": {"mod": [5, 18, 30, 32, 47]}}}, {"path": "rich/_null_file.py", "status": "modified", "Loc": {"('NullFile', None, 5)": {"mod": [7, 9, 10, 11, 13, 14, 15, 17, 18]}}}, {"path": "rich/highlighter.py", "status": "modified", "Loc": {"('ReprHighlighter', None, 80)": {"mod": [85]}}}, {"path": "rich/rule.py", "status": "modified", "Loc": {"('Rule', '__rich_console__', 49)": {"mod": [54, 55, 56, 57, 60]}}}, {"path": "tests/test_null_file.py", "status": "modified", "Loc": {"(None, 'test_null_file', 4)": {"mod": [8, 9, 10]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["rich/highlighter.py", "rich/rule.py", "rich/_null_file.py"], "doc": ["README.md", "CHANGELOG.md"], "test": ["tests/test_null_file.py"], "config": ["pyproject.toml", ".github/workflows/pythonpackage.yml"], "asset": []}}, {"organization": "Textualize", "repo_name": "rich", "base_commit": "c478588f3c228a4e86741a057c42b452d7bc6bce", "iss_has_pr": 1, "iss_html_url": "https://github.com/Textualize/rich/issues/3027", "iss_label": "Needs triage", "title": "[BUG] Extra Space above Certain Markdown Tables", "body": "- [x] I've checked [docs](https://rich.readthedocs.io/en/latest/introduction.html) and [closed issues](https://github.com/Textualize/rich/issues?q=is%3Aissue+is%3Aclosed) for possible solutions.\r\n- [x] I can't find my issue in the [FAQ](https://github.com/Textualize/rich/blob/master/FAQ.md).\r\n\r\n**Describe the bug**\r\nCertain markdown tables contain extra newlines above them in Rich 13.4.2.\r\n\r\n```python\r\nfrom rich.console import Console\r\nfrom rich.markdown import Markdown\r\n\r\nMD = \"\"\"\r\n| Temperature | | | | | |\r\n|--------------:|:-------|:-------|:-------|:-------|:----------|\r\n| 0.01 | sam | sam | sam | sam | sam |\r\n| 0.1 | sam | sam | sam | sam | sam |\r\n| 0.25 | sam | sam | sam | sammy | sammy |\r\n| 0.5 | lilly | sam | sammy | sammy | taffy |\r\n| 0.75 | bambi | lola | 
snoopy | taffy | taz |\r\n| 0.9 | bella | harper | millie | molly | sweetie |\r\n| 1 | Anna | molly | shaker | sydney | wheessie |\r\n| 1.25 | Finley | funny | gertie | gladi | road kill |\r\n\"\"\".strip()\r\n\r\nconsole = Console()\r\nmarkdown = Markdown(MD)\r\n\r\nprint('--')\r\nconsole.print(markdown)\r\nprint('--')\r\n```\r\n![image](https://github.com/Textualize/rich/assets/394709/e4d49cad-109e-4ff5-9af6-065b3a91f70c)\r\n\r\n**Platform**\r\n<details>\r\n<summary>Click to expand</summary>\r\n\r\nWindows 10.\r\n\r\n\r\n```\r\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 <class 'rich.console.Console'> \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\r\n\u2502 A high level console interface. \u2502\r\n\u2502 \u2502\r\n\u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u2502\r\n\u2502 \u2502 <console width=148 ColorSystem.WINDOWS> \u2502 \u2502\r\n\u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2502\r\n\u2502 \u2502\r\n\u2502 color_system = 'windows' \u2502\r\n\u2502 encoding = 'utf-8' \u2502\r\n\u2502 file = <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'> \u2502\r\n\u2502 height = 56 \u2502\r\n\u2502 is_alt_screen = False \u2502\r\n\u2502 is_dumb_terminal = False \u2502\r\n\u2502 is_interactive = True \u2502\r\n\u2502 is_jupyter = False \u2502\r\n\u2502 is_terminal = True \u2502\r\n\u2502 legacy_windows = True \u2502\r\n\u2502 no_color = False \u2502\r\n\u2502 options = ConsoleOptions( \u2502\r\n\u2502 size=ConsoleDimensions(width=148, height=56), \u2502\r\n\u2502 legacy_windows=True, \u2502\r\n\u2502 min_width=1, \u2502\r\n\u2502 max_width=148, \u2502\r\n\u2502 is_terminal=True, \u2502\r\n\u2502 encoding='utf-8', \u2502\r\n\u2502 max_height=56, \u2502\r\n\u2502 justify=None, \u2502\r\n\u2502 overflow=None, \u2502\r\n\u2502 no_wrap=False, \u2502\r\n\u2502 highlight=None, \u2502\r\n\u2502 markup=None, \u2502\r\n\u2502 height=None \u2502\r\n\u2502 ) \u2502\r\n\u2502 quiet = False \u2502\r\n\u2502 record = False \u2502\r\n\u2502 safe_box = True \u2502\r\n\u2502 size = ConsoleDimensions(width=148, height=56) \u2502\r\n\u2502 soft_wrap = False \u2502\r\n\u2502 stderr = False \u2502\r\n\u2502 style = None \u2502\r\n\u2502 tab_size = 8 \u2502\r\n\u2502 width = 148 
\u2502\r\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\r\n\u250c\u2500\u2500\u2500 <class 'rich._windows.WindowsConsoleFeatures'> \u2500\u2500\u2500\u2500\u2510\r\n\u2502 Windows features available. \u2502\r\n\u2502 \u2502\r\n\u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u2502\r\n\u2502 \u2502 WindowsConsoleFeatures(vt=False, truecolor=False) \u2502 \u2502\r\n\u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2502\r\n\u2502 \u2502\r\n\u2502 truecolor = False \u2502\r\n\u2502 vt = False \u2502\r\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\r\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500 Environment Variables \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\r\n\u2502 { \u2502\r\n\u2502 'TERM': None, \u2502\r\n\u2502 'COLORTERM': None, \u2502\r\n\u2502 'CLICOLOR': None, \u2502\r\n\u2502 'NO_COLOR': None, \u2502\r\n\u2502 'TERM_PROGRAM': None, \u2502\r\n\u2502 'COLUMNS': None, \u2502\r\n\u2502 'LINES': None, \u2502\r\n\u2502 'JUPYTER_COLUMNS': None, \u2502\r\n\u2502 'JUPYTER_LINES': None, \u2502\r\n\u2502 'JPY_PARENT_PID': None, \u2502\r\n\u2502 'VSCODE_VERBOSE_LOGGING': None \u2502\r\n\u2502 } \u2502\r\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\r\nplatform=\"Windows\"\r\n```\r\n\r\n\r\n</details>\r\n", "pr_html_url": "https://github.com/Textualize/rich/pull/3469", "file_loc": {"base_commit": "c478588f3c228a4e86741a057c42b452d7bc6bce", "files": [{"path": "CHANGELOG.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [21]}}}, {"path": "rich/markdown.py", "status": "modified", "Loc": {"('Markdown', '__rich_console__', 569)": {"mod": [680]}}}, {"path": "tests/test_markdown.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [176]}, "(None, 'test_markdown_render', 99)": {"mod": [102]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["rich/markdown.py"], "doc": ["CHANGELOG.md"], "test": ["tests/test_markdown.py"], "config": [], 
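A minimal regression sketch for the table-spacing bug in the record above (a hypothetical check, assuming a recorded `Console`; the fix itself landed in `rich/markdown.py` via PR #3469):

```python
from rich.console import Console
from rich.markdown import Markdown

MD = "| a | b |\n|--:|:--|\n| 1 | 2 |"

console = Console(record=True, width=40)
console.print(Markdown(MD))
text = console.export_text()

# Before the fix, the rendered table was preceded by a spurious blank line;
# afterwards, the first recorded line should already be the table border.
assert text.splitlines()[0].strip() != ""
```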
"asset": []}}, {"organization": "Textualize", "repo_name": "rich", "base_commit": "9c0f164f8bbb8811f6e3ef8a69ac77c5e4464f36", "iss_has_pr": 1, "iss_html_url": "https://github.com/Textualize/rich/issues/2668", "iss_label": "Needs triage", "title": "[BUG] rich.live does not redirect stdout with fileno", "body": "**Describe the bug**\r\n\r\nWhen using `rich.live.Live` with default settings (`redirect_stdout=True`), `sys.stdout` does not have a `fileno` which breaks some stdlib python code which expects it, for example:\r\n\r\n```python\r\nfrom rich.live import Live\r\nimport subprocess\r\nimport sys\r\n\r\nwith Live():\r\n subprocess.Popen([\"echo hello world\"], stdout=sys.stdout).communicate()\r\n```\r\n\r\nwhich errors with\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/Users/kratsg/mario-mapyde/live.py\", line 6, in <module>\r\n subprocess.Popen([\"echo hello world\"], stdout=sys.stdout).communicate()\r\n File \"/usr/local/Cellar/python@3.9/3.9.15/Frameworks/Python.framework/Versions/3.9/lib/python3.9/subprocess.py\", line 829, in __init__\r\n errread, errwrite) = self._get_handles(stdin, stdout, stderr)\r\n File \"/usr/local/Cellar/python@3.9/3.9.15/Frameworks/Python.framework/Versions/3.9/lib/python3.9/subprocess.py\", line 1598, in _get_handles\r\n c2pwrite = stdout.fileno()\r\nio.UnsupportedOperation: fileno\r\n```\r\n\r\n**Platform**\r\n<details>\r\n<summary>Click to expand</summary>\r\n\r\n```\r\n\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 <class 'rich.console.Console'> \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\r\n\u2502 A high level console interface. 
\u2502\r\n\u2502 \u2502\r\n\u2502 \u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e \u2502\r\n\u2502 \u2502 <console width=119 ColorSystem.EIGHT_BIT> \u2502 \u2502\r\n\u2502 \u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f \u2502\r\n\u2502 \u2502\r\n\u2502 color_system = '256' \u2502\r\n\u2502 encoding = 'utf-8' \u2502\r\n\u2502 file = <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'> \u2502\r\n\u2502 height = 61 \u2502\r\n\u2502 is_alt_screen = False \u2502\r\n\u2502 is_dumb_terminal = False \u2502\r\n\u2502 is_interactive = True \u2502\r\n\u2502 is_jupyter = False \u2502\r\n\u2502 is_terminal = True \u2502\r\n\u2502 legacy_windows = False \u2502\r\n\u2502 no_color = False \u2502\r\n\u2502 options = ConsoleOptions( \u2502\r\n\u2502 size=ConsoleDimensions(width=119, height=61), \u2502\r\n\u2502 legacy_windows=False, \u2502\r\n\u2502 min_width=1, \u2502\r\n\u2502 max_width=119, \u2502\r\n\u2502 is_terminal=True, \u2502\r\n\u2502 encoding='utf-8', \u2502\r\n\u2502 max_height=61, \u2502\r\n\u2502 justify=None, \u2502\r\n\u2502 overflow=None, \u2502\r\n\u2502 no_wrap=False, \u2502\r\n\u2502 highlight=None, \u2502\r\n\u2502 markup=None, \u2502\r\n\u2502 height=None \u2502\r\n\u2502 ) \u2502\r\n\u2502 quiet = False \u2502\r\n\u2502 record = False \u2502\r\n\u2502 safe_box = True \u2502\r\n\u2502 size = ConsoleDimensions(width=119, height=61) \u2502\r\n\u2502 soft_wrap = False \u2502\r\n\u2502 stderr = False \u2502\r\n\u2502 style = None \u2502\r\n\u2502 tab_size = 8 \u2502\r\n\u2502 width = 119 \u2502\r\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\r\n\u256d\u2500\u2500\u2500 <class 'rich._windows.WindowsConsoleFeatures'> \u2500\u2500\u2500\u2500\u256e\r\n\u2502 Windows features available. 
\u2502\r\n\u2502 \u2502\r\n\u2502 \u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e \u2502\r\n\u2502 \u2502 WindowsConsoleFeatures(vt=False, truecolor=False) \u2502 \u2502\r\n\u2502 \u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f \u2502\r\n\u2502 \u2502\r\n\u2502 truecolor = False \u2502\r\n\u2502 vt = False \u2502\r\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\r\n\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 Environment Variables \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\r\n\u2502 { \u2502\r\n\u2502 'TERM': 'xterm-256color', \u2502\r\n\u2502 'COLORTERM': None, \u2502\r\n\u2502 'CLICOLOR': None, \u2502\r\n\u2502 'NO_COLOR': None, \u2502\r\n\u2502 'TERM_PROGRAM': 'Apple_Terminal', \u2502\r\n\u2502 'COLUMNS': None, \u2502\r\n\u2502 'LINES': None, \u2502\r\n\u2502 'JUPYTER_COLUMNS': None, \u2502\r\n\u2502 'JUPYTER_LINES': None, \u2502\r\n\u2502 'JPY_PARENT_PID': None, \u2502\r\n\u2502 'VSCODE_VERBOSE_LOGGING': None \u2502\r\n\u2502 } \u2502\r\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\r\nplatform=\"Darwin\"\r\n```\r\n\r\n```\r\nrich==12.6.0\r\n```\r\n\r\n</details>\r\n\r\n**What am I trying to do?**\r\n<details>\r\n<summary>Click to expand</summary>\r\n\r\nWell, I was following something like [this SO post](https://stackoverflow.com/questions/71077706/redirect-print-and-or-logging-to-panel) and #1720 to run subprocess, and have the output of `sys.stdout` redirect to *some* renderable. Code that does not error, but also does not do as expected (due to python picking up system `stdout` rather than the redirected `stdout` is similar:\r\n\r\n```python\r\nfrom rich.live import Live\r\nimport subprocess\r\nimport sys\r\n\r\nwith Live(screen=True):\r\n subprocess.Popen([\"echo hello world\"]).communicate()\r\n```\r\n\r\nbut this puts the output on the original screen, not the alternate screen. Perhaps there is a better way to pass in a file handler through `subprocess` to auto-redirect into a `Console` or similar, but it's not obvious to me how this can be done. 
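One way out of the `fileno` trap discussed above is to have the redirecting proxy delegate `fileno()` to the stream it wraps; the fix for this issue (PR #2683) touches `rich/file_proxy.py` along those lines. Below is a hedged standalone sketch of the pattern, using a hypothetical `FilenoProxy` class rather than Rich's actual code:

```python
import sys
from typing import IO


class FilenoProxy:
    """Hypothetical stdout wrapper that keeps stdlib callers working."""

    def __init__(self, wrapped: IO[str]) -> None:
        self._wrapped = wrapped

    def write(self, text: str) -> int:
        return self._wrapped.write(text)

    def flush(self) -> None:
        self._wrapped.flush()

    def fileno(self) -> int:
        # Delegate to the real file so code such as
        # subprocess.Popen(..., stdout=sys.stdout) gets an OS-level handle.
        return self._wrapped.fileno()


sys.stdout = FilenoProxy(sys.__stdout__)
print("hello")  # still works; subprocess can now call sys.stdout.fileno()
```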
This is the only way I can think of:\r\n\r\n```python\r\nfrom rich.console import Console\r\nimport os\r\n\r\nclass ConsolePanel(Console):\r\n def __init__(self,*args,**kwargs):\r\n console_file = open(os.devnull,'w')\r\n super().__init__(record=True,file=console_file,*args,**kwargs)\r\n\r\n def __rich_console__(self,console,options):\r\n texts = self.export_text(clear=False).split('\\n')\r\n for line in texts[-options.height:]:\r\n yield line\r\n\r\nif __name__=='__main__':\r\n from rich.layout import Layout\r\n from rich.live import Live\r\n import time\r\n from datetime import datetime\r\n import subprocess\r\n\r\n class Interface():\r\n def __init__(self) -> None:\r\n self.console:list[ConsolePanel] = [ConsolePanel() for _ in range(2)]\r\n\r\n def get_renderable(self):\r\n layout = Layout()\r\n layout.split_column(\r\n Layout(self.console[0],name='top'),\r\n Layout(self.console[1],name='bottom',size=6)\r\n )\r\n layout.children[0]\r\n return layout\r\n\r\n # comment out the below line to get wildly different behavior\r\n proc = subprocess.Popen([\"watch\", \"-n1\", \"echo hello world\"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)\r\n\r\n db = Interface()\r\n with Live(get_renderable=db.get_renderable):\r\n while True:\r\n time.sleep(1)\r\n db.console[0].print(datetime.now().ctime()+'='*100)\r\n db.console[1].print(datetime.now().ctime())\r\n```\r\n\r\n</details>", "pr_html_url": "https://github.com/Textualize/rich/pull/2683", "file_loc": {"base_commit": "9c0f164f8bbb8811f6e3ef8a69ac77c5e4464f36", "files": [{"path": "CHANGELOG.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [18]}}}, {"path": "rich/file_proxy.py", "status": "modified", "Loc": {"('FileProxy', 'flush', 50)": {"add": [54]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["rich/file_proxy.py"], "doc": ["CHANGELOG.md"], "test": [], "config": [], "asset": []}}, {"organization": "Textualize", "repo_name": "rich", "base_commit": "9f620dc50c0008c35e9f8493f198e6e593574a70", "iss_has_pr": 1, "iss_html_url": "https://github.com/Textualize/rich/issues/3104", "iss_label": "Needs triage", "title": "[BUG] `font-family` ignored in `html_export` due to user agent stylesheet for `<code>`", "body": "- [X] I've checked [docs](https://rich.readthedocs.io/en/latest/introduction.html) and [closed issues](https://github.com/Textualize/rich/issues?q=is%3Aissue+is%3Aclosed) for possible solutions.\r\n- [X] I can't find my issue in the [FAQ](https://github.com/Textualize/rich/blob/master/FAQ.md).\r\n\r\n**Describe the bug**\r\n\r\nRun this code:\r\n\r\n```py\r\nimport rich.console\r\n\r\ntry:\r\n test = 1\r\n raise Exception()\r\nexcept Exception:\r\n console = rich.console.Console(record=True)\r\n console.print_exception(show_locals=True)\r\n html = console.export_html(inline_styles=True)\r\n with open(\"test.html\", \"w\") as html_file:\r\n html_file.write(html)\r\n```\r\n\r\nYou will get an `test.html` output file. Open it in Chrome.\r\n\r\nI'm on macOS, and it shows up like this:\r\n\r\n![image](https://github.com/Textualize/rich/assets/26592486/4b124132-b7a9-4156-bfd9-8912c65f2764)\r\n\r\n\r\nNotice the lines are not aligned properly on the right side. 
Here is why:\r\n\r\n![image](https://github.com/Textualize/rich/assets/26592486/8d6e13e6-2124-46e2-972d-1d4a31256615)\r\n\r\nAs you can see, Chrome's user agent stylesheet causes the `<code>` element to reset the `font-family` on the `<pre>` element back to `monospace`. All we need is to have Rich add a `font-family: inherit;` on the `<code>` element and everything is fine:\r\n\r\n![image](https://github.com/Textualize/rich/assets/26592486/ed1c2e6e-7d89-4d39-8301-cc92679458d9)\r\n\r\n**Platform**\r\n<details>\r\n<summary>Click to expand</summary>\r\n\r\nWhat platform (Win/Linux/Mac) are you running on? What terminal software are you using?\r\nMac with Chrome\r\n\r\n```\r\n\u276f python -m rich.diagnose\r\n\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 <class 'rich.console.Console'> \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\r\n\u2502 A high level console interface. \u2502\r\n\u2502 \u2502\r\n\u2502 \u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e \u2502\r\n\u2502 \u2502 <console width=148 ColorSystem.TRUECOLOR> \u2502 \u2502\r\n\u2502 \u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f \u2502\r\n\u2502 \u2502\r\n\u2502 color_system = 'truecolor' \u2502\r\n\u2502 encoding = 'utf-8' \u2502\r\n\u2502 file = <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'> \u2502\r\n\u2502 height = 87 \u2502\r\n\u2502 is_alt_screen = False \u2502\r\n\u2502 is_dumb_terminal = False \u2502\r\n\u2502 is_interactive = True \u2502\r\n\u2502 is_jupyter = False \u2502\r\n\u2502 is_terminal = True \u2502\r\n\u2502 legacy_windows = False \u2502\r\n\u2502 no_color = False \u2502\r\n\u2502 options = ConsoleOptions( \u2502\r\n\u2502 size=ConsoleDimensions(width=148, height=87), \u2502\r\n\u2502 legacy_windows=False, \u2502\r\n\u2502 min_width=1, \u2502\r\n\u2502 max_width=148, \u2502\r\n\u2502 is_terminal=True, \u2502\r\n\u2502 encoding='utf-8', \u2502\r\n\u2502 max_height=87, \u2502\r\n\u2502 justify=None, \u2502\r\n\u2502 overflow=None, \u2502\r\n\u2502 no_wrap=False, \u2502\r\n\u2502 highlight=None, \u2502\r\n\u2502 markup=None, \u2502\r\n\u2502 height=None \u2502\r\n\u2502 ) \u2502\r\n\u2502 quiet = False \u2502\r\n\u2502 record = False \u2502\r\n\u2502 safe_box = True \u2502\r\n\u2502 size = ConsoleDimensions(width=148, height=87) \u2502\r\n\u2502 soft_wrap = False \u2502\r\n\u2502 stderr = False \u2502\r\n\u2502 style = None \u2502\r\n\u2502 tab_size = 8 \u2502\r\n\u2502 width = 148 
\u2502\r\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\r\n\u256d\u2500\u2500\u2500 <class 'rich._windows.WindowsConsoleFeatures'> \u2500\u2500\u2500\u2500\u256e\r\n\u2502 Windows features available. \u2502\r\n\u2502 \u2502\r\n\u2502 \u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e \u2502\r\n\u2502 \u2502 WindowsConsoleFeatures(vt=False, truecolor=False) \u2502 \u2502\r\n\u2502 \u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f \u2502\r\n\u2502 \u2502\r\n\u2502 truecolor = False \u2502\r\n\u2502 vt = False \u2502\r\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\r\n\u256d\u2500\u2500\u2500\u2500\u2500\u2500 Environment Variables \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\r\n\u2502 { \u2502\r\n\u2502 'TERM': 'xterm-256color', \u2502\r\n\u2502 'COLORTERM': 'truecolor', \u2502\r\n\u2502 'CLICOLOR': None, \u2502\r\n\u2502 'NO_COLOR': None, \u2502\r\n\u2502 'TERM_PROGRAM': 'vscode', \u2502\r\n\u2502 'COLUMNS': None, \u2502\r\n\u2502 'LINES': None, \u2502\r\n\u2502 'JUPYTER_COLUMNS': None, \u2502\r\n\u2502 'JUPYTER_LINES': None, \u2502\r\n\u2502 'JPY_PARENT_PID': None, \u2502\r\n\u2502 'VSCODE_VERBOSE_LOGGING': None \u2502\r\n\u2502 } \u2502\r\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\r\nplatform=\"Darwin\"\r\n\r\n\u276f python -m pip freeze | grep rich\r\nrich==13.4.2\r\n```\r\n\r\n</details>\r\n", "pr_html_url": "https://github.com/Textualize/rich/pull/3105", "file_loc": {"base_commit": "9f620dc50c0008c35e9f8493f198e6e593574a70", "files": [{"path": "CHANGELOG.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [13]}}}, {"path": "CONTRIBUTORS.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [75]}}}, {"path": "rich/_export_format.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [15]}}}, {"path": "tests/test_console.py", "status": "modified", "Loc": {"(None, 'test_export_html', 527)": {"mod": [532]}, "(None, 'test_export_html_inline', 536)": {"mod": [541]}, "(None, 'test_save_html', 593)": {"mod": [594]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, 
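As the report above suggests, the eventual fix (PR #3105) adds `font-family: inherit;` to the `<code>` element in `rich/_export_format.py`. Until a fixed release, the export can be patched after the fact; a hedged sketch, assuming the exported markup contains a bare `<code>` tag as in 13.x output:

```python
import rich.console

console = rich.console.Console(record=True)
console.print("[bold red]hello[/bold red]")

html = console.export_html(inline_styles=True)
# Work around the user-agent stylesheet resetting font-family on <code>.
html = html.replace("<code", '<code style="font-family: inherit"', 1)

with open("test.html", "w") as html_file:
    html_file.write(html)
```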
"analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["rich/_export_format.py"], "doc": ["CONTRIBUTORS.md", "CHANGELOG.md"], "test": ["tests/test_console.py"], "config": [], "asset": []}}, {"organization": "Textualize", "repo_name": "rich", "base_commit": "a05a5a1c2f95f25db70ac3657e99f0bab652e2cd", "iss_has_pr": 1, "iss_html_url": "https://github.com/Textualize/rich/issues/1180", "iss_label": "Needs triage", "title": "[BUG] No `Optional` typing in args that accept `None` in `Console`.", "body": "Some arguments to `rich.console.Console`\u2014like `width`\u2014accept `None` as an argument and are documented as `Optional` but are typed as only `int`, raising some type checking errors if `width=None` is passed.\r\n\r\nhttps://github.com/willmcgugan/rich/blob/a05a5a1c2f95f25db70ac3657e99f0bab652e2cd/rich/console.py#L577\r\n", "pr_html_url": "https://github.com/Textualize/rich/pull/1182", "file_loc": {"base_commit": "a05a5a1c2f95f25db70ac3657e99f0bab652e2cd", "files": [{"path": "CONTRIBUTING.md", "status": "modified", "Loc": {"(None, None, None)": {"mod": [53]}}}, {"path": "Makefile", "status": "modified", "Loc": {"(None, None, None)": {"mod": [8, 10]}}}, {"path": "rich/__init__.py", "status": "modified", "Loc": {"(None, 'print', 45)": {"mod": [45]}, "(None, 'inspect', 63)": {"mod": [66, 67]}}}, {"path": "rich/_inspect.py", "status": "modified", "Loc": {"('Inspect', '__init__', 43)": {"mod": [47]}}}, {"path": "rich/_log_render.py", "status": "modified", "Loc": {"('LogRender', '__call__', 32)": {"mod": [36, 37, 39, 40, 41]}}}, {"path": "rich/_ratio.py", "status": "modified", "Loc": {"(None, 'ratio_distribute', 108)": {"mod": [109]}}}, {"path": "rich/align.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [2]}, "('Align', '__init__', 36)": {"mod": [40, 42, 44, 45]}, "('Align', 'left', 67)": {"mod": [70, 72, 74, 75]}, "('Align', 'center', 89)": {"mod": [92, 94, 96, 97]}, "('Align', 'right', 111)": {"mod": [114, 116, 118, 119]}, "('VerticalCenter', '__init__', 242)": {"mod": [245]}}}, {"path": "rich/bar.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1]}, "('Bar', '__init__', 29)": {"mod": [35]}}}, {"path": "rich/color.py", "status": "modified", "Loc": {"('Color', 'get_truecolor', 307)": {"mod": [308]}}}, {"path": "rich/columns.py", "status": "modified", "Loc": {"('Columns', '__init__', 31)": {"mod": [33, 36, 41, 42]}}}, {"path": "rich/console.py", "status": "modified", "Loc": {"('PagerContext', '__init__', 323)": {"mod": [326]}, "('ScreenContext', None, 354)": {"mod": [365]}, "('Console', '__init__', 563)": {"mod": [569, 570, 571, 573, 575, 577, 578, 579, 580, 590, 592, 593, 594]}, "('Console', 'pager', 955)": {"mod": [956]}, "('Console', 'screen', 1074)": {"mod": [1075]}, "('Console', 'render', 1088)": {"mod": [1089]}, "('Console', 'render_str', 1191)": {"mod": [1196, 1197, 1198, 1199, 1200, 1201]}, "('Console', 'get_style', 1243)": {"mod": [1244]}, "('Console', '_collect_renderables', 1273)": {"mod": [1279, 1280, 1281, 1282]}, "('Console', 'out', 1386)": {"mod": [1391, 1392]}, "('Console', 'print', 1418)": {"mod": [1423, 1424, 1425, 1426, 1427, 1428, 1429, 1430, 1431, 1433]}, "('Console', 'update_screen', 1508)": {"mod": [1512, 1513]}, "('Console', 'log', 1589)": {"mod": [1594, 1595, 1596, 1597, 1598]}, "('Console', 'input', 1730)": {"mod": [1737]}, "('Console', 'export_html', 1816)": {"mod": [1819, 1821]}, "('Console', 'save_html', 1895)": {"mod": [1899]}}}, {"path": "rich/containers.py", 
"status": "modified", "Loc": {"(None, None, None)": {"add": [5]}, "('Renderables', None, 28)": {"mod": [31]}}}, {"path": "rich/layout.py", "status": "modified", "Loc": {"('Layout', '__init__', 155)": {"mod": [157, 159, 160, 164]}}}, {"path": "rich/live.py", "status": "modified", "Loc": {"('Live', '__init__', 50)": {"mod": [52, 54, 62]}}}, {"path": "rich/logging.py", "status": "modified", "Loc": {"('RichHandler', '__init__', 58)": {"mod": [61, 68]}}}, {"path": "rich/markdown.py", "status": "modified", "Loc": {"('MarkdownContext', '__init__', 346)": {"mod": [351]}, "('Markdown', '__init__', 418)": {"mod": [422, 425, 426]}}}, {"path": "rich/measure.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [2]}, "('Measurement', None, 11)": {"mod": [59]}}}, {"path": "rich/panel.py", "status": "modified", "Loc": {"('Panel', '__init__', 38)": {"mod": [43]}, "('Panel', 'fit', 68)": {"mod": [73]}}}, {"path": "rich/pretty.py", "status": "modified", "Loc": {"(None, 'install', 44)": {"mod": [45, 49, 50]}, "('Pretty', '__init__', 154)": {"mod": [157, 160, 164, 165]}, "(None, 'traverse', 416)": {"mod": [416]}, "(None, 'pretty_repr', 587)": {"mod": [592, 593]}, "(None, 'pprint', 622)": {"mod": [625, 627, 628]}}}, {"path": "rich/progress.py", "status": "modified", "Loc": {"(None, 'track', 83)": {"mod": [90]}, "('ProgressColumn', None, 151)": {"mod": [156]}, "('RenderableColumn', None, 193)": {"mod": [200]}, "('SpinnerColumn', '__init__', 218)": {"mod": [224]}, "('TextColumn', '__init__', 261)": {"mod": [267, 268]}, "('BarColumn', '__init__', 299)": {"mod": [306]}, "('DownloadColumn', None, 375)": {"mod": [382]}, "('Progress', '__init__', 568)": {"mod": [571, 578]}, "('Progress', 'update', 729)": {"mod": [734, 735, 736, 737]}}}, {"path": "rich/progress_bar.py", "status": "modified", "Loc": {"('ProgressBar', '__init__', 33)": {"mod": [37, 43]}, "('ProgressBar', None, 18)": {"mod": [114]}}}, {"path": "rich/prompt.py", "status": "modified", "Loc": {"('PromptBase', '__init__', 53)": {"mod": [57, 59]}, "('PromptBase', 'ask', 77)": {"mod": [81, 83, 87]}, "('PromptBase', 'ask', 93)": {"mod": [97, 99, 102]}, "('PromptBase', 'ask', 107)": {"mod": [111, 113, 117]}, "('PromptBase', 'get_input', 186)": {"mod": [191]}, "('PromptBase', None, 30)": {"mod": [253, 262]}, "('PromptBase', '__call__', 257)": {"mod": [258]}}}, {"path": "rich/scope.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [2]}, "(None, 'render_scope', 14)": {"mod": [17, 20, 21]}}}, {"path": "rich/screen.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1]}, "('Screen', '__init__', 21)": {"mod": [22]}}}, {"path": "rich/segment.py", "status": "modified", "Loc": {"('Segment', 'apply_style', 82)": {"mod": [85, 86]}, "('Segment', 'split_and_crop_lines', 168)": {"mod": [172]}, "('Segment', 'adjust_line_length', 215)": {"mod": [216]}, "('Segment', 'set_shape', 282)": {"mod": [286, 287]}}}, {"path": "rich/spinner.py", "status": "modified", "Loc": {"('Spinner', '__init__', 14)": {"mod": [15]}}}, {"path": "rich/status.py", "status": "modified", "Loc": {"('Status', '__init__', 23)": {"mod": [27]}}}, {"path": "rich/style.py", "status": "modified", "Loc": {"('Style', '__init__', 93)": {"mod": [96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111]}, "('Style', None, 29)": {"mod": [179, 495, 578]}}}, {"path": "rich/syntax.py", "status": "modified", "Loc": {"('Syntax', '__init__', 224)": {"mod": [233, 234, 238]}, "('Syntax', 'from_path', 260)": {"mod": [267, 269, 273]}, "('Syntax', None, 190)": {"mod": 
[354]}}}, {"path": "rich/table.py", "status": "modified", "Loc": {"('Table', '__init__', 151)": {"mod": [154, 155, 156, 157, 170, 173, 174, 175]}, "('Table', 'add_column', 328)": {"mod": [333, 334, 335, 338, 339, 340, 341]}, "('Table', 'add_row', 379)": {"mod": [382]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["rich/columns.py", "rich/scope.py", "rich/console.py", "rich/bar.py", "rich/layout.py", "rich/pretty.py", "rich/spinner.py", "rich/style.py", "rich/_inspect.py", "rich/align.py", "rich/logging.py", "rich/table.py", "rich/screen.py", "rich/__init__.py", "rich/syntax.py", "rich/segment.py", "rich/progress_bar.py", "rich/live.py", "rich/color.py", "rich/panel.py", "rich/progress.py", "rich/measure.py", "rich/containers.py", "rich/_log_render.py", "rich/prompt.py", "rich/markdown.py", "rich/_ratio.py", "rich/status.py"], "doc": ["CONTRIBUTING.md"], "test": [], "config": ["Makefile"], "asset": []}}, {"organization": "Textualize", "repo_name": "rich", "base_commit": "aa7926c1431eebfb2ccaab9f3b63a4ac6cd8dfe6", "iss_has_pr": 1, "iss_html_url": "https://github.com/Textualize/rich/issues/2291", "iss_label": "bug", "title": "[BUG] Invalid markup in a ProgressBar causes the entire Python script to exit abnormally.", "body": "**Describe the bug**\r\n\r\n**NOTE: I found some more details on this. The issue isn't that an exception isn't raised, it's that you can't ever see any console output from that exception. Alternate screen issue?** See EDIT below. Original bug report follows.\r\n\r\nIf you try to create a ProgressBar object, and within your fields you have some invalid markup (example: a closing [/color] tag without a corresponding opening tag), starting the progress bar with `start()` will *cause the entire Python interpreter to exit with return code 1*. \r\n\r\nNo error message is printed. Wrapping the code in a try/except block does not trap the error, the entire script still exits.\r\n\r\nThe following is a minimal working example illustrating the bug.\r\n\r\n from rich.progress import (\r\n BarColumn,\r\n Progress,\r\n TaskID,\r\n TextColumn,\r\n TimeRemainingColumn,\r\n )\r\n\r\n import time\r\n\r\n def test():\r\n\r\n print(\"I will now make Rich fail horribly...\")\r\n\r\n pbar = Progress(\r\n \"[green]Status\",\r\n TimeRemainingColumn(),\r\n \"Running[/red]\" # THIS LINE HAS INVALID MARKUP. IT WILL CAUSE THE SCRIPT TO CRASH WITH NO ERROR.\r\n )\r\n\r\n task_id = pbar.add_task(\"test\",start=False, total=10)\r\n\r\n # PROGRAM EXECUTION ABORTS HERE WITH NO ERROR MESSAGES.\r\n # Python interpreter returns code 1.\r\n pbar.start()\r\n\r\n print(\"We should make it here, but we don't.\")\r\n\r\n for _ in range(10):\r\n time.sleep()\r\n pbar.update(task_id, advance=1)\r\n\r\n pbar.stop()\r\n\r\n if __name__ == \"__main__\":\r\n\r\n # Even wrapping the test in a try/catch block does not prevent Python from exiting!\r\n try:\r\n test()\r\n except Exception as e:\r\n print(f\"I caught an exception! {e}\") # This is NOT called, NO exception is raised.\r\n\r\n print(\"I made it through the test!\") # This is also NEVER reached. 
The script EXITS when pbar.start() is called.\r\n\r\nExample run (Not much to see...):\r\n\r\n dev@devbox:~$ python3 richbug.py\r\n dev@devbox:~$ echo $?\r\n 1\r\n dev@devbox:~$\r\n\r\nNote that I have not tested this further to determine if it happens in other areas of Rich, but I know for sure it happens with ProgressBar.\r\n\r\n**What should happen?**\r\n\r\nIf there's invalid markup, a normal exception should get thrown somewhere. \r\n\r\nEven if for some reason the app needs to fully exit, printing an error message would still be useful. I spent over an hour tracking down what I thought was a bug or a forgotten exit() call in my own code before realizing the exact line where things failed was `pbar.start()`.\r\n\r\nI have a *suspicion* that this might have to do with the alternate screen - perhaps an exception is printed but it's done on the alternate screen so you never see it? I haven't spent much time looking at Rich's code, but I'd imagine perhaps wrapping code in try blocks with code to exit the alternate screen followed by re-raising the exception might work?\r\n\r\nOne more point: after the code exits, the cursor is missing - I have to use `reset` to bring it back. Again, suggests that we're switching into the alternate screen, crashing and then not getting back out to print errors.\r\n\r\n**EDIT: I discovered this is indeed the case. If I add the line `open(\"exception.txt\",\"w\").write(str(e))` to the except block, the exception does get printed and indeed does include the correct markup error. So therefore this bug should perhaps be named \"App does not exit alternate screen before crashing\"?**\r\n\r\n**Platform**\r\n\r\n```\r\ndev@devbox:~$ python -m rich.diagnose\r\n\r\n \u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 <class 'rich.console.Console'> \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\r\n \u2502 A high level console interface. 
\u2502\r\n \u2502 \u2502\r\n \u2502 \u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e \u2502\r\n \u2502 \u2502 <console width=148 ColorSystem.STANDARD> \u2502 \u2502\r\n \u2502 \u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f \u2502\r\n \u2502 \u2502\r\n \u2502 color_system = 'standard' \u2502\r\n \u2502 encoding = 'utf-8' \u2502\r\n \u2502 file = <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'> \u2502\r\n \u2502 height = 30 \u2502\r\n \u2502 is_alt_screen = False \u2502\r\n \u2502 is_dumb_terminal = False \u2502\r\n \u2502 is_interactive = True \u2502\r\n \u2502 is_jupyter = False \u2502\r\n \u2502 is_terminal = True \u2502\r\n \u2502 legacy_windows = False \u2502\r\n \u2502 no_color = False \u2502\r\n \u2502 options = ConsoleOptions( \u2502\r\n \u2502 size=ConsoleDimensions(width=148, height=30), \u2502\r\n \u2502 legacy_windows=False, \u2502\r\n \u2502 min_width=1, \u2502\r\n \u2502 max_width=148, \u2502\r\n \u2502 is_terminal=True, \u2502\r\n \u2502 encoding='utf-8', \u2502\r\n \u2502 max_height=30, \u2502\r\n \u2502 justify=None, \u2502\r\n \u2502 overflow=None, \u2502\r\n \u2502 no_wrap=False, \u2502\r\n \u2502 highlight=None, \u2502\r\n \u2502 markup=None, \u2502\r\n \u2502 height=None \u2502\r\n \u2502 ) \u2502\r\n \u2502 quiet = False \u2502\r\n \u2502 record = False \u2502\r\n \u2502 safe_box = True \u2502\r\n \u2502 size = ConsoleDimensions(width=148, height=30) \u2502\r\n \u2502 soft_wrap = False \u2502\r\n \u2502 stderr = False \u2502\r\n \u2502 style = None \u2502\r\n \u2502 tab_size = 8 \u2502\r\n \u2502 width = 148 \u2502\r\n \u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\r\n \u256d\u2500\u2500\u2500 <class 'rich._windows.WindowsConsoleFeatures'> \u2500\u2500\u2500\u2500\u256e\r\n \u2502 Windows features available. 
\u2502\r\n \u2502 \u2502\r\n \u2502 \u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e \u2502\r\n \u2502 \u2502 WindowsConsoleFeatures(vt=False, truecolor=False) \u2502 \u2502\r\n \u2502 \u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f \u2502\r\n \u2502 \u2502\r\n \u2502 truecolor = False \u2502\r\n \u2502 vt = False \u2502\r\n \u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\r\n \u256d\u2500\u2500\u2500\u2500\u2500\u2500 Environment Variables \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\r\n \u2502 { \u2502\r\n \u2502 'TERM': 'screen', \u2502\r\n \u2502 'COLORTERM': None, \u2502\r\n \u2502 'CLICOLOR': None, \u2502\r\n \u2502 'NO_COLOR': None, \u2502\r\n \u2502 'TERM_PROGRAM': None, \u2502\r\n \u2502 'COLUMNS': None, \u2502\r\n \u2502 'LINES': None, \u2502\r\n \u2502 'JPY_PARENT_PID': None, \u2502\r\n \u2502 'VSCODE_VERBOSE_LOGGING': None \u2502\r\n \u2502 } \u2502\r\n \u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\r\n platform=\"Linux\"\r\n\r\ndev@devbox:~$ pip freeze | grep rich\r\nrich==12.4.4\r\n```\r\n\r\nThe above was run while SSH'ed into the devbox from Windows terminal. Same issue will occur no matter what client is being used though. Happens no matter how I run the code, whether it be on a local terminal, via SSH, etc. 
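The fix that landed (PR #2305) makes `Live.start` in `rich/live.py` defensive about render errors. The same idea can be applied at the call site; a hedged sketch of the pattern, not the library change itself: stop the live display before the exception propagates, so the markup-error traceback is printed on the normal screen with the cursor restored:

```python
from rich.progress import Progress

progress = Progress("Running[/red]")  # invalid markup: stray closing tag

try:
    progress.start()
except Exception:
    # Leave the alternate screen / restore the cursor first, then re-raise
    # so the markup error is actually visible in the terminal.
    progress.stop()
    raise
```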
\r\n\r\nDevbox is running Ubuntu Linux 22.04.\r\n", "pr_html_url": "https://github.com/Textualize/rich/pull/2305", "file_loc": {"base_commit": "aa7926c1431eebfb2ccaab9f3b63a4ac6cd8dfe6", "files": [{"path": "CHANGELOG.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [7]}}}, {"path": "rich/live.py", "status": "modified", "Loc": {"('Live', 'start', 104)": {"mod": [121]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["rich/live.py"], "doc": ["CHANGELOG.md"], "test": [], "config": [], "asset": []}}, {"organization": "Textualize", "repo_name": "rich", "base_commit": "a972ca05522577de2f98eb7c957deead9c87b38f", "iss_has_pr": 1, "iss_html_url": "https://github.com/Textualize/rich/issues/3123", "iss_label": "Needs triage", "title": "[BUG] Plain code blocks do not render correctly on a light background", "body": "- [x] I've checked [docs](https://rich.readthedocs.io/en/latest/introduction.html) and [closed issues](https://github.com/Textualize/rich/issues?q=is%3Aissue+is%3Aclosed) for possible solutions.\r\n- [x] I can't find my issue in the [FAQ](https://github.com/Textualize/rich/blob/master/FAQ.md).\r\n\r\n**Describe the bug**\r\n\r\nMarkdown code *blocks* are rendering illegibly and ignoring any styling in the theme given to the console. The code has a black background and the text, rather than being a cyan, is just unstyled text. So on bright backgrounds this is effectively black on black (one can barely make out letter shapes); on dark backgrounds you see the text but not as styled. Inline markdown code displays fine and changes styles as expected. But neither the default code_block theme or any new theme attached to the console seems to change the output from none on black.\r\n\r\nI've attached an image showing this. You can see the inline markdown code and the code block.\r\n\r\n<img width=\"1574\" alt=\"Screen Shot 2023-09-04 at 11 07 08\" src=\"https://github.com/Textualize/rich/assets/198177/bdb3acae-d8a2-400d-a0ac-1e377ae44b95\">\r\n\r\n\r\n**Platform**\r\n<details>\r\n<summary>Click to expand</summary>\r\n\r\nWhat platform (Win/Linux/Mac) are you running on? What terminal software are you using?\r\n\r\nRunning on Mac OS 12.6.1. The same thing happens on standard Terminal and on ITerm2.\r\nNote that the styles show up in `python -m rich.default_styles`, so it is not that the styles\r\nare unable to display.\r\n\r\nI may ask you to copy and paste the output of the following commands. It may save some time if you do it now.\r\n\r\nIf you're using Rich in a terminal:\r\n\r\n```\r\npython -m rich.diagnose\r\npip freeze | grep rich\r\n```\r\n\r\nThe output of the second one is 'rich==13.5.2'\r\n\r\nThe output of the first is\r\n\r\n\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 <class 'rich.console.Console'> \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\r\n\u2502 A high level console interface. 
\u2502\r\n\u2502 \u2502\r\n\u2502 \u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e \u2502\r\n\u2502 \u2502 <console width=100 ColorSystem.TRUECOLOR> \u2502 \u2502\r\n\u2502 \u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f \u2502\r\n\u2502 \u2502\r\n\u2502 color_system = 'truecolor' \u2502\r\n\u2502 encoding = 'utf-8' \u2502\r\n\u2502 file = <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'> \u2502\r\n\u2502 height = 32 \u2502\r\n\u2502 is_alt_screen = False \u2502\r\n\u2502 is_dumb_terminal = False \u2502\r\n\u2502 is_interactive = True \u2502\r\n\u2502 is_jupyter = False \u2502\r\n\u2502 is_terminal = True \u2502\r\n\u2502 legacy_windows = False \u2502\r\n\u2502 no_color = False \u2502\r\n\u2502 options = ConsoleOptions( \u2502\r\n\u2502 size=ConsoleDimensions(width=100, height=32), \u2502\r\n\u2502 legacy_windows=False, \u2502\r\n\u2502 min_width=1, \u2502\r\n\u2502 max_width=100, \u2502\r\n\u2502 is_terminal=True, \u2502\r\n\u2502 encoding='utf-8', \u2502\r\n\u2502 max_height=32, \u2502\r\n\u2502 justify=None, \u2502\r\n\u2502 overflow=None, \u2502\r\n\u2502 no_wrap=False, \u2502\r\n\u2502 highlight=None, \u2502\r\n\u2502 markup=None, \u2502\r\n\u2502 height=None \u2502\r\n\u2502 ) \u2502\r\n\u2502 quiet = False \u2502\r\n\u2502 record = False \u2502\r\n\u2502 safe_box = True \u2502\r\n\u2502 size = ConsoleDimensions(width=100, height=32) \u2502\r\n\u2502 soft_wrap = False \u2502\r\n\u2502 stderr = False \u2502\r\n\u2502 style = None \u2502\r\n\u2502 tab_size = 8 \u2502\r\n\u2502 width = 100 \u2502\r\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\r\n\u256d\u2500\u2500\u2500 <class 'rich._windows.WindowsConsoleFeatures'> \u2500\u2500\u2500\u2500\u256e\r\n\u2502 Windows features available. 
\u2502\r\n\u2502 \u2502\r\n\u2502 \u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e \u2502\r\n\u2502 \u2502 WindowsConsoleFeatures(vt=False, truecolor=False) \u2502 \u2502\r\n\u2502 \u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f \u2502\r\n\u2502 \u2502\r\n\u2502 truecolor = False \u2502\r\n\u2502 vt = False \u2502\r\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\r\n\u256d\u2500\u2500\u2500\u2500\u2500\u2500 Environment Variables \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\r\n\u2502 { \u2502\r\n\u2502 'TERM': 'xterm-256color', \u2502\r\n\u2502 'COLORTERM': 'truecolor', \u2502\r\n\u2502 'CLICOLOR': None, \u2502\r\n\u2502 'NO_COLOR': None, \r\n\r\n \u2502\r\n\u2502 'TERM_PROGRAM': 'iTerm.app', \u2502\r\n\u2502 'COLUMNS': None, \u2502\r\n\u2502 'LINES': None, \u2502\r\n\u2502 'JUPYTER_COLUMNS': None, \u2502\r\n\u2502 'JUPYTER_LINES': None, \u2502\r\n\u2502 'JPY_PARENT_PID': None, \u2502\r\n\u2502 'VSCODE_VERBOSE_LOGGING': None \u2502\r\n\u2502 } \u2502\r\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\r\nplatform=\"Darwin\"\r\n\r\n<img width=\"1574\" alt=\"Screen Shot 2023-09-04 at 11 07 08\" src=\"https://github.com/Textualize/rich/assets/198177/bdb3acae-d8a2-400d-a0ac-1e377ae44b95\">\r\n\r\n</details>\r\n", "pr_html_url": "https://github.com/Textualize/rich/pull/3132", "file_loc": {"base_commit": "a972ca05522577de2f98eb7c957deead9c87b38f", "files": [{"path": ".pre-commit-config.yaml", "status": "modified", "Loc": {"(None, None, None)": {"mod": [11, 43]}}}, {"path": "CHANGELOG.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [12], "mod": [8]}}}, {"path": "rich/console.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [280]}}}, {"path": "rich/markdown.py", "status": "modified", "Loc": {"('CodeBlock', 'create', 175)": {"mod": [178]}}}, {"path": "rich/syntax.py", "status": "modified", "Loc": {"('Syntax', None, 227)": {"add": [441]}, "('Syntax', 'highlight', 442)": {"mod": [470]}}}, {"path": "rich/text.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [40]}}}, {"path": "tests/test_markdown.py", "status": "modified", "Loc": {"(None, 'test_markdown_render', 99)": {"mod": [102]}}}, {"path": "tests/test_markdown_no_hyperlinks.py", "status": "modified", "Loc": {"(None, 'test_markdown_render', 92)": {"mod": [96]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["rich/text.py", 
"rich/syntax.py", "rich/console.py", "rich/markdown.py"], "doc": ["CHANGELOG.md"], "test": ["tests/test_markdown_no_hyperlinks.py", "tests/test_markdown.py"], "config": [".pre-commit-config.yaml"], "asset": []}}, {"organization": "ytdl-org", "repo_name": "youtube-dl", "base_commit": "e6a836d54ca1d3cd02f3ee45ef707a46f23e8291", "iss_has_pr": 1, "iss_html_url": "https://github.com/ytdl-org/youtube-dl/issues/31164", "iss_label": "broken-IE", "title": "[YouTube] When running without --verbose, \u2026 (No terminating paren } in {var b=a.split(\"\"),\u2026", "body": "## Checklist\r\n\r\n<!--\r\nCarefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:\r\n- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2021.12.17. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.\r\n- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.\r\n- Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in http://yt-dl.org/escape.\r\n- Search the bugtracker for similar issues: http://yt-dl.org/search-issues. DO NOT post duplicates.\r\n- Finally, put x into all relevant boxes (like this [x])\r\n-->\r\n\r\n- [x] I'm reporting a broken site support\r\n- [x] I've verified that I'm running youtube-dl version **2021.12.17**\r\n- [x] I've checked that all provided URLs are alive and playable in a browser\r\n- [x] I've checked that all URLs and arguments with special characters are properly quoted or escaped\r\n- [x] I've searched the bugtracker for similar issues including closed ones\r\n\r\n\r\n## Verbose log\r\n\r\nHalf no. Here's the version info from verbose mode:\r\n\r\n```text\r\n[debug] System config: []\r\n[debug] User config: []\r\n[debug] Custom config: []\r\n[debug] Command-line args: ['--no-call-home', '--abort-on-error', '--no-overwrites', '--keep-video', '--fixup=warn', '--restrict-filenames', '--output', '%(upload_dat>\r\n[debug] Encodings: locale UTF-8, fs utf-8, out utf-8, pref UTF-8\r\n[debug] youtube-dl version 2021.12.17\r\n[debug] Git HEAD: e6a836d54\r\n[debug] Python version 3.8.10 (CPython) - Linux-5.8.0-44-lowlatency-x86_64-with-glibc2.29\r\n[debug] exe versions: ffmpeg 4.2.4, ffprobe 4.2.4\r\n```\r\nWhen running without --verbose, [youtube] Unable to decode n-parameter: \u2026 (No terminating paren } in {var b=a.split(\"\"),\u2026\r\n\r\nHowever, fortunately, using `--verbose` works around the problem. 
:+1:\r\nWith `--verbose`, I get a few dozen lines of debug messages and then the download starts.\r\nWhen running without `--verbose`, I see\r\n\r\n```text\r\n[youtube] 37gJCuf6UMY: Downloading webpage\r\n[youtube] 37gJCuf6UMY: Downloading player 324f67b9\r\nWARNING: [youtube] Unable to decode n-parameter: download likely to be throttled (No terminating paren } in {var b=a.split(\"\"),\r\nc=[1070485609,7,function(d,e){d.push(e)},\r\n```\r\n\r\nand then the terminal is busy spewing lots more seemingly minified JS code, until I send SIGINT.\r\n\r\nWhen I redirect stdout and stderr to a file (`\u2026 |& tee -- nparam.log`), it writes the first two lines and then seems stuck for about a minute, after which I gave up and sent SIGINT.\r\n\r\n## Description\r\n\r\nOn Ubuntu focal, downloading from YouTube stopped, and instead, my terminal is flooded.\r\nIt worked fine a few hours ago.\r\nDownloading from Twitch works as expected, so probably not a network problem.\r\n\r\n\r\nThanks for still maintaining compatible with ancient pythons!", "pr_html_url": "https://github.com/ytdl-org/youtube-dl/pull/31170", "file_loc": {"base_commit": "e6a836d54ca1d3cd02f3ee45ef707a46f23e8291", "files": [{"path": "test/test_jsinterp.py", "status": "modified", "Loc": {"('TestJSInterpreter', 'test_basic', 15)": {"add": [21]}, "('TestJSInterpreter', None, 14)": {"add": [53, 64, 106]}, "('TestJSInterpreter', 'test_call', 107)": {"add": [113], "mod": [110]}, "('TestJSInterpreter', 'test_comma', 175)": {"add": [179]}, "('TestJSInterpreter', 'test_array_access', 54)": {"mod": [55]}, "('TestJSInterpreter', 'test_for_loop', 115)": {"mod": [118]}, "('TestJSInterpreter', 'test_for_loop_continue', 157)": {"mod": [159]}, "('TestJSInterpreter', 'test_for_loop_break', 163)": {"mod": [165]}, "('TestJSInterpreter', 'test_literal_list', 169)": {"mod": [171]}}}, {"path": "test/test_utils.py", "status": "modified", "Loc": {"('TestUtil', 'test_unified_timestamps', 349)": {"add": [372]}}}, {"path": "test/test_youtube_signature.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [92]}, "('TestPlayerInfo', 'test_youtube_extract_player_info', 97)": {"add": [98]}}}, {"path": "youtube_dl/compat.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [2997, 3033, 3053]}}}, {"path": "youtube_dl/jsinterp.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [2, 3, 7, 8, 9, 16, 39], "mod": [12, 15, 22, 28, 30, 31, 33, 34, 35, 37]}, "('JSInterpreter', None, 86)": {"add": [86, 131, 544], "mod": [102, 125, 126, 127, 160, 161, 162, 163]}, "('JSInterpreter', 'eval_method', 372)": {"add": [383], "mod": [373, 374, 375, 376, 377, 378, 379, 385, 386, 387, 388, 399, 403, 404, 450, 460, 461, 462, 463]}, "('JSInterpreter', 'interpret_expression', 160)": {"add": [468], "mod": [167, 169, 171, 175, 176, 177, 179, 188, 189, 194, 196, 197, 199, 200, 201, 204, 207, 208, 209, 215, 216, 217, 223, 224, 226, 229, 231, 236, 237, 238, 248, 257, 264, 268, 269, 270, 273, 274, 282, 284, 285, 286, 287, 288, 289, 290, 291, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 308, 309, 311, 316, 317, 318, 319, 320, 323, 327, 328, 329, 332, 334, 335, 337, 338, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 351, 353, 354, 355, 356, 357, 358, 360, 466, 470, 472, 473, 474, 475, 476, 477, 479, 482, 484, 485]}, "('JSInterpreter', 'extract_object', 487)": {"add": [496], "mod": [501, 502, 505]}, "('Nonlocal', None, 18)": {"mod": [18, 19]}, "('LocalNameSpace', None, 52)": {"mod": [52, 53, 54, 56, 57, 58, 59, 60, 74, 75, 76, 77, 79, 
80]}, "('LocalNameSpace', '__setitem__', 62)": {"mod": [63, 66, 67, 68, 69]}, "('LocalNameSpace', '__repr__', 82)": {"mod": [83]}, "('JSInterpreter', '__init__', 87)": {"mod": [88, 89, 90, 91, 92, 93]}, "('JSInterpreter', '_named_object', 95)": {"mod": [97]}, "('JSInterpreter', '_separate', 102)": {"mod": [106, 108, 109, 110, 111, 112, 113, 115]}, "('JSInterpreter', '_separate_at_paren', 126)": {"mod": [129]}, "('JSInterpreter', 'interpret_statement', 132)": {"mod": [134, 136, 137, 139, 140, 141, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 157, 158]}, "('JSInterpreter', 'assertion', 367)": {"mod": [370]}, "('JSInterpreter', 'extract_function_code', 510)": {"mod": [513, 514, 516, 520, 521]}, "('JSInterpreter', 'extract_function_from_code', 526)": {"mod": [537]}, "('JSInterpreter', 'build_function', 545)": {"mod": [547, 549, 550, 551, 552, 553, 554, 555, 556, 557]}}}, {"path": "youtube_dl/utils.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [1698, 1737, 1740, 1743, 1755, 1765]}, "(None, 'extract_timezone', 2967)": {"mod": [2969, 2970, 2972]}, "(None, 'unified_timestamp', 3036)": {"mod": [3040, 3066]}, "(None, 'int_or_none', 3672)": {"mod": [3676, 3677, 3678, 3682]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["youtube_dl/jsinterp.py", "youtube_dl/utils.py", "youtube_dl/compat.py"], "doc": [], "test": ["test/test_utils.py", "test/test_jsinterp.py", "test/test_youtube_signature.py"], "config": [], "asset": []}}, {"organization": "ytdl-org", "repo_name": "youtube-dl", "base_commit": "0f6422590e44e99e9b81cf2367666efe89fae3aa", "iss_has_pr": 1, "iss_html_url": "https://github.com/ytdl-org/youtube-dl/issues/30166", "iss_label": "", "title": "problem parsing site ceskatelevize.cz while trying to downloading video", "body": "## Checklist\r\n\r\n- [x ] I'm reporting a broken site support\r\n- [ x] I've verified that I'm running youtube-dl version **2021.06.06**\r\n- [ x] I've checked that all provided URLs are alive and playable in a browser\r\n- [ x] I've checked that all URLs and arguments with special characters are properly quoted or escaped\r\n- [ x] I've searched the bugtracker for similar issues including closed ones\r\n\r\n\r\n## Verbose log\r\n```\r\nc:\\YoutubeDL>youtube-dl.exe --no-check-certificate --no-mtime -F https://www.ceskatelevize.cz/ivysilani/19796-pumpari-od-zlate-podkovy/29238360846/ -v\r\n[debug] System config: []\r\n[debug] User config: []\r\n[debug] Custom config: []\r\n[debug] Command-line args: ['--no-check-certificate', '--no-mtime', '-F', 'https://www.ceskatelevize.cz/ivysilani/19796-pumpari-od-zlate-podkovy/29238360846/', '-v']\r\n[debug] Encodings: locale cp1252, fs mbcs, out cp437, pref cp1252\r\n[debug] youtube-dl version 2021.06.06\r\n[debug] Python version 3.4.4 (CPython) - Windows-10-10.0.19041\r\n[debug] exe versions: ffmpeg N-90920-ge07b1913fc, ffprobe N-90920-ge07b1913fc\r\n[debug] Proxy map: {}\r\n[CeskaTelevize] 29238360846: Downloading webpage\r\n[CeskaTelevize] 29238360846: Downloading JSON metadata\r\nTraceback (most recent call last):\r\n File \"__main__.py\", line 19, in <module>\r\n File \"C:\\Users\\dst\\AppData\\Roaming\\Build archive\\youtube-dl\\ytdl-org\\tmpkqxnwl31\\build\\youtube_dl\\__init__.py\", line 475, in main\r\n File \"C:\\Users\\dst\\AppData\\Roaming\\Build 
archive\\youtube-dl\\ytdl-org\\tmpkqxnwl31\\build\\youtube_dl\\__init__.py\", line 465, in _real_main\r\n File \"C:\\Users\\dst\\AppData\\Roaming\\Build archive\\youtube-dl\\ytdl-org\\tmpkqxnwl31\\build\\youtube_dl\\YoutubeDL.py\", line 2069, in download\r\n File \"C:\\Users\\dst\\AppData\\Roaming\\Build archive\\youtube-dl\\ytdl-org\\tmpkqxnwl31\\build\\youtube_dl\\YoutubeDL.py\", line 808, in extract_info\r\n File \"C:\\Users\\dst\\AppData\\Roaming\\Build archive\\youtube-dl\\ytdl-org\\tmpkqxnwl31\\build\\youtube_dl\\YoutubeDL.py\", line 815, in wrapper\r\n File \"C:\\Users\\dst\\AppData\\Roaming\\Build archive\\youtube-dl\\ytdl-org\\tmpkqxnwl31\\build\\youtube_dl\\YoutubeDL.py\", line 836, in __extract_info\r\n File \"C:\\Users\\dst\\AppData\\Roaming\\Build archive\\youtube-dl\\ytdl-org\\tmpkqxnwl31\\build\\youtube_dl\\extractor\\common.py\", line 534, in extract\r\n File \"C:\\Users\\dst\\AppData\\Roaming\\Build archive\\youtube-dl\\ytdl-org\\tmpkqxnwl31\\build\\youtube_dl\\extractor\\ceskatelevize.py\", line 130, in _real_extract\r\n File \"C:\\Users\\dst\\AppData\\Roaming\\Build archive\\youtube-dl\\ytdl-org\\tmpkqxnwl31\\build\\youtube_dl\\utils.py\", line 2158, in sanitized_Request\r\n File \"C:\\Python\\Python34\\lib\\urllib\\request.py\", line 267, in __init__\r\n File \"C:\\Python\\Python34\\lib\\urllib\\request.py\", line 293, in full_url\r\n File \"C:\\Python\\Python34\\lib\\urllib\\request.py\", line 322, in _parse\r\nValueError: unknown url type: 'Error'\r\n```\r\n\r\n\r\n## Description\r\n\r\nDownload form ceskatelevize.cz is not working - parsing error\r\n", "pr_html_url": "https://github.com/ytdl-org/youtube-dl/pull/30713", "file_loc": {"base_commit": "0f6422590e44e99e9b81cf2367666efe89fae3aa", "files": [{"path": "youtube_dl/extractor/ceskatelevize.py", "status": "modified", "Loc": {"('CeskaTelevizeIE', None, 22)": {"add": [54, 60, 70], "mod": [23, 25, 27, 29, 30, 32, 39, 41, 43, 44, 45, 46, 53, 65, 67]}, "('CeskaTelevizeIE', '_real_extract', 71)": {"add": [202], "mod": [74, 78, 103, 111, 133, 134, 170, 184, 185]}, "(None, None, None)": {"mod": [15, 16]}, "('CeskaTelevizePoradyIE', None, 241)": {"mod": [241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 277, 278, 280, 282, 283, 284, 285, 286, 287, 289]}}}, {"path": "youtube_dl/extractor/extractors.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [211, 212, 213, 214]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["youtube_dl/extractor/extractors.py", "youtube_dl/extractor/ceskatelevize.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ytdl-org", "repo_name": "youtube-dl", "base_commit": "c6c0e23a32ffb9f2e5657aceaede7da1fb44e490", "iss_has_pr": 1, "iss_html_url": "https://github.com/ytdl-org/youtube-dl/issues/474", "iss_label": "", "title": "Is there any way to determine the length of a video without downloading it?", "body": "I was looking at the output of --write-info-json but could not determine the parameter (if there is any) that says the length of a video.\n", "pr_html_url": "https://github.com/ytdl-org/youtube-dl/pull/486", "file_loc": {"base_commit": "c6c0e23a32ffb9f2e5657aceaede7da1fb44e490", "files": [{"path": "youtube_dl/InfoExtractors.py", "status": "modified", "Loc": 
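On the video-length question above: the info JSON written by `--write-info-json` does carry this, in the `duration` field (seconds), and `youtube-dl --get-duration <url>` prints it without downloading. A short sketch via the Python API, reusing the video ID quoted earlier in this section:

```python
# Sketch: reading a video's length without downloading it. "duration"
# (in seconds) is the same field that --write-info-json emits.
import youtube_dl

with youtube_dl.YoutubeDL({"skip_download": True, "quiet": True}) as ydl:
    info = ydl.extract_info("https://www.youtube.com/watch?v=37gJCuf6UMY", download=False)
print(info.get("duration"))  # seconds, when the extractor provides it
```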
{"('YoutubeIE', '_real_extract', 289)": {"add": [416], "mod": [483]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "3", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["youtube_dl/InfoExtractors.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "localstack", "repo_name": "localstack", "base_commit": "0bdfa27ab6cce6f82243470d1e48d283e01aa84c", "iss_has_pr": 1, "iss_html_url": "https://github.com/localstack/localstack/issues/7109", "iss_label": "type: bug\naws:sns\nstatus: confirmed", "title": "bug: InvalidParameterException when sending to SNS topic since version 1.2", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Current Behavior\n\nI'm using localstack in my current build. Except since version 1.2 I get the following exception (Java 17 & Spring Boot 2.7.1):\r\n\r\n```\r\ncom.amazonaws.services.sns.model.InvalidParameterValueException: The message attribute 'timestamp' has an invalid message attribute type, the set of supported type prefixes is Binary, Number, and String. (Service: AmazonSNS; Status Code: 400; Error Code: ParameterValueInvalid; Request ID: E8OZ22XIRX11DTY2PWOGI5FB55U5J0S11VC8YJK6ES9UKCVL0DY1; Proxy: null)\r\n\tat com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1862) ~[aws-java-sdk-core-1.12.132.jar:na]\r\n\tat com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleServiceErrorResponse(AmazonHttpClient.java:1415) ~[aws-java-sdk-core-1.12.132.jar:na]\r\n\tat com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1384) ~[aws-java-sdk-core-1.12.132.jar:na]\r\n\tat com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1154) ~[aws-java-sdk-core-1.12.132.jar:na]\r\n\tat com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:811) ~[aws-java-sdk-core-1.12.132.jar:na]\r\n\tat com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:779) ~[aws-java-sdk-core-1.12.132.jar:na]\r\n\tat com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:753) ~[aws-java-sdk-core-1.12.132.jar:na]\r\n\tat com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:713) ~[aws-java-sdk-core-1.12.132.jar:na]\r\n\tat com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:695) ~[aws-java-sdk-core-1.12.132.jar:na]\r\n\tat com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:559) ~[aws-java-sdk-core-1.12.132.jar:na]\r\n\tat com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:539) ~[aws-java-sdk-core-1.12.132.jar:na]\r\n\tat com.amazonaws.services.sns.AmazonSNSClient.doInvoke(AmazonSNSClient.java:3545) ~[aws-java-sdk-sns-1.12.132.jar:na]\r\n\tat com.amazonaws.services.sns.AmazonSNSClient.invoke(AmazonSNSClient.java:3512) ~[aws-java-sdk-sns-1.12.132.jar:na]\r\n\tat com.amazonaws.services.sns.AmazonSNSClient.invoke(AmazonSNSClient.java:3501) ~[aws-java-sdk-sns-1.12.132.jar:na]\r\n\tat com.amazonaws.services.sns.AmazonSNSClient.executePublish(AmazonSNSClient.java:2475) ~[aws-java-sdk-sns-1.12.132.jar:na]\r\n\tat com.amazonaws.services.sns.AmazonSNSClient.publish(AmazonSNSClient.java:2444) ~[aws-java-sdk-sns-1.12.132.jar:na]\r\n\tat io.awspring.cloud.messaging.core.TopicMessageChannel.sendInternal(TopicMessageChannel.java:91) 
~[spring-cloud-aws-messaging-2.4.0.jar:2.4.0]\r\n\tat org.springframework.messaging.support.AbstractMessageChannel.send(AbstractMessageChannel.java:139) ~[spring-messaging-5.3.21.jar:5.3.21]\r\n\tat org.springframework.messaging.support.AbstractMessageChannel.send(AbstractMessageChannel.java:125) ~[spring-messaging-5.3.21.jar:5.3.21]\r\n\tat io.awspring.cloud.messaging.core.support.AbstractMessageChannelMessagingSendingTemplate.doSend(AbstractMessageChannelMessagingSendingTemplate.java:59) ~[spring-cloud-aws-messaging-2.4.0.jar:2.4.0]\r\n\tat io.awspring.cloud.messaging.core.support.AbstractMessageChannelMessagingSendingTemplate.doSend(AbstractMessageChannelMessagingSendingTemplate.java:44) ~[spring-cloud-aws-messaging-2.4.0.jar:2.4.0]\r\n\tat org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:109) ~[spring-messaging-5.3.21.jar:5.3.21]\r\n\tat org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:99) ~[spring-messaging-5.3.21.jar:5.3.21]\r\n\tat com.polovyi.ivan.tutorials.service.PurchaseTransactionService.processRequest(PurchaseTransactionService.java:36) ~[classes/:na]\r\n\tat com.polovyi.ivan.tutorials.controller.PurchaseTransactionController.acceptPurchaseTransaction(PurchaseTransactionController.java:17) ~[classes/:na]\r\n\tat java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:na]\r\n\tat java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) ~[na:na]\r\n\tat java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:na]\r\n\tat java.base/java.lang.reflect.Method.invoke(Method.java:568) ~[na:na]\r\n\tat org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:205) ~[spring-web-5.3.21.jar:5.3.21]\r\n\tat org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:150) ~[spring-web-5.3.21.jar:5.3.21]\r\n\tat org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:117) ~[spring-webmvc-5.3.21.jar:5.3.21]\r\n\tat org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:895) ~[spring-webmvc-5.3.21.jar:5.3.21]\r\n\tat org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:808) ~[spring-webmvc-5.3.21.jar:5.3.21]\r\n\tat org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) ~[spring-webmvc-5.3.21.jar:5.3.21]\r\n\tat org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1067) ~[spring-webmvc-5.3.21.jar:5.3.21]\r\n\tat org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:963) ~[spring-webmvc-5.3.21.jar:5.3.21]\r\n\tat org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1006) ~[spring-webmvc-5.3.21.jar:5.3.21]\r\n\tat org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:909) ~[spring-webmvc-5.3.21.jar:5.3.21]\r\n\tat javax.servlet.http.HttpServlet.service(HttpServlet.java:681) ~[tomcat-embed-core-9.0.64.jar:4.0.FR]\r\n\tat org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:883) ~[spring-webmvc-5.3.21.jar:5.3.21]\r\n\tat 
javax.servlet.http.HttpServlet.service(HttpServlet.java:764) ~[tomcat-embed-core-9.0.64.jar:4.0.FR]\r\n\tat org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:227) ~[tomcat-embed-core-9.0.64.jar:9.0.64]\r\n\tat org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162) ~[tomcat-embed-core-9.0.64.jar:9.0.64]\r\n\tat org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53) ~[tomcat-embed-websocket-9.0.64.jar:9.0.64]\r\n\tat org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189) ~[tomcat-embed-core-9.0.64.jar:9.0.64]\r\n\tat org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162) ~[tomcat-embed-core-9.0.64.jar:9.0.64]\r\n\tat org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100) ~[spring-web-5.3.21.jar:5.3.21]\r\n\tat org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117) ~[spring-web-5.3.21.jar:5.3.21]\r\n\tat org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189) ~[tomcat-embed-core-9.0.64.jar:9.0.64]\r\n\tat org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162) ~[tomcat-embed-core-9.0.64.jar:9.0.64]\r\n\tat org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93) ~[spring-web-5.3.21.jar:5.3.21]\r\n\tat org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117) ~[spring-web-5.3.21.jar:5.3.21]\r\n\tat org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189) ~[tomcat-embed-core-9.0.64.jar:9.0.64]\r\n\tat org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162) ~[tomcat-embed-core-9.0.64.jar:9.0.64]\r\n\tat org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201) ~[spring-web-5.3.21.jar:5.3.21]\r\n\tat org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117) ~[spring-web-5.3.21.jar:5.3.21]\r\n\tat org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189) ~[tomcat-embed-core-9.0.64.jar:9.0.64]\r\n\tat org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162) ~[tomcat-embed-core-9.0.64.jar:9.0.64]\r\n\tat org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:197) ~[tomcat-embed-core-9.0.64.jar:9.0.64]\r\n\tat org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:97) ~[tomcat-embed-core-9.0.64.jar:9.0.64]\r\n\tat org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:541) ~[tomcat-embed-core-9.0.64.jar:9.0.64]\r\n\tat org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:135) ~[tomcat-embed-core-9.0.64.jar:9.0.64]\r\n\tat org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92) ~[tomcat-embed-core-9.0.64.jar:9.0.64]\r\n\tat org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:78) ~[tomcat-embed-core-9.0.64.jar:9.0.64]\r\n\tat org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:360) ~[tomcat-embed-core-9.0.64.jar:9.0.64]\r\n\tat org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:399) ~[tomcat-embed-core-9.0.64.jar:9.0.64]\r\n\tat org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65) 
~[tomcat-embed-core-9.0.64.jar:9.0.64]\r\n\tat org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:890) ~[tomcat-embed-core-9.0.64.jar:9.0.64]\r\n\tat org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1787) ~[tomcat-embed-core-9.0.64.jar:9.0.64]\r\n\tat org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) ~[tomcat-embed-core-9.0.64.jar:9.0.64]\r\n\tat org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191) ~[tomcat-embed-core-9.0.64.jar:9.0.64]\r\n\tat org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659) ~[tomcat-embed-core-9.0.64.jar:9.0.64]\r\n\tat org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) ~[tomcat-embed-core-9.0.64.jar:9.0.64]\r\n\tat java.base/java.lang.Thread.run(Thread.java:833) ~[na:na]\r\n```\r\n\r\nWhen using localstack 1.1 (and earlier versions) and leaving everything else the same I don't get the exception.\r\nThe message header 'timestamp' is set by Spring messaging under the hood and is immutable, so there's no way to change that without using reflection or something else ugly. What I could do is use the aws-sdk directly.\r\nHowever, I just wanted to mention the change in behaviour of localstack v1.2\n\n### Expected Behavior\n\nI'd expect to get a 202/Accepted when the application is sending a message to the SNS topic.\r\n\r\n\n\n### How are you starting LocalStack?\n\nWith a docker-compose file\n\n### Steps To Reproduce\n\nYou can use the code from this project: https://github.com/polovyivan/spring-cloud-sns-topic-publisher\r\nAnd only update the localstack version to 1.2\r\n\r\n```\r\ncd src/main/resources/docker-compose\r\ndocker-compose up\r\nmvn clean spring-boot:run\r\n```\r\n\r\nThen send an empty http POST to http://localhost:8080/spring-cloud-sns-topic-publisher/purchase-transactions\n\n### Environment\n\n```markdown\n- OS: macOS Montery 12.6\r\n- LocalStack: 1.2\r\n- Java: 17\r\n- Spring boot: 2.7.1\r\n- Maven: 3.8.1\r\n- Docker: 20.10.17, build 100c701\n```\n\n\n### Anything else?\n\n_No response_", "pr_html_url": "https://github.com/localstack/localstack/pull/7181", "file_loc": {"base_commit": "0bdfa27ab6cce6f82243470d1e48d283e01aa84c", "files": [{"path": "localstack/services/sns/provider.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [135]}, "(None, 'validate_message_attributes', 1362)": {"mod": [1379]}, "(None, 'validate_message_attribute_name', 1395)": {"mod": [1402]}}}, {"path": "tests/integration/test_sns.py", "status": "modified", "Loc": {"('TestSNSProvider', 'test_publish_to_platform_endpoint_is_dispatched', 2539)": {"add": [2591]}}}, {"path": "tests/integration/test_sns.snapshot.json", "status": "modified", "Loc": {"(None, None, None)": {"add": [2170]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["localstack/services/sns/provider.py", "tests/integration/test_sns.snapshot.json"], "doc": [], "test": ["tests/integration/test_sns.py"], "config": [], "asset": []}}, {"organization": "localstack", "repo_name": "localstack", "base_commit": "91859102289e257e360682887e871c6a4bfbd75d", "iss_has_pr": 1, "iss_html_url": "https://github.com/localstack/localstack/issues/457", "iss_label": "status: triage needed", "title": "CloudWatch listener returns Internal Server Error", "body": 
"Attempting to access the CloudWatch service at port 4582 returns `HTTP/1.0 500 INTERNAL SERVER ERROR`\r\n\r\n**Steps to reproduce**\r\n\r\n```\r\n$ localstack start\r\n$ curl http://127.0.0.1:4582\r\n```\r\n\r\nthis returns\r\n\r\n```\r\n<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 3.2 Final//EN\">\r\n<title>500 Internal Server Error\r\n

      Internal Server Error

      \r\n

      The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.

      \r\n```\r\n", "pr_html_url": "https://github.com/localstack/localstack/pull/527", "file_loc": {"base_commit": "91859102289e257e360682887e871c6a4bfbd75d", "files": [{"path": "README.md", "status": "modified", "Loc": {"(None, None, None)": {"mod": [261]}}}, {"path": "localstack/constants.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [33]}}}, {"path": "localstack/ext/java/pom.xml", "status": "modified", "Loc": {"(None, None, None)": {"mod": [8]}}}, {"path": "localstack/services/install.py", "status": "modified", "Loc": {"(None, 'install_lambda_java_libs', 102)": {"add": [104]}}}, {"path": "requirements.txt", "status": "modified", "Loc": {"(None, None, None)": {"mod": [20]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["localstack/constants.py", "localstack/services/install.py"], "doc": ["README.md"], "test": [], "config": ["requirements.txt"], "asset": ["localstack/ext/java/pom.xml"]}}, {"organization": "localstack", "repo_name": "localstack", "base_commit": "60d2c3dc68d9fae0f1e0acb7d0c705df408bd8c5", "iss_has_pr": 1, "iss_html_url": "https://github.com/localstack/localstack/issues/5030", "iss_label": "type: bug\npriority: high\naws:cloudformation\nstatus: resolved/fixed\naws:stepfunctions", "title": "bug: State machines in non-default regions can't be deleted and fail to create proper ARN", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Current Behavior\n\n- Sometimes wrong ARNs are created for the child state machine (e.g. `arn:aws:states:us-east-1:000000000000:stateMachine:us-east-1_us-east-1_mystatemachine` ... the double region shouldn't be there).\r\n\r\n\r\n- Deleting a CloudFormation stack with a nested statemachine will fail to properly delete the child state machine when deleting the stack.\n\n### Expected Behavior\n\nState machines should work the same in all regions due to the transparent ARN patching. 
\r\n\r\nParity with AWS should be established for nested state machines.\n\n### How are you starting LocalStack?\n\nCustom (please describe below)\n\n### Steps To Reproduce\n\nwill be provided via integration test\n\n### Environment\n\n```markdown\n- OS: Ubuntu 20.04 LTS\r\n- LocalStack: latest\n```\n\n\n### Anything else?\n\nMight be regressions from the move of stepfunctions functionality to Community.", "pr_html_url": "https://github.com/localstack/localstack/pull/5183", "file_loc": {"base_commit": "60d2c3dc68d9fae0f1e0acb7d0c705df408bd8c5", "files": [{"path": "localstack/services/install.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [36, 77, 344], "mod": [82]}, "(None, 'install_stepfunctions_local', 315)": {"mod": [340, 341]}}}, {"path": "localstack/services/stepfunctions/stepfunctions_listener.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [3, 5, 9, 10, 16]}, "('ProxyListenerStepFunctions', 'forward_request', 20)": {"mod": [23, 24, 25, 26, 27, 29, 30, 31, 32]}, "('ProxyListenerStepFunctions', 'return_response', 34)": {"mod": [51, 52, 53, 54, 56, 57, 58, 60, 61, 62, 63, 64, 65, 66, 67, 69, 70]}}}, {"path": "localstack/services/stepfunctions/stepfunctions_starter.py", "status": "modified", "Loc": {"(None, 'get_command', 20)": {"mod": [22, 23, 28, 29]}}}, {"path": "tests/integration/test_stepfunctions.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [11, 479], "mod": [481, 482, 483, 484, 485, 486]}, "(None, 'test_multiregion', 482)": {"add": [487, 494], "mod": [489, 492, 496, 497, 498, 499, 501, 502, 503, 504, 506, 507, 508, 509, 511, 512]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["localstack/services/stepfunctions/stepfunctions_listener.py", "localstack/services/stepfunctions/stepfunctions_starter.py", "localstack/services/install.py"], "doc": [], "test": ["tests/integration/test_stepfunctions.py"], "config": [], "asset": []}}, {"organization": "localstack", "repo_name": "localstack", "base_commit": "b302f2939d4f39432ccd565ab44d040dc1be4eea", "iss_has_pr": 1, "iss_html_url": "https://github.com/localstack/localstack/issues/7494", "iss_label": "type: bug\naws:kms\nstatus: confirmed", "title": "bug: KMS Alias Creation Fails to Return Error", "body": "### Is there an existing issue for this?\r\n\r\n- [X] I have searched the existing issues\r\n\r\n### Current Behavior\r\n\r\nThis looks similar to https://github.com/localstack/localstack/issues/6471\r\n\r\nI am trying to sign something using KMS for some tests. It seems like doing so using an alias does not work. 
For example I create a key and an alias like so:\r\n\r\n```\r\n# Add a key used for signing urls\r\naws-cli --endpoint-url=http://localhost:4566 kms create-key \\\r\n --key-usage SIGN_VERIFY \\\r\n --key-spec RSA_4096 \r\n\r\n\r\n# Add well known alias for key\r\naws-cli --endpoint-url=http://localhost:4566 kms create-alias \\\r\n --alias-name \"some-nice-alias-name\" \\\r\n --target-key-id \r\n```\r\n\r\nI can see that this looks to have worked by verifying the key and alias on the CLI:\r\n\r\n```\r\naws-cli --endpoint-url=http://localhost:4566 kms list-keys\r\n{\r\n \"Keys\": [\r\n {\r\n \"KeyId\": \"f7d2d869-f6b8-4977-96ea-5bd70cb0d5f2\",\r\n \"KeyArn\": \"arn:aws:kms:us-east-1:000000000000:key/\"\r\n }\r\n ]\r\n}\r\n```\r\n\r\nand\r\n\r\n```\r\n aws-cli --endpoint-url=http://localhost:4566 kms list-aliases\r\n{\r\n \"Aliases\": [\r\n {\r\n \"AliasName\": \"census-webform-url-signing-key\",\r\n \"AliasArn\": \"arn:aws:kms:us-east-1:000000000000:alias/some-nice-alias-name\",\r\n \"TargetKeyId\": \"\",\r\n \"CreationDate\": \"2023-01-13T16:58:52.279782-05:00\"\r\n }\r\n ]\r\n}\r\n```\r\n\r\nHowever, attempting to sign something does not work:\r\n\r\n```\r\n# Make sure we can sign\r\naws-cli --endpoint-url=http://localhost:4566 kms sign \\\r\n --cli-binary-format raw-in-base64-out \\\r\n --key-id \"alias/some-nice-alias-name\" \\\r\n --message 'wwwtestcom' \\\r\n --message-type RAW \\\r\n --signing-algorithm \"RSASSA_PSS_SHA_512\"\r\n \r\n```\r\n \r\nresults in\r\n\r\nAn error occurred (NotFoundException) when calling the Sign operation: Unable to find KMS alias with name alias/some-nice-alias-name\r\n\r\n\r\n### Expected Behavior\r\n\r\nI would expect signature output from the last command, not the resulting error.\r\n\r\n### How are you starting LocalStack?\r\n\r\nWith a docker-compose file\r\n\r\n### Steps To Reproduce\r\n\r\n#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)\r\n\r\n docker run localstack/localstack\r\n\r\n#### Client commands (e.g., AWS SDK code snippet, or sequence of \"awslocal\" commands)\r\n\r\n```\r\naws-cli --endpoint-url=http://localhost:4566 kms create-key \\\r\n --key-usage SIGN_VERIFY \\\r\n --key-spec RSA_4096 \r\n\r\n\r\naws-cli --endpoint-url=http://localhost:4566 kms create-alias \\\r\n --alias-name \"some-nice-alias-name\" \\\r\n --target-key-id \r\n\r\naws-cli --endpoint-url=http://localhost:4566 kms list-keys\r\n{\r\n \"Keys\": [\r\n {\r\n \"KeyId\": \"f7d2d869-f6b8-4977-96ea-5bd70cb0d5f2\",\r\n \"KeyArn\": \"arn:aws:kms:us-east-1:000000000000:key/\"\r\n }\r\n ]\r\n}\r\n\r\n aws-cli --endpoint-url=http://localhost:4566 kms list-aliases\r\n{\r\n \"Aliases\": [\r\n {\r\n \"AliasName\": \"census-webform-url-signing-key\",\r\n \"AliasArn\": \"arn:aws:kms:us-east-1:000000000000:alias/some-nice-alias-name\",\r\n \"TargetKeyId\": \"\",\r\n \"CreationDate\": \"2023-01-13T16:58:52.279782-05:00\"\r\n }\r\n ]\r\n}\r\n\r\naws-cli --endpoint-url=http://localhost:4566 kms sign \\\r\n --cli-binary-format raw-in-base64-out \\\r\n --key-id \"alias/some-nice-alias-name\" \\\r\n --message 'wwwtestcom' \\\r\n --message-type RAW \\\r\n --signing-algorithm \"RSASSA_PSS_SHA_512\"\r\n```\r\n\r\n\r\n### Environment\r\n\r\n```markdown\r\n- OS: macOS 12.6.2\r\n- LocalStack: latest docker image\r\n```\r\n\r\n\r\n### Anything else?\r\n\r\nI did test the same calls using the actual generated key ID and this works. I also attempted this through a boto3 client in Python and got the same error. 
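The same reproduction in Python, roughly what the reporter's boto3 attempt would look like. The endpoint and dummy credentials are assumptions for a default LocalStack setup; per the report, the final `sign` call raises `NotFoundException` when given the alias but succeeds with the raw key id:

```python
import boto3

# Assumed LocalStack defaults: edge endpoint on :4566, dummy credentials.
kms = boto3.client(
    "kms", endpoint_url="http://localhost:4566", region_name="us-east-1",
    aws_access_key_id="test", aws_secret_access_key="test",
)

key_id = kms.create_key(KeyUsage="SIGN_VERIFY", KeySpec="RSA_4096")["KeyMetadata"]["KeyId"]
kms.create_alias(AliasName="alias/some-nice-alias-name", TargetKeyId=key_id)

# Per the report: KeyId=key_id works here, while the alias form does not.
resp = kms.sign(
    KeyId="alias/some-nice-alias-name",
    Message=b"wwwtestcom",
    MessageType="RAW",
    SigningAlgorithm="RSASSA_PSS_SHA_512",
)
print(len(resp["Signature"]))
```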
", "pr_html_url": "https://github.com/localstack/localstack/pull/7826", "file_loc": {"base_commit": "b302f2939d4f39432ccd565ab44d040dc1be4eea", "files": [{"path": "localstack/services/kms/provider.py", "status": "modified", "Loc": {"('KmsProvider', 'create_alias', 712)": {"add": [714]}}}, {"path": "tests/integration/test_kms.py", "status": "modified", "Loc": {"('TestKMS', None, 56)": {"add": [60]}}}, {"path": "tests/integration/test_kms.snapshot.json", "status": "modified", "Loc": {"(None, None, None)": {"add": [391]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["tests/integration/test_kms.snapshot.json", "localstack/services/kms/provider.py"], "doc": [], "test": ["tests/integration/test_kms.py"], "config": [], "asset": []}}, {"organization": "localstack", "repo_name": "localstack", "base_commit": "fa1a64c954b89b88ac30e77fd12930efc04c04c5", "iss_has_pr": 1, "iss_html_url": "https://github.com/localstack/localstack/issues/748", "iss_label": "area: documentation\nstatus: triage needed", "title": "Contributors not loading up. Broken links for backers and Contributors. ", "body": "\r\n\r\nLinks like this https://opencollective.com/localstack/sponsor/X/website won't exist.\r\nJust symlinks to https://opencollective.com/localstack#contributors.", "pr_html_url": "https://github.com/localstack/localstack/pull/856", "file_loc": {"base_commit": "fa1a64c954b89b88ac30e77fd12930efc04c04c5", "files": [{"path": "README.md", "status": "modified", "Loc": {"(None, None, None)": {"mod": [454]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []}}, {"organization": "localstack", "repo_name": "localstack", "base_commit": "5b6eee89f41af000b2da5ff43e3292529ff4c56f", "iss_has_pr": 1, "iss_html_url": "https://github.com/localstack/localstack/issues/1808", "iss_label": "type: question\narea: configuration\ngood first issue", "title": "SNS: unable to ConfirmSubscription: Topic not found", "body": "Hi all,\r\n\r\nThanks for your effort on localstack! 
I'm trying to locally test SNS (HTTP) w/ cloudwatch triggers, but am unable to get past confirming the subscription.\r\n\r\nMy application receives the following POST body when creating a subscription:\r\n```\r\n{\"MessageId\": \"5cb062ad-0d4e-41e6-9a80-7053926b20b4\", \"Type\": \"SubscriptionConfirmation\", \"Timestamp\": \"2019-11-27T04:29:21.166530Z\", \"Message\": \"You have chosen to subscribe to the topic arn:aws:sns:us-e\r\nast-1:000000000000:lambda-xyz-errors.\\nTo confirm the subscription, visit the SubscribeURL included in this message.\", \"TopicArn\": \"arn:aws:sns:us-east-1:000000000000:lambda-xyz-errors\", \"Token\": \"3f97b1$\r\n2\", \"SubscribeURL\": \"http://b40035e82fc6:4575/?Action=ConfirmSubscription&TopicArn=arn:aws:sns:us-east-1:000000000000:lambda-xyz-errors&Token=3f97b192\"}\r\n```\r\n\r\nIf I curl the SubscribeURL, I get a `topic does not exist` error:\r\n\r\n```\r\n[I] \u279c curl -XGET -H 'Authorization: AWS4-HMAC-SHA256 Credential=AKIAIOSFODNN7EXAMPLE/20130524/us-east-1/sns/aws4_request,SignedHeaders=host;range;x-amz-date,Signature=fe5f80f77d5fa3beca038a248ff027d044534\r\n2fe2855ddc963176630326f1024' http://localhost:4575/\\?Action\\=ConfirmSubscription\\&TopicArn\\=arn:aws:sns:us-east-1:000000000000:lambda-xyz-errors\\&Token\\=75f32aec\r\n\r\n \r\n Sender\r\n NotFound\r\n Topic does not exist\r\n \r\n 9dd01905-5012-5f99-8663-4b3ecd0dfaef\r\n%\r\n```\r\n\r\nIf I run list-topics against the container, I can see it exists:\r\n\r\n```bash\r\ndocker-compose exec localstack awslocal sns list-topics\r\n{\r\n \"Topics\": [\r\n {\r\n \"TopicArn\": \"arn:aws:sns:us-east-1:000000000000:lambda-xyz-errors\"\r\n }\r\n ]\r\n}\r\n```\r\n\r\n\r\nThe topic and subscription were created with:\r\n```bash\r\nawslocal sns create-topic --name lambda-xyz-errors\r\nawslocal sns subscribe --topic-arn arn:aws:sns:us-east-1:000000000000:lambda-xyz-errors --protocol http --notification-endpoint \"http://localhost:3000/\"\r\n\r\n```", "pr_html_url": "https://github.com/localstack/localstack/pull/2043", "file_loc": {"base_commit": "5b6eee89f41af000b2da5ff43e3292529ff4c56f", "files": [{"path": "localstack/services/sns/sns_listener.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [21, 251]}, "('ProxyListenerSNS', 'forward_request', 31)": {"add": [73]}, "(None, 'do_subscribe', 252)": {"add": [269]}}}, {"path": "tests/integration/test_sns.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [12]}, "('SNSTest', None, 22)": {"add": [206]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["localstack/services/sns/sns_listener.py"], "doc": [], "test": ["tests/integration/test_sns.py"], "config": [], "asset": []}}, {"organization": "localstack", "repo_name": "localstack", "base_commit": "e65705a6ebf93ed7fbb05b690ebeb2c9c4aa88ae", "iss_has_pr": 1, "iss_html_url": "https://github.com/localstack/localstack/issues/1225", "iss_label": "", "title": "Presigned S3 url doesnt notify sqs ", "body": "\r\nI have configured s3 bucket with event configuration to sqs for every object creation. When I try out aws cli command I get the notification correctly. \r\n\r\nWhen I try using presigned url with curl/postman command, I dont get the sqs notification. 
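Returning to the ConfirmSubscription report above: the SubscribeURL carries the container-internal hostname (`b40035e82fc6`), which is not resolvable from the host, so any confirmation attempt first has to be pointed at the published port. A hedged sketch of that rewrite (whether the rewritten call then succeeds is exactly what the issue is about, since curling `localhost` returned `Topic does not exist`):

```python
from urllib.parse import urlsplit, urlunsplit
from urllib.request import urlopen

# SubscribeURL as delivered in the notification, container hostname included.
subscribe_url = (
    "http://b40035e82fc6:4575/?Action=ConfirmSubscription"
    "&TopicArn=arn:aws:sns:us-east-1:000000000000:lambda-xyz-errors"
    "&Token=3f97b192"
)

# Swap in the host-reachable endpoint before confirming.
parts = urlsplit(subscribe_url)
reachable = urlunsplit((parts.scheme, "localhost:4575", parts.path, parts.query, parts.fragment))
print(urlopen(reachable).read())
```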
**Is this a known issue and are there any work arounds?**", "pr_html_url": "https://github.com/localstack/localstack/pull/1639", "file_loc": {"base_commit": "e65705a6ebf93ed7fbb05b690ebeb2c9c4aa88ae", "files": [{"path": "localstack/services/generic_proxy.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [32]}}}, {"path": "localstack/services/s3/s3_listener.py", "status": "modified", "Loc": {"('ProxyListenerS3', 'is_query_allowable', 705)": {"mod": [709, 710]}}}, {"path": "tests/integration/test_s3.py", "status": "modified", "Loc": {"('S3ListenerTest', None, 30)": {"add": [496]}, "('S3ListenerTest', '_perform_multipart_upload', 503)": {"add": [523]}, "('S3ListenerTest', 'test_s3_put_object_notification', 62)": {"mod": [66, 67, 68, 70, 71, 74, 75, 76, 77, 90, 91, 92, 93, 94]}, "('S3ListenerTest', 'test_s3_upload_fileobj_with_large_file_notification', 117)": {"mod": [118, 119, 120, 122, 123, 124, 125, 126, 127, 136, 137, 138, 139, 140]}, "('S3ListenerTest', 'test_s3_multipart_upload_with_small_single_part', 161)": {"mod": [167, 168, 169, 171, 172, 173, 174, 175, 180, 181, 182, 183, 184]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["localstack/services/generic_proxy.py", "localstack/services/s3/s3_listener.py"], "doc": [], "test": ["tests/integration/test_s3.py"], "config": [], "asset": []}}, {"organization": "localstack", "repo_name": "localstack", "base_commit": "3cc0541a260c2f2af90e435f333c623e84ed4880", "iss_has_pr": 1, "iss_html_url": "https://github.com/localstack/localstack/issues/4137", "iss_label": "type: feature", "title": "Consider Kinesis-Mock over Kinesalite", "body": "# Type of request: This is a ...\r\n\r\n- [ ] bug report\r\n- [X] feature request\r\n\r\n# Detailed description\r\n\r\nKinesalite has been a great mock for a long time. However, it is missing several API calls (e.g. UpdateShards), and seems to be on life-support as of late (last commit being Oct. 2020).\r\n\r\n[Kinesis-Mock](https://github.com/etspaceman/kinesis-mock) is a new mock which supports all API calls except SubscribeToShard (due to lack of support for the required Http2 Features in the Scala ecosystem). It is distributed as a docker image, but there is also a jar executable that can be used. 
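For the presigned-URL report just above, a minimal way to exercise both upload paths: a direct SDK `put_object` (which did fire the SQS notification) and an HTTP PUT against a presigned URL (which reportedly did not). The endpoint, credentials, and bucket name are assumptions for a local setup:

```python
import boto3
import requests

# Assumed local endpoint and dummy credentials; bucket name is arbitrary.
s3 = boto3.client(
    "s3", endpoint_url="http://localhost:4566", region_name="us-east-1",
    aws_access_key_id="test", aws_secret_access_key="test",
)
s3.create_bucket(Bucket="demo-bucket")

# Path 1: direct SDK upload -- the notification was delivered for this one.
s3.put_object(Bucket="demo-bucket", Key="direct.txt", Body=b"hello")

# Path 2: presigned PUT -- per the report, this one did not notify SQS.
url = s3.generate_presigned_url(
    "put_object", Params={"Bucket": "demo-bucket", "Key": "presigned.txt"}, ExpiresIn=3600
)
requests.put(url, data=b"hello")
```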
\r\n\r\nI am the creator of Kinesis-Mock, so I can work with Localstack on any changes that would be needed to make this pairing work, if desired.\r\n", "pr_html_url": "https://github.com/localstack/localstack/pull/4152", "file_loc": {"base_commit": "3cc0541a260c2f2af90e435f333c623e84ed4880", "files": [{"path": "README.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [103, 189, 719], "mod": [193]}}}, {"path": "localstack/config.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [46]}}}, {"path": "localstack/services/kinesis/kinesis_listener.py", "status": "modified", "Loc": {"('ProxyListenerKinesis', 'forward_request', 37)": {"mod": [40, 62, 73, 81, 109, 117]}, "('ProxyListenerKinesis', 'return_response', 131)": {"mod": [173]}}}, {"path": "localstack/services/kinesis/kinesis_starter.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0, 1, 2, 11], "mod": [5, 7]}, "(None, 'start_kinesis', 23)": {"add": [23], "mod": [26]}, "(None, 'appy_patches', 13)": {"mod": [13]}}}, {"path": "tests/integration/test_cloudformation.py", "status": "modified", "Loc": {"('CloudFormationTest', 'test_create_delete_stack', 618)": {"mod": [694]}}}, {"path": "tests/integration/test_dynamodb.py", "status": "modified", "Loc": {"('TestDynamoDB', 'test_dynamodb_stream_stream_view_type', 368)": {"mod": [387]}}}, {"path": "tests/integration/test_kinesis.py", "status": "modified", "Loc": {"('TestKinesis', 'test_stream_consumers', 14)": {"add": [25, 30, 59], "mod": [54, 55, 56]}, "('TestKinesis', 'test_subscribe_to_shard', 65)": {"add": [73]}, "('TestKinesis', 'test_subscribe_to_shard_with_sequence_number_as_iterator', 109)": {"add": [117]}}}, {"path": "tests/unit/test_kinesis.py", "status": "modified", "Loc": {"('KinesisListenerTest', 'test_describe_stream_summary_is_redirected', 13)": {"mod": [14, 16, 18]}, "('KinesisListenerTest', 'test_overwrite_update_shard_count_on_error', 46)": {"mod": [47, 48, 49, 50, 52, 54, 55, 56, 57, 58]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["localstack/config.py", "localstack/services/kinesis/kinesis_starter.py", "localstack/services/kinesis/kinesis_listener.py"], "doc": ["README.md"], "test": ["tests/integration/test_kinesis.py", "tests/unit/test_kinesis.py", "tests/integration/test_dynamodb.py", "tests/integration/test_cloudformation.py"], "config": [], "asset": []}}, {"organization": "localstack", "repo_name": "localstack", "base_commit": "31286eb81823ee97e4e4a6b519abab9efcffe091", "iss_has_pr": 1, "iss_html_url": "https://github.com/localstack/localstack/issues/6155", "iss_label": "type: bug\naws:kinesis\naws:dynamodbstreams\narea: integration/sam", "title": "samlocal not returning on second and later deploys", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Current Behavior\n\nWhen executing samlocal deploy twice without updating the template it gets stuck after the deploy step (before the step of the output is shown)\r\n\r\nAlso i left it running for long time checking if it could finish and around 2 hours laters a kinesis stacktrace appeared on the logs on the docker container\r\n\r\n```\r\nlocalstack_main | 2022-05-26T09:14:21.135:INFO:localstack.services.kinesis.kinesis_mock_server: [io-compute-1] WARN 2022-05-26 09:14:21,132 k.m.cache.Cache x-amzn-RequestId=3a527f8b-dcd4-11ec-ac09-ad4f3aa847a9, 
action=GetRecords, contextId=3a527f8a-dcd4-11ec-ac09-ad4f3aa847a9, x-amz-id-2=WLxUzW0heAIF5/pAXeHZEM0qejb1MRVum6fgYxuWPz146y+KGwkjmNwjv9IWngSM8RaihKhqcdibbhN4kruU+3p8/FlKZnTp, contentType=application/x-amz-json-1.1 - Getting records was unuccessful\r\nlocalstack_main | 2022-05-26T09:14:21.135:INFO:localstack.services.kinesis.kinesis_mock_server: kinesis.mock.ExpiredIteratorException: The shard iterator has expired. Shard iterators are only valid for 300 seconds\r\nlocalstack_main | 2022-05-26T09:14:21.135:INFO:localstack.services.kinesis.kinesis_mock_server: at kinesis.mock.models.ShardIterator.parse(ShardIterator.scala:58)\r\nlocalstack_main | 2022-05-26T09:14:21.136:INFO:localstack.services.kinesis.kinesis_mock_server: at kinesis.mock.api.GetRecordsRequest.$anonfun$getRecords$1(GetRecordsRequest.scala:24)\r\nlocalstack_main | 2022-05-26T09:14:21.136:INFO:localstack.services.kinesis.kinesis_mock_server: at cats.effect.IOFiber.runLoop(IOFiber.scala:358)\r\nlocalstack_main | 2022-05-26T09:14:21.136:INFO:localstack.services.kinesis.kinesis_mock_server: at cats.effect.IOFiber.asyncContinueSuccessfulR(IOFiber.scala:1338)\r\nlocalstack_main | 2022-05-26T09:14:21.136:INFO:localstack.services.kinesis.kinesis_mock_server: at cats.effect.IOFiber.run(IOFiber.scala:140)\r\nlocalstack_main | 2022-05-26T09:14:21.136:INFO:localstack.services.kinesis.kinesis_mock_server: at cats.effect.unsafe.WorkerThread.run(WorkerThread.scala:549)\r\nlocalstack_main | 2022-05-26T09:14:21.136:INFO:localstack.services.kinesis.kinesis_mock_server: at com.oracle.svm.core.thread.JavaThreads.threadStartRoutine(JavaThreads.java:519)\r\nlocalstack_main | 2022-05-26T09:14:21.136:INFO:localstack.services.kinesis.kinesis_mock_server: at com.oracle.svm.core.posix.thread.PosixJavaThreads.pthreadStartRoutine(PosixJavaThreads.java:192)\r\nlocalstack_main | 2022-05-26T09:14:21.140:DEBUG:localstack.services.dynamodbstreams.provider: Shard iterator for underlying kinesis stream expired\r\nlocalstack_main | 2022-05-26T09:14:21.146:INFO:localstack.utils.threads: Thread run method >({'function_arn': 'arn:aws:lambda:us-west-2:000000000000:function:sandbox-events-generator-worker', 'stream_arn': 'arn:aws:dynamodb:us-west-2:000000000000:table/sandbox-events-generator-jobs/stream/2022-05-26T07:20:41.621', 'batch_size': 1, 'parallelization_factor': 1, 'lock_discriminator': '60cf3f5b-11af-4b81-840b-c62920e5f0cb/arn:aws:dynamodb:us-west-2:000000000000:table/sandbox-events-generator-jobs/stream/2022-05-26T07:20:41.621/shardId-00000001653500000000-000000000000', 'shard_id': 'shardId-00000001653500000000-000000000000', 'stream_client': , 'shard_iterator': 'AAAAAAAAAAEqli29q/ZrvGK0Qv58Ys0UOaNNnguVf1262Mr190addTsT21HR/XdUWnOyHg1FUUW4R774Gy1X2lmyJQMqkTKuh5nVySaVOmGBrjNRHabrLqpzejZqpTYba8lThyNRgs95fCdid2O4GmMSpaBEXElMSDpWQ/LU/Hb5NG3P0pInAfuajJsFpH8TjqTbNHNf3EBxC0OYM1EfSBu183HSLUkECOBmWfp87OOWPH+WiWiWzQ==', 'failure_destination': None, 'max_num_retries': inf}) failed: An error occurred (ExpiredIteratorException) when calling the GetRecords operation: Shard iterator has expired Traceback (most recent call last):\r\nlocalstack_main | File \"/opt/code/localstack/localstack/utils/threads.py\", line 39, in run\r\nlocalstack_main | result = self.func(self.params, **kwargs)\r\nlocalstack_main | File \"/opt/code/localstack/localstack/services/awslambda/event_source_listeners/stream_event_source_listener.py\", line 182, in _listen_to_shard_and_invoke_lambda\r\nlocalstack_main | records_response = stream_client.get_records(\r\nlocalstack_main | File 
\"/opt/code/localstack/.venv/lib/python3.10/site-packages/botocore/client.py\", line 508, in _api_call\r\nlocalstack_main | return self._make_api_call(operation_name, kwargs)\r\nlocalstack_main | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/botocore/client.py\", line 911, in _make_api_call\r\nlocalstack_main | raise error_class(parsed_response, operation_name)\r\nlocalstack_main | botocore.errorfactory.ExpiredIteratorException: An error occurred (ExpiredIteratorException) when calling the GetRecords operation: Shard iterator has expired\r\n```\n\n### Expected Behavior\n\nthe command finishes properly like in the first run\n\n### How are you starting LocalStack?\n\nWith a docker-compose file\n\n### Steps To Reproduce\n\n#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)\r\n\r\ndocker-compose up -d\r\n\r\ndocker-compose.yml:\r\n\r\n```\r\nversion: \"3.8\"\r\n\r\nservices:\r\n localstack:\r\n container_name: \"${LOCALSTACK_DOCKER_NAME-localstack_main}\"\r\n image: localstack/localstack:latest\r\n network_mode: bridge\r\n ports:\r\n - \"127.0.0.1:53:53\" # only required for Pro (DNS)\r\n - \"127.0.0.1:53:53/udp\" # only required for Pro (DNS)\r\n - \"127.0.0.1:443:443\" # only required for Pro (LocalStack HTTPS Edge Proxy)\r\n - \"127.0.0.1:4510-4559:4510-4559\" # external service port range\r\n - \"127.0.0.1:4566:4566\" # LocalStack Edge Proxy\r\n environment:\r\n - SERVICES=dynamodb,cloudformation,lambda,s3,sts,apigateway,iam\r\n - DEBUG=1\r\n - HOST_TMP_FOLDER=${TMPDIR:-/tmp/}localstack\r\n - DOCKER_HOST=unix:///var/run/docker.sock\r\n volumes:\r\n - \"${TMPDIR:-/tmp}/localstack:/tmp/localstack\"\r\n - \"/var/run/docker.sock:/var/run/docker.sock\"\r\n networks:\r\n net1:\r\n ipv4_address: 10.10.100.3\r\nnetworks:\r\n net1:\r\n driver: bridge\r\n enable_ipv6: false\r\n ipam:\r\n config:\r\n - subnet: 10.10.100.0/24\r\n gateway: 10.10.100.32\r\n```\r\n\r\ntemplate.yaml\r\n\r\n```\r\nAWSTemplateFormatVersion: '2010-09-09'\r\nTransform: AWS::Serverless-2016-10-31\r\nDescription: >\r\n test-sam\r\n \r\n SAM Template for test\r\n\r\nParameters:\r\n Environment:\r\n Type: String\r\n Default: sandbox\r\n Project:\r\n Type: String\r\n Default: project\r\n Component:\r\n Type: String\r\n Default: component\r\n DynamoDBUrl:\r\n Type: String\r\n Default: LOCALSTACK_HOSTNAME\r\n UseLocalstack:\r\n Type: String\r\n Default: 'true'\r\n AllowedValues: [true, false]\r\n\r\nConditions:\r\n UseLocalStack: !Equals [!Ref UseLocalstack, 'true']\r\n\r\n# More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst\r\nGlobals:\r\n Function:\r\n Timeout: 360\r\n Environment:\r\n Variables:\r\n REGION: !Ref \"AWS::Region\"\r\n ENVIRONMENT: !Ref Environment\r\n PROJECT: !Ref Project\r\n COMPONENT: !Ref Component\r\n DYNAMO_DB_JOB_TABLE: !Ref JobTable\r\n DYNAMO_DB_URL: !Ref DynamoDBUrl\r\n\r\nResources:\r\n TestFunction:\r\n Type: AWS::Serverless::Function\r\n Properties:\r\n FunctionName: !Join [ \"-\", [ !Ref Environment, !Ref Component, \"api\" ] ]\r\n CodeUri: api/\r\n Handler: api\r\n Runtime: go1.x\r\n Architectures:\r\n - x86_64\r\n Tracing: Active\r\n Events:\r\n CatchAll:\r\n Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api\r\n Properties:\r\n Path: /{proxy+}\r\n Method: ANY\r\n Environment:\r\n Variables:\r\n API_GATEWAY_V1_ENDPOINT: !Sub \"http://localhost:4566/restapis/${ApiGatewayIdV1}/\"\r\n 
Policies:\r\n - DynamoDBCrudPolicy:\r\n TableName: !Ref JobTable\r\n - AmazonAPIGatewayInvokeFullAccess\r\n\r\n WorkerFunction:\r\n Type: AWS::Serverless::Function\r\n Properties:\r\n FunctionName: !Join [ \"-\", [ !Ref Environment, !Ref Component, \"worker\" ] ]\r\n CodeUri: worker/\r\n Handler: worker\r\n Runtime: go1.x\r\n Architectures:\r\n - x86_64\r\n MemorySize: 512\r\n Timeout: 180\r\n Tracing: Active\r\n Events:\r\n JobTable:\r\n Type: DynamoDB\r\n Properties:\r\n Stream: !GetAtt JobTable.StreamArn\r\n StartingPosition: TRIM_HORIZON\r\n BatchSize: 1\r\n Environment:\r\n Variables:\r\n API_GATEWAY_ENDPOINT: !Sub \"http://localhost:4566/restapis/${ApiGatewayIdV2}/\"\r\n \r\n API_GATEWAY_BULK_ENDPOINT: !Sub \"http://localhost:4566/restapis/${ApiGatewayIdV2}/\"\r\n Policies:\r\n - DynamoDBCrudPolicy:\r\n TableName: !Ref JobTable\r\n - AmazonSNSFullAccess\r\n - AmazonAPIGatewayInvokeFullAccess\r\n\r\n JobTable:\r\n Type: AWS::DynamoDB::Table\r\n Properties:\r\n TableName: !Join [ \"-\", [ !Ref Environment, !Ref Component, \"jobs\" ] ]\r\n AttributeDefinitions:\r\n - AttributeName: Id\r\n AttributeType: S\r\n KeySchema:\r\n - AttributeName: Id\r\n KeyType: HASH\r\n BillingMode: PAY_PER_REQUEST\r\n StreamSpecification:\r\n StreamViewType: NEW_IMAGE\r\n\r\n\r\nOutputs:\r\n # ServerlessRestApi is an implicit API created out of Events key under Serverless::Function\r\n # Find out more about other implicit resources you can reference within SAM\r\n # https://github.com/awslabs/serverless-application-model/blob/master/docs/internals/generated_resources.rst#api\r\n EventsGeneratorAPI:\r\n Description: \"API Gateway endpoint URL for Prod environment for First Function\"\r\n Value: !Sub \"http://localhost:4566/restapis/${ServerlessRestApi}/Prod/api/\"\r\n\r\n```\r\n\r\n#### Client commands (e.g., AWS SDK code snippet, or sequence of \"awslocal\" commands)\r\n\r\n awslocal s3 mb s3://mybucket\r\n\n\n### Environment\n\n```markdown\n- OS: \r\n\r\nMacOS Monterey 12.4\r\n\r\n- LocalStack: \r\nLocalStack version: 0.14.3.1\r\nLocalStack Docker container id: 95b639ae197e\r\nLocalStack build date: 2022-05-25\r\nLocalStack build git hash: 2a564393\n```\n\n\n### Anything else?\n\n_No response_", "pr_html_url": "https://github.com/localstack/localstack/pull/6780", "file_loc": {"base_commit": "31286eb81823ee97e4e4a6b519abab9efcffe091", "files": [{"path": "localstack/services/cloudformation/models/cdk.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [1]}, "('CDKMetadata', None, 4)": {"add": [10, 12]}, "('CDKMetadata', 'get_deploy_templates', 12)": {"mod": [14]}}}, {"path": "localstack/services/cloudformation/models/ec2.py", "status": "modified", "Loc": {"('EC2RouteTable', 'get_deploy_templates', 35)": {"mod": [46]}}}, {"path": "localstack/services/cloudformation/provider.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [3], "mod": [74]}, "('Stack', '_set_resource_status_details', 221)": {"add": [238], "mod": [224]}, "('CloudformationProvider', 'describe_stack_resources', 1180)": {"add": [1198]}, "('CloudformationProvider', 'describe_change_set', 960)": {"mod": [983]}, "('CloudformationProvider', 'list_stack_resources', 1202)": {"mod": [1206]}}}, {"path": "localstack/utils/cloudformation/template_deployer.py", "status": "modified", "Loc": {"('TemplateDeployer', 'apply_change_set', 1218)": {"add": [1220], "mod": [1219, 1225]}, "('TemplateDeployer', 'construct_changes', 1474)": {"add": [1477], "mod": [1489]}, "('TemplateDeployer', 'prepare_should_deploy_change', 1662)": {"add": 
[1677], "mod": [1679, 1680, 1681, 1682, 1683]}, "('TemplateDeployer', 'apply_change', 1699)": {"add": [1703], "mod": [1709]}, "(None, 'execute_resource_action', 861)": {"mod": [888]}, "(None, 'get_action_name_for_resource_change', 1032)": {"mod": [1032]}, "('TemplateDeployer', 'deploy_stack', 1203)": {"mod": [1209]}, "('TemplateDeployer', 'update_stack', 1236)": {"mod": [1239]}, "('TemplateDeployer', 'init_resource_status', 1339)": {"mod": [1343]}, "('TemplateDeployer', 'update_resource_details', 1345)": {"mod": [1360]}, "('TemplateDeployer', None, 1187)": {"mod": [1473, 1555, 1581]}, "('TemplateDeployer', 'apply_changes', 1509)": {"mod": [1513, 1552]}, "('TemplateDeployer', '_run', 1558)": {"mod": [1560]}, "('TemplateDeployer', 'do_apply_changes_in_loop', 1581)": {"mod": [1625]}}}, {"path": "tests/integration/cloudformation/test_cloudformation_stacks.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0, 9]}, "(None, 'test_get_template', 251)": {"add": [271]}, "(None, 'test_list_stack_resources_for_removed_resource', 51)": {"mod": [71, 89]}}}, {"path": "tests/integration/cloudformation/test_cloudformation_stacks.snapshot.json", "status": "modified", "Loc": {"(None, None, None)": {"add": [183]}}}, {"path": "tests/integration/templates/template36.yaml", "status": "modified", "Loc": {"(None, None, None)": {"add": [41, 54], "mod": [46, 47]}}}, {"path": "tests/integration/test_cloudformation.py", "status": "modified", "Loc": {"('TestCloudFormation', None, 501)": {"add": [1783]}, "('TestCloudFormation', 'test_cfn_with_multiple_route_tables', 1784)": {"mod": [1785, 1786, 1787, 1789, 1791, 1792, 1793, 1795, 1796, 1797]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["localstack/utils/cloudformation/template_deployer.py", "localstack/services/cloudformation/models/cdk.py", "tests/integration/cloudformation/test_cloudformation_stacks.snapshot.json", "localstack/services/cloudformation/models/ec2.py", "localstack/services/cloudformation/provider.py"], "doc": [], "test": ["tests/integration/cloudformation/test_cloudformation_stacks.py", "tests/integration/test_cloudformation.py"], "config": ["tests/integration/templates/template36.yaml"], "asset": []}}, {"organization": "localstack", "repo_name": "localstack", "base_commit": "0cf839ae1237e9b5aa9479d80e8f3f1eb3b79b5d", "iss_has_pr": 1, "iss_html_url": "https://github.com/localstack/localstack/issues/164", "iss_label": "priority: high\ntype: feature", "title": "Data persistence for all services", "body": "We should document our roadmap for extended data persistence. (So far, persistent state is only supported for a few of the services). 
We'll keep this ticket as a reminder in the meantime.", "pr_html_url": "https://github.com/localstack/localstack/pull/2382", "file_loc": {"base_commit": "0cf839ae1237e9b5aa9479d80e8f3f1eb3b79b5d", "files": [{"path": "README.md", "status": "modified", "Loc": {"(None, None, None)": {"mod": [211]}}}, {"path": "localstack/constants.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [34]}}}, {"path": "localstack/plugins.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [4]}, "(None, 'do_register_localstack_plugins', 29)": {"mod": [144]}}}, {"path": "localstack/services/infra.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [15, 16]}, "(None, 'start_apigateway', 82)": {"mod": [82, 83, 84, 85]}, "(None, 'start_events', 99)": {"mod": [99, 100, 101, 102]}, "(None, 'start_secretsmanager', 151)": {"mod": [151, 152, 153]}}}, {"path": "localstack/services/s3/s3_listener.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [27], "mod": [21, 29]}, "('ProxyListenerS3', 'return_response', 995)": {"add": [1003], "mod": [998, 1001]}, "('ProxyListenerS3', None, 826)": {"mod": [826, 995]}}}, {"path": "localstack/services/secretsmanager/secretsmanager_starter.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0, 3]}, "(None, 'start_secretsmanager', 22)": {"add": [23, 30], "mod": [22, 29]}}}, {"path": "localstack/services/ssm/ssm_listener.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [4]}, "('ProxyListenerSSM', None, 19)": {"mod": [19]}}}, {"path": "localstack/utils/persistence.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [1, 6, 9]}, "(None, 'should_record', 29)": {"mod": [29, 31, 32, 33]}, "(None, 'record', 36)": {"mod": [46, 49, 54, 55]}, "(None, 'get_recordable_data', 54)": {"mod": [57, 58, 59, 60, 61]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["localstack/constants.py", "localstack/utils/persistence.py", "localstack/services/ssm/ssm_listener.py", "localstack/plugins.py", "localstack/services/infra.py", "localstack/services/secretsmanager/secretsmanager_starter.py", "localstack/services/s3/s3_listener.py"], "doc": ["README.md"], "test": [], "config": [], "asset": []}}, {"organization": "localstack", "repo_name": "localstack", "base_commit": "23cd5fba5b3a2012f280a10b0d7266514fc46eb5", "iss_has_pr": 1, "iss_html_url": "https://github.com/localstack/localstack/issues/451", "iss_label": "area: configuration\ntype: feature", "title": "Unable to use self-signed certs - CN incorrect, and/or missing subject alternative name field", "body": "When enabling SSL for the services (required as the Kinesis producer will only support HTTPS), I would like to add the generated self-signed cert to my java truststore so that I can interact with the services. In some instances, I can disable SSL verification, but in others I cannot (for example, when using the Jest ES client library). \r\n\r\nI have tried adding the generated certs to my truststore, however, there is no Subject Alternative Name field, and the CN on the cert doesn't match the host (localhost in this instance), so I'm unable to make use of the cert. 
If I add a `localstack` entry to my hosts file, it works.\r\n\r\nThe error essentially looks like this:\r\n\r\n```\r\nCaused by: javax.net.ssl.SSLPeerUnverifiedException: Certificate for doesn't match any of the subject alternative names: []\r\n```\r\nPlease consider setting the certificate CN to the configured hostname (defaulting to localhost), and or add the subject alternative name field to the cert, which would include the various DNS entries to enable use of the cert.", "pr_html_url": "https://github.com/localstack/localstack/pull/1742", "file_loc": {"base_commit": "23cd5fba5b3a2012f280a10b0d7266514fc46eb5", "files": [{"path": "localstack/utils/common.py", "status": "modified", "Loc": {"(None, 'generate_ssl_cert', 779)": {"mod": [806]}}}, {"path": "tests/integration/test_sqs.py", "status": "modified", "Loc": {"('SQSTest', None, 29)": {"add": [46]}, "('SQSTest', 'test_set_queue_policy', 58)": {"mod": [59, 60]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["localstack/utils/common.py"], "doc": [], "test": ["tests/integration/test_sqs.py"], "config": [], "asset": []}}, {"organization": "localstack", "repo_name": "localstack", "base_commit": "c2c025a96888ce091adc4d9c6c9053af86704c4f", "iss_has_pr": 1, "iss_html_url": "https://github.com/localstack/localstack/issues/3336", "iss_label": "aws:sns\nstatus: resolved/stale", "title": "SNS Fifo topic ", "body": "\r\n\r\n# Type of request: This is a bug report\r\n\r\n# Detailed description\r\n`aws --endpoint-url=http://localhost:4575 sns create-topic --name command_post_topic.fifo --attributes FifoTopic=true --attributes ContentBasedDeduplication=false --region us-east-1`\r\n\r\nWhen the above command is executed the following error is thrown\r\n\r\n\r\n\r\n> _**`An error occurred (InvalidParameterValue) when calling the CreateTopic operation: Topic names must be made up of only uppercase and lowercase ASCII letters, numbers, underscores, and hyphens, and must be between 1 and 256 characters long.`**_\r\n\r\n\r\n\r\nHere the error is thrown because of the \".fifo\" suffix which is necessary according to AWS. So not able to create a topic with 'fifo' suffix. 
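The FIFO-topic rejection above is straightforward to reproduce from Python as well; this mirrors the reported CLI call, with the endpoint and dummy credentials assumed for a default LocalStack setup:

```python
import boto3

sns = boto3.client(
    "sns", endpoint_url="http://localhost:4566", region_name="us-east-1",
    aws_access_key_id="test", aws_secret_access_key="test",
)

# On AWS, FIFO topic names *must* end in ".fifo"; the report is that the
# name validation here rejected exactly that suffix.
topic = sns.create_topic(
    Name="command_post_topic.fifo",
    Attributes={"FifoTopic": "true", "ContentBasedDeduplication": "false"},
)
print(topic["TopicArn"])
```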
\r\n", "pr_html_url": "https://github.com/getmoto/moto/pull/3533", "file_loc": {"base_commit": "c2c025a96888ce091adc4d9c6c9053af86704c4f", "files": [{"path": "localstack/services/awslambda/lambda_api.py", "status": "modified", "Loc": {"(None, 'forward_to_fallback_url', 878)": {"mod": [900]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["localstack/services/awslambda/lambda_api.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "localstack", "repo_name": "localstack", "base_commit": "3a6a3301fca769f2b9c5adbc5c19db442c02e03c", "iss_has_pr": 1, "iss_html_url": "https://github.com/localstack/localstack/issues/8444", "iss_label": "status: resolved/fixed\ntype: feature\naws:s3", "title": "enhancement request: support for s3:ObjectRestore:* bucket notifications", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Enhancement description\n\nCurrently, I'm using Localstack to test locally a lambda function that takes s3`ObjectRestore:Completed` notifications as inputs and it would be really great to have support for these events.\r\n\r\nI know that right now as a workaround I can invoke the lambda function manually using a payload with the same shape that s3 uses, but it's better to have the process run as close as it would run in AWS.\r\n\r\nThanks for creating and maintaining localstack, it's really great!\n\n### \ud83e\uddd1\u200d\ud83d\udcbb Implementation\n\nNot sure, but happy to help if you can give me some pointers.\n\n### Anything else?\n\n_No response_", "pr_html_url": "https://github.com/localstack/localstack/pull/8690", "file_loc": {"base_commit": "3a6a3301fca769f2b9c5adbc5c19db442c02e03c", "files": [{"path": "localstack/services/events/provider.py", "status": "modified", "Loc": {"(None, 'events_handler_put_events', 542)": {"add": [567], "mod": [575]}}}, {"path": "localstack/services/s3/notifications.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0, 28, 57], "mod": [44]}, "('S3EventNotificationContext', None, 87)": {"add": [89, 97]}, "('S3EventNotificationContext', 'from_request_context', 100)": {"add": [136, 139, 143, 149]}, "('BaseNotifier', '_get_event_payload', 303)": {"add": [349], "mod": [313, 329, 342]}, "('EventBridgeNotifier', '_get_event_payload', 557)": {"add": [564, 612]}}}, {"path": "localstack/services/s3/provider.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0, 127]}, "('S3Provider', None, 234)": {"add": [1680]}}}, {"path": "tests/integration/s3/test_s3_notifications_eventbridge.py", "status": "modified", "Loc": {"('TestS3NotificationsToEventBridge', 'test_object_put_acl', 126)": {"add": [178]}}}, {"path": "tests/integration/s3/test_s3_notifications_eventbridge.snapshot.json", "status": "modified", "Loc": {"(None, None, None)": {"add": [172]}}}, {"path": "tests/integration/s3/test_s3_notifications_sqs.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [11]}, "('TestS3NotificationsToSQS', 'test_object_put_acl', 962)": {"add": [1018]}}}, {"path": "tests/integration/s3/test_s3_notifications_sqs.snapshot.json", "status": "modified", "Loc": {"(None, None, None)": {"add": [993]}}}, {"path": "tests/integration/test_events.py", "status": "modified", "Loc": {"('TestEvents', 'test_test_event_pattern', 1823)": {"add": [1863]}}}, {"path": "tests/integration/test_events.snapshot.json", 
"status": "modified", "Loc": {"(None, None, None)": {"add": [173]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["tests/integration/s3/test_s3_notifications_eventbridge.snapshot.json", "tests/integration/test_events.snapshot.json", "localstack/services/events/provider.py", "tests/integration/s3/test_s3_notifications_sqs.snapshot.json", "localstack/services/s3/notifications.py", "localstack/services/s3/provider.py"], "doc": [], "test": ["tests/integration/s3/test_s3_notifications_eventbridge.py", "tests/integration/test_events.py", "tests/integration/s3/test_s3_notifications_sqs.py"], "config": [], "asset": []}}, {"organization": "localstack", "repo_name": "localstack", "base_commit": "784d5c3329b9fd0b77db92ee464c2f5404eab93b", "iss_has_pr": 1, "iss_html_url": "https://github.com/localstack/localstack/issues/187", "iss_label": "type: feature", "title": "Pass custom environment variables to lambda functions", "body": "Is it possible to pass custom environment variables when invoking lambda functions? Ideally I'd like to send the environment variables defined in the docker-compose.yml file to the docker run command here https://github.com/localstack/localstack/blob/d9b2715ba1776e57fabb9e46864e9c5d14d0933b/localstack/services/awslambda/lambda_api.py#L281 but maybe there's a better way of doing it from your point of view.", "pr_html_url": "https://github.com/localstack/localstack/pull/262", "file_loc": {"base_commit": "784d5c3329b9fd0b77db92ee464c2f5404eab93b", "files": [{"path": "localstack/services/awslambda/lambda_api.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [15, 62], "mod": [70, 71]}, "(None, 'run_lambda', 219)": {"add": [230, 252, 257, 269], "mod": [262, 271]}, "(None, 'do_execute', 286)": {"add": [289], "mod": [287]}, "(None, 'set_function_code', 362)": {"add": [372], "mod": [438]}, "(None, 'create_function', 468)": {"add": [486]}, "(None, 'update_function_configuration', 597)": {"add": [610]}, "(None, 'exec_lambda_code', 306)": {"mod": [306, 307, 308, 309, 310, 311, 326, 327, 328, 329]}}}, {"path": "localstack/utils/testutil.py", "status": "modified", "Loc": {"(None, 'create_lambda_function', 105)": {"mod": [106, 119]}}}, {"path": "tests/integration/test_lambda.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [16, 21]}, "(None, 'test_lambda_runtimes', 61)": {"add": [106]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["localstack/services/awslambda/lambda_api.py"], "doc": [], "test": ["tests/integration/test_lambda.py", "localstack/utils/testutil.py"], "config": [], "asset": []}}, {"organization": "localstack", "repo_name": "localstack", "base_commit": "83ff0cb0a0366db3c8067eef40b7869f15e7d05e", "iss_has_pr": 1, "iss_html_url": "https://github.com/localstack/localstack/issues/1860", "iss_label": "status: resolved/fixed\narea: integration/terraform", "title": "Route53 Add HostedZone or Add recordSet failing when done via terraform ", "body": "I am trying to create route53 hosted zones/record set addition ussing terraform. 
Though the resources are getting executed, terraform is ultimately failing.\n\nOn digging I see that terraform is calling the GetChange API after the resource creation API to check the status of changes, and it seems like the GetChange API is not implemented in localstack?\n\n```\n\n2019-12-11T12:08:02.818-0500 [DEBUG] plugin.terraform-provider-aws_v2.41.0_x4:\n2019-12-11T12:08:02.818-0500 [DEBUG] plugin.terraform-provider-aws_v2.41.0_x4:\n2019-12-11T12:08:02.818-0500 [DEBUG] plugin.terraform-provider-aws_v2.41.0_x4: -----------------------------------------------------\n2019-12-11T12:08:02.818-0500 [DEBUG] plugin.terraform-provider-aws_v2.41.0_x4: 2019/12/11 12:08:02 [DEBUG] [aws-sdk-go] \n2019-12-11T12:08:02.818-0500 [DEBUG] plugin.terraform-provider-aws_v2.41.0_x4: 404 Not Found\n2019-12-11T12:08:02.818-0500 [DEBUG] plugin.terraform-provider-aws_v2.41.0_x4: <!DOCTYPE HTML ...>\n<title>404 Not Found</title>\n<h1>Not Found</h1>\n<p>The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.</p>
      \n2019-12-11T12:08:02.818-0500 [DEBUG] plugin.terraform-provider-aws_v2.41.0_x4: 2019/12/11 12:08:02 [DEBUG] [aws-sdk-go] DEBUG: Validate Response route53/GetChange failed, at\ntempt 0/25, error SerializationError: failed to unmarshal error message\n2019-12-11T12:08:02.818-0500 [DEBUG] plugin.terraform-provider-aws_v2.41.0_x4: status code: 404, request id:\n2019-12-11T12:08:02.818-0500 [DEBUG] plugin.terraform-provider-aws_v2.41.0_x4: caused by: UnmarshalError: failed to unmarshal error message\n2019-12-11T12:08:02.818-0500 [DEBUG] plugin.terraform-provider-aws_v2.41.0_x4: 00000000 3c 21 44 4f 43 54 59 50 45 20 48 54 4d 4c 20 50 |.4|\n2019-12-11T12:08:02.818-0500 [DEBUG] plugin.terraform-provider-aws_v2.41.0_x4: 00000040 30 34 20 4e 6f 74 20 46 6f 75 6e 64 3c 2f 74 69 |04 Not Found</ti|\n2019-12-11T12:08:02.818-0500 [DEBUG] plugin.terraform-provider-aws_v2.41.0_x4: 00000050 74 6c 65 3e 0a 3c 68 31 3e 4e 6f 74 20 46 6f 75 |tle>.<h1>Not Fou|\n2019-12-11T12:08:02.818-0500 [DEBUG] plugin.terraform-provider-aws_v2.41.0_x4: 00000060 6e 64 3c 2f 68 31 3e 0a 3c 70 3e 54 68 65 20 72 |nd</h1>.<p>The r|\n2019-12-11T12:08:02.818-0500 [DEBUG] plugin.terraform-provider-aws_v2.41.0_x4: 00000070 65 71 75 65 73 74 65 64 20 55 52 4c 20 77 61 73 |equested URL was|\n2019-12-11T12:08:02.818-0500 [DEBUG] plugin.terraform-provider-aws_v2.41.0_x4: 00000080 20 6e 6f 74 20 66 6f 75 6e 64 20 6f 6e 20 74 68 | not found on th|\n2019-12-11T12:08:02.818-0500 [DEBUG] plugin.terraform-provider-aws_v2.41.0_x4: 00000090 65 20 73 65 72 76 65 72 2e 20 49 66 20 79 6f 75 |e server. If you|\n2019-12-11T12:08:02.818-0500 [DEBUG] plugin.terraform-provider-aws_v2.41.0_x4: 000000a0 20 65 6e 74 65 72 65 64 20 74 68 65 20 55 52 4c | entered the URL|\n2019-12-11T12:08:02.818-0500 [DEBUG] plugin.terraform-provider-aws_v2.41.0_x4: 000000b0 20 6d 61 6e 75 61 6c 6c 79 20 70 6c 65 61 73 65 | manually please|\n2019-12-11T12:08:02.818-0500 [DEBUG] plugin.terraform-provider-aws_v2.41.0_x4: 000000c0 20 63 68 65 63 6b 20 79 6f 75 72 20 73 70 65 6c | check your spel|\n2019-12-11T12:08:02.818-0500 [DEBUG] plugin.terraform-provider-aws_v2.41.0_x4: 000000d0 6c 69 6e 67 20 61 6e 64 20 74 72 79 20 61 67 61 |ling and try aga|\n2019-12-11T12:08:02.818-0500 [DEBUG] plugin.terraform-provider-aws_v2.41.0_x4: 000000e0 69 6e 2e 3c 2f 70 3e 0a |in.</p>.|\n2019-12-11T12:08:02.818-0500 [DEBUG] plugin.terraform-provider-aws_v2.41.0_x4:\n2019-12-11T12:08:02.818-0500 [DEBUG] plugin.terraform-provider-aws_v2.41.0_x4: caused by: unknown error response tag, {{ title} []}\n\n```\n\n\n\n\u2506Issue is synchronized with this [Jira Bug](https://localstack.atlassian.net/browse/LOC-141) by [Unito](https://www.unito.io/learn-more)\n", "pr_html_url": "https://github.com/localstack/localstack/pull/3248", "file_loc": {"base_commit": "83ff0cb0a0366db3c8067eef40b7869f15e7d05e", "files": [{"path": "localstack/plugins.py", "status": "modified", "Loc": {"(None, 'do_register_localstack_plugins', 29)": {"add": [39], "mod": [142]}}}, {"path": "localstack/services/infra.py", "status": "modified", "Loc": {"(None, 'start_route53', 104)": {"mod": [104, 106]}}}, {"path": "tests/integration/test_route53.py", "status": "modified", "Loc": {"('TestRoute53', 'test_create_hosted_zone', 7)": {"mod": [13, 14]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["localstack/plugins.py", "localstack/services/infra.py"], "doc": [], 
"test": ["tests/integration/test_route53.py"], "config": [], "asset": []}}, {"organization": "localstack", "repo_name": "localstack", "base_commit": "6aafbcdebade24b26705913cbc413dc7d50dad7a", "iss_has_pr": 1, "iss_html_url": "https://github.com/localstack/localstack/issues/11048", "iss_label": "type: bug\naws:ssm\nstatus: backlog", "title": "bug: get-parameter and get-parameters on SSM does not work with ARNs (Localstack 3.5.0)", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Current Behavior\n\nQueries to Localstack SSM endpoints with the `get-parameter` or `get-parameters` commands do not work if parameter names are provided as ARNs. This appears to be due to the internal LocalStack parameter validation disallowing forward slashes in SSM parameter names. We observe the following:\r\n\r\n```\r\n$ awslocal ssm get-parameter --name arn:aws:service:us-east-1:0000000000:parameter/myparam\r\nAn error occurred (ValidationException) when calling the GetParameter operation: Parameter name: can't be prefixed with \"ssm\" (case-insensitive). If formed as a path, it can consist of sub-paths divided by slash symbol; each sub-path can be formed as a mix of letters, numbers and the following 3 symbols .-_\r\n```\r\n\r\nRemoving the forward slash passes the input validation, but obviously fails to fetch a parameter:\r\n\r\n```\r\n$ awslocal ssm get-parameter --name arn:aws:service:us-east-1:0000000000:parametermyparam\r\nAn error occurred (ParameterNotFound) when calling the GetParameter operation: Parameter arn:aws:service:us-east-1:0000000000:parametermyparam not found.\r\n```\r\n\r\n\n\n### Expected Behavior\n\n`get-parameter` and `get-parameters` should allow ARNs in names, following the [official docs](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ssm/get-parameters.html#options)\n\n### How are you starting LocalStack?\n\nWith a docker-compose file\n\n### Steps To Reproduce\n\n#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)\r\n\r\n localstack start -d\r\n\r\n#### Client commands (e.g., AWS SDK code snippet, or sequence of \"awslocal\" commands)\r\n\r\n awslocal ssm get-parameter --name arn:aws:service:us-east-1:0000000000:parameter/myparam\r\n\n\n### Environment\n\n```markdown\n- OS: Sonoma 14.5\r\n- LocalStack:\r\n LocalStack version: 3.5.1.dev20240618022512\r\n LocalStack Docker image sha: sha256:5cd0557de2fdfac98d8d26d2f861b8266dcfc07ed09dbdacad7dc21ee2560310\r\n LocalStack build date: 2024-06-18\r\n LocalStack build git hash: 666e239\n```\n\n\n### Anything else?\n\n_No response_", "pr_html_url": "https://github.com/localstack/localstack/pull/11218", "file_loc": {"base_commit": "6aafbcdebade24b26705913cbc413dc7d50dad7a", "files": [{"path": "localstack-core/localstack/services/ssm/provider.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [82]}, "('SsmProvider', None, 118)": {"add": [364]}}}, {"path": "localstack-core/localstack/utils/aws/arns.py", "status": "modified", "Loc": {"(None, 's3_bucket_name', 548)": {"add": [549]}}}, {"path": "tests/aws/services/ssm/test_ssm.py", "status": "modified", "Loc": {"('TestSSM', None, 26)": {"add": [151]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["localstack-core/localstack/utils/aws/arns.py", 
"localstack-core/localstack/services/ssm/provider.py"], "doc": [], "test": ["tests/aws/services/ssm/test_ssm.py"], "config": [], "asset": []}}, {"organization": "localstack", "repo_name": "localstack", "base_commit": "1aad84d96159f6d12f872357e04f080a39836f5f", "iss_has_pr": 1, "iss_html_url": "https://github.com/localstack/localstack/issues/11253", "iss_label": "type: bug\nstatus: resolved/fixed\naws:apigateway", "title": "bug: API Gateway V1 (targeted to Lambda) gives 500 error with // in path", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Current Behavior\n\nWith API Gateway V1, I have a `/orders` path that targets a Lambda function. When accessing the URL using the syntax:\r\n\r\n`curl -v https://45szzn0od7.execute-api.localhost.localstack.cloud:4566/prod/orders` I see the correct response.\r\n \r\nHowever, if there's an extra `/` in the path after `/prod` I see a 500 response:\r\n\r\n`curl -v https://45szzn0od7.execute-api.localhost.localstack.cloud:4566/prod//orders`\r\n\r\nI see the response message:\r\n\r\n```\r\n\"__type\": \"InternalError\", \"message\": \"exception while calling apigateway with unknown operation: 308 Permanent Redirect: http://45szzn0od7.execute-api.localhost.localstack.cloud:4566/prod/orders\"\r\n```\r\n\r\nI keep on hitting this bug because I have `API_URL` set to the base URL, and then call `$API_URL/orders`. If the `API_URL` contains a trailing `/`, it crashes.\r\n\r\nI'm not sure if the Lambda integration is relevant or not (or whether other integrations will also see the problem). I can confirm that my Lambda function is _not_ invoked when `/prod//orders` is used.\n\n### Expected Behavior\n\nIn the AWS service, both of these work correctly, regardless of whether the path is `/prod/orders` or `/prod//orders`.\n\n### How are you starting LocalStack?\n\nWith a docker-compose file\n\n### Steps To Reproduce\n\n#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)\r\n\r\n docker-composed up\r\n\r\n#### Client commands (e.g., AWS SDK code snippet, or sequence of \"awslocal\" commands)\r\n\r\nAPI Gateway V1 created with CDK\r\n\r\n```\r\nconst api = new apigateway.RestApi(this, 'example-api')\r\nconst ordersApi = api.root.addResource('orders')\r\nconst ordersLambdaInt = new apigateway.LambdaIntegration(orderLambda, { proxy: true })\r\nordersApi.addMethod('GET', ordersLambdaInt)\r\n``` \r\n\r\nI don't have the `awslocal` commands on hand, but I can figure them out if necessary.\n\n### Environment\n\n```markdown\n- OS: MacOS Sonoma 4.5\r\n- LocalStack: \r\n LocalStack version: 3.5.1.dev\r\n LocalStack Docker image sha: (built from latest source)\r\n LocalStack build date: 2024-07-23\r\n LocalStack build git hash: a0a1ba090\n```\n\n\n### Anything else?\n\n_No response_", "pr_html_url": "https://github.com/localstack/localstack/pull/11304", "file_loc": {"base_commit": "96f447ffcc6c56821b4f0b1e2c603a3976949307", "files": [{"path": "localstack-core/localstack/services/apigateway/context.py", "status": "modified", "Loc": {"('ApiInvocationContext', None, 21)": {"add": [100]}, "('ApiInvocationContext', 'path_with_query_string', 117)": {"add": [118]}, "('ApiInvocationContext', '__init__', 68)": {"mod": [80]}}}, {"path": "localstack-core/localstack/services/apigateway/helpers.py", "status": "modified", "Loc": {"(None, 'get_event_request_context', 1497)": {"add": [1511], "mod": [1506, 1507, 1508]}}}, {"path": 
"localstack-core/localstack/services/apigateway/next_gen/execute_api/integrations/aws.py", "status": "modified", "Loc": {"('RestApiAwsProxyIntegration', 'create_lambda_input_event', 494)": {"mod": [517]}}}, {"path": "tests/aws/services/apigateway/test_apigateway_lambda.py", "status": "modified", "Loc": {"(None, 'test_lambda_aws_proxy_integration', 81)": {"add": [175, 190], "mod": [85, 132, 135, 136, 137, 138, 139, 140, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 154, 155, 156, 157, 158, 159, 181, 185, 186, 187, 192, 198, 199, 200, 201, 204, 205, 206, 212, 213, 214, 222, 223, 224, 225, 226, 227, 245, 246, 247, 248, 249, 250]}, "(None, None, None)": {"add": [276]}, "(None, 'invoke_api', 161)": {"mod": [164, 173, 174]}}}, {"path": "tests/aws/services/apigateway/test_apigateway_lambda.snapshot.json", "status": "modified", "Loc": {"(None, None, 1236)": {"add": [1236]}, "(None, None, 3)": {"mod": [3]}, "(None, None, 84)": {"mod": [84]}, "(None, None, 86)": {"mod": [86]}, "(None, None, 111)": {"mod": [111]}, "(None, None, 118)": {"mod": [118]}, "(None, None, 202)": {"mod": [202]}, "(None, None, 204)": {"mod": [204]}, "(None, None, 229)": {"mod": [229]}, "(None, None, 236)": {"mod": [236]}, "(None, None, 320)": {"mod": [320]}, "(None, None, 322)": {"mod": [322]}, "(None, None, 347)": {"mod": [347]}, "(None, None, 354)": {"mod": [354]}, "(None, None, 374)": {"mod": [374]}, "(None, None, 422)": {"mod": [422]}, "(None, None, 438)": {"mod": [438]}, "(None, None, 440)": {"mod": [440]}, "(None, None, 465)": {"mod": [465]}, "(None, None, 472)": {"mod": [472]}, "(None, None, 492)": {"mod": [492]}, "(None, None, 540)": {"mod": [540]}, "(None, None, 556)": {"mod": [556]}, "(None, None, 558)": {"mod": [558]}, "(None, None, 583)": {"mod": [583]}, "(None, None, 590)": {"mod": [590]}, "(None, None, 610)": {"mod": [610]}, "(None, None, 658)": {"mod": [658]}, "(None, None, 678)": {"mod": [678]}, "(None, None, 680)": {"mod": [680]}, "(None, None, 707)": {"mod": [707]}, "(None, None, 714)": {"mod": [714]}, "(None, None, 734)": {"mod": [734]}, "(None, None, 782)": {"mod": [782]}, "(None, None, 802)": {"mod": [802]}, "(None, None, 804)": {"mod": [804]}, "(None, None, 831)": {"mod": [831]}, "(None, None, 838)": {"mod": [838]}, "(None, None, 858)": {"mod": [858]}, "(None, None, 906)": {"mod": [906]}, "(None, None, 922)": {"mod": [922]}, "(None, None, 924)": {"mod": [924]}, "(None, None, 949)": {"mod": [949]}, "(None, None, 956)": {"mod": [956]}, "(None, None, 976)": {"mod": [976]}, "(None, None, 1024)": {"mod": [1024]}, "(None, None, 1059)": {"mod": [1059]}, "(None, None, 1061)": {"mod": [1061]}, "(None, None, 1093)": {"mod": [1093]}, "(None, None, 1100)": {"mod": [1100]}, "(None, None, 1123)": {"mod": [1123]}, "(None, None, 1173)": {"mod": [1173]}, "(None, None, 1196)": {"mod": [1196]}, "(None, None, 1198)": {"mod": [1198]}, "(None, None, 1226)": {"mod": [1226]}, "(None, None, 1233)": {"mod": [1233]}}}, {"path": "tests/aws/services/apigateway/test_apigateway_lambda.validation.json", "status": "modified", "Loc": {"(None, None, 12)": {"mod": [12]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["localstack-core/localstack/services/apigateway/next_gen/execute_api/integrations/aws.py", "localstack-core/localstack/services/apigateway/helpers.py", "tests/aws/services/apigateway/test_apigateway_lambda.snapshot.json", 
"tests/aws/services/apigateway/test_apigateway_lambda.validation.json", "localstack-core/localstack/services/apigateway/context.py"], "doc": [], "test": ["tests/aws/services/apigateway/test_apigateway_lambda.py"], "config": [], "asset": []}}, {"organization": "localstack", "repo_name": "localstack", "base_commit": "61535b7d970493d9bb6740a03d698d075dd0a3b9", "iss_has_pr": 1, "iss_html_url": "https://github.com/localstack/localstack/issues/11905", "iss_label": "type: bug\naws:kms\nstatus: backlog", "title": "bug: KMS DeriveSharedSecret does not work symmetrically", "body": "### Is there an existing issue for this?\r\n\r\n- [X] I have searched the existing issues\r\n\r\n### Current Behavior\r\n\r\nWhen creating two keys with the following command:\r\n```\r\nawslocal kms create-key --key-spec ECC_NIST_P256 --key-usage KEY_AGREEMENT --description \"ECC NIST P-256 Key Agreement Key <Number>\r\n```\r\n\r\nAnd then running the `derive-shared-secret` command twice like following:\r\n```\r\nawslocal kms derive-shared-secret \\\r\n --key-id $KEY1_ID \\\r\n --key-agreement-algorithm ECDH \\\r\n --public-key $PUB2\r\n```\r\n\r\n```\r\nawslocal kms derive-shared-secret \\\r\n --key-id $KEY2_ID \\\r\n --key-agreement-algorithm ECDH \\\r\n --public-key $PUB1\r\n```\r\n\r\nThe resulting `SharedSecret` values are different.\r\n\r\n\r\n\r\n### Expected Behavior\r\n\r\nRunning the following:\r\n```\r\nawslocal kms derive-shared-secret \\\r\n --key-id $KEY1_ID \\\r\n --key-agreement-algorithm ECDH \\\r\n --public-key $PUB2\r\n```\r\n\r\n```\r\nawslocal kms derive-shared-secret \\\r\n --key-id $KEY2_ID \\\r\n --key-agreement-algorithm ECDH \\\r\n --public-key $PUB1\r\n```\r\n\r\nThe resulting `SharedSecret` values should be the same.\r\n\r\n### How are you starting LocalStack?\r\n\r\nWith a docker-compose file\r\n\r\n### Steps To Reproduce\r\n\r\n#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)\r\n\r\n docker run localstack/localstack\r\n\r\n#### Client commands (e.g., AWS SDK code snippet, or sequence of \"awslocal\" commands)\r\n```\r\n// get the value for ID and export it to FIRST_KEY_ID variable\r\nawslocal kms create-key --key-spec ECC_NIST_P256 --key-usage KEY_AGREEMENT --description \"ECC NIST P-256 Key Agreement Key\" --region us-east-1\r\n\r\n// get the value for ID and export it to SECOND_KEY_ID variable\r\nawslocal kms create-key --key-spec ECC_NIST_P256 --key-usage KEY_AGREEMENT --description \"ECC NIST P-256 Key Agreement Key 2\" --region us-east-1\r\n\r\n// get the value for PublicKey and export it to PUB1 variable\r\nawslocal kms get-public-key --key-id $FIRST_KEY_ID\r\n\r\n// get the value for PublicKey and export it to PUB2 variable\r\nawslocal kms get-public-key --key-id $SECOND_KEY_ID\r\n\r\n// the two values for \"SharedSecret\" from below commands should be the same\r\nawslocal kms derive-shared-secret --key-id $FIRST_KEY_ID --key-agreement-algorithm ECDH --public-key $PUB2\r\nawslocal kms derive-shared-secret --key-id $SECOND_KEY_ID --key-agreement-algorithm ECDH --public-key $PUB1\r\n```\r\n\r\n### Environment\r\n\r\n```markdown\r\n- OS: macOS 14.7.1 (23H222)\r\n- LocalStack:\r\n LocalStack version: 3.8.2.dev155\r\n LocalStack Docker image sha: sha256:00e62cf9abaa00984b7bf835b411271822ddea2f44d209a24e734909db7ea29f\r\n LocalStack build date: 2024-11-21\r\n LocalStack build git hash: 6748e0e07\r\n```\r\n\r\n\r\n### Anything else?\r\n\r\n_No response_", "pr_html_url": "https://github.com/localstack/localstack/pull/12071", "file_loc": 
{"base_commit": "61535b7d970493d9bb6740a03d698d075dd0a3b9", "files": [{"path": "localstack-core/localstack/services/kms/models.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [24], "mod": [15]}, "('KmsKey', 'derive_shared_secret', 368)": {"add": [381], "mod": [388]}}}, {"path": "tests/aws/services/kms/test_kms.py", "status": "modified", "Loc": {"('TestKMS', 'test_derive_shared_secret', 1326)": {"add": [1332], "mod": [1337, 1338, 1341]}, "(None, None, None)": {"add": [1370]}}}, {"path": "tests/aws/services/kms/test_kms.snapshot.json", "status": "modified", "Loc": {"(None, None, None)": {"add": [1783], "mod": [1731]}}}, {"path": "tests/aws/services/kms/test_kms.validation.json", "status": "modified", "Loc": {"(None, None, None)": {"mod": [33]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["localstack-core/localstack/services/kms/models.py", "tests/aws/services/kms/test_kms.validation.json", "tests/aws/services/kms/test_kms.snapshot.json"], "doc": [], "test": ["tests/aws/services/kms/test_kms.py"], "config": [], "asset": []}}, {"organization": "localstack", "repo_name": "localstack", "base_commit": "a23e2fc70542af481fb3a0bd7042627ff50f0802", "iss_has_pr": 1, "iss_html_url": "https://github.com/localstack/localstack/issues/737", "iss_label": "", "title": "Kinesis events to Lambda do not conform to spec", "body": "I am using Kinesis streams to trigger a Lambda function in localstack. The Kinesis records only include the `\"kinesis\"` block. AWS docs show several other metadata fields with each record:\r\n\r\n```{\r\n \"eventID\": \"shardId-000000000000:49545115243490985018280067714973144582180062593244200961\",\r\n \"eventVersion\": \"1.0\",\r\n \"kinesis\": {\r\n \"partitionKey\": \"partitionKey-3\",\r\n \"data\": \"SGVsbG8sIHRoaXMgaXMgYSB0ZXN0IDEyMy4=\",\r\n \"kinesisSchemaVersion\": \"1.0\",\r\n \"sequenceNumber\": \"49545115243490985018280067714973144582180062593244200961\"\r\n },\r\n \"invokeIdentityArn\": identityarn,\r\n \"eventName\": \"aws:kinesis:record\",\r\n \"eventSourceARN\": eventsourcearn,\r\n \"eventSource\": \"aws:kinesis\",\r\n \"awsRegion\": \"us-east-1\"\r\n }\r\n```\r\n\r\nMy lambda is using the eventSourceARN to determine the source stream name. I can hack it for testing, but would prefer to test against proper live records. \r\n\r\n<!-- Love localstack? 
Please consider supporting our collective:\r\n\ud83d\udc49 https://opencollective.com/localstack/donate -->", "pr_html_url": null, "file_loc": {"base_commit": "85e39818ae11e8f35e24b8df88703ede1231b62e", "files": [{"path": "localstack/services/awslambda/lambda_api.py", "status": "modified", "Loc": {"(None, 'process_kinesis_records', 183)": {"add": [194]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": "https://github.com/localstack/localstack/commit/85e39818ae11e8f35e24b8df88703ede1231b62e", "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "commit", "loc_scope": null, "info_type": null}, "loctype": {"code": ["localstack/services/awslambda/lambda_api.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "localstack", "repo_name": "localstack", "base_commit": "28d3b76087979229f586911423307e6fd8995f19", "iss_has_pr": 1, "iss_html_url": "https://github.com/localstack/localstack/issues/2231", "iss_label": "", "title": "[IAM] AmazonIdentityManagement with null message is thrown instead of EntityAlreadyExistsException", "body": "# Type of request: This is a ...\r\n\r\n[X] bug report\r\n\r\n# Detailed description\r\n`EntityAlreadyExistsException` is not thrown correctly when creating IAM objects that are already present. `AmazonIdentityManagementException` with a null message is thrown instead\r\n\r\n## Expected behavior\r\nLocalstack should throw `EntityAlreadyExistsException` with a populated message (not null)\r\n\r\n## Actual behavior\r\n```\r\ncom.amazonaws.services.identitymanagement.model.AmazonIdentityManagementException: null (Service: AmazonIdentityManagement; Status Code: 409; Error Code: 409 Conflict; Request ID: null)\r\n```\r\n\r\n# Steps to reproduce\r\n- create an IAM role\r\n- try to re-create it, catch `EntityAlreadyExistsException` but `AmazonIdentityManagementException` with null message is thrown instead\r\n\r\n## Command used to start LocalStack\r\ndocker-compose up with `0.10.9`\r\n\r\n## Client code (AWS SDK code snippet, or sequence of \"awslocal\" commands)\r\n```\r\ntry {\r\n localStackIAMClient.createRole(createRoleRequest);\r\n localStackIAMClient.createRole(createRoleRequest);\r\n} catch (EntityAlreadyExistsException e) {\r\n // AmazonIdentityManagementException with null is thrown instead\r\n}\r\n```\r\n", "pr_html_url": "https://github.com/localstack/localstack/pull/2316", "file_loc": {"base_commit": "28d3b76087979229f586911423307e6fd8995f19", "files": [{"path": ".dockerignore", "status": "modified", "Loc": {"(None, None, None)": {"add": [6]}}}, {"path": "localstack/services/iam/iam_listener.py", "status": "modified", "Loc": {"('ProxyListenerIAM', 'return_response', 17)": {"add": [22]}, "('ProxyListenerIAM', None, 9)": {"add": [36]}}}, {"path": "tests/integration/test_iam.py", "status": "modified", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["localstack/services/iam/iam_listener.py"], "doc": [".dockerignore"], "test": ["tests/integration/test_iam.py"], "config": [], "asset": []}}, {"organization": "localstack", "repo_name": "localstack", "base_commit": "581980f89037694181765dfa400ce9f75c6a01ed", "iss_has_pr": 1, "iss_html_url": "https://github.com/localstack/localstack/issues/4409", "iss_label": "type: feature", "title": "feature request: ConfigService", "body": "`moto` provides support for several of the AWS 
ConfigService APIs. Would it be possible to provide that same support with LocalStack?", "pr_html_url": "https://github.com/localstack/localstack/pull/4500", "file_loc": {"base_commit": "581980f89037694181765dfa400ce9f75c6a01ed", "files": [{"path": "localstack/plugins.py", "status": "modified", "Loc": {"(None, 'do_register_localstack_plugins', 29)": {"add": [35, 85]}}}, {"path": "localstack/services/support/support_starter.py", "status": "modified", "Loc": {"(None, 'start_support', 4)": {"mod": [5]}}}, {"path": "requirements.txt", "status": "modified", "Loc": {"(None, None, None)": {"mod": [52]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["localstack/plugins.py", "localstack/services/support/support_starter.py"], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}}, {"organization": "localstack", "repo_name": "localstack", "base_commit": "95f91f68c16cedbcfbf0a51725f88c113224de27", "iss_has_pr": 1, "iss_html_url": "https://github.com/localstack/localstack/issues/983", "iss_label": "type: bug\nstatus: triage needed", "title": "AWS lambda on localstack not seeing its dependencies ", "body": "<!-- Love localstack? Please consider supporting our collective:\n:point_right: https://opencollective.com/localstack/donate -->\n\nHi guys. I am running localstack 0.8.7 and i am encountering problems running a lambda function that has external dependencies. The zip file works well in a real AWS environment but fails in localstack because it cannot find the dependencies.\n\nAdding lambda\n`\naws --endpoint-url=http://localhost:4574 lambda create-function --function-name=myfunction --runtime=java8 --role=r1 --handler=com.my.UpdateHandler --zip-file fileb://my-lambda-0.1.0-1540476215-64df908.zip\n`\n\nExecuting lambda\n`\naws lambda --endpoint-url=http://localhost:4574 invoke --invocation-type RequestResponse --function-name myfunction --region eu-west-1 --payload {\\\"store\\\":\\\"9722\\\"\\,\\\"pos\\\":\\\"80\\\"\\,\\\"app\\\":\\\"price\\\"} out.txt\n`\n\nThis is the stacktrace\n`\nException: Lambda process returned error status code: 1. Output:\nException in thread \"main\" java.lang.reflect.InvocationTargetException\n at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)\n at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)\n at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)\n at java.lang.reflect.Constructor.newInstance(Constructor.java:423)\n at cloud.localstack.LambdaExecutor.getHandler(LambdaExecutor.java:138)\n at cloud.localstack.LambdaExecutor.main(LambdaExecutor.java:52)\nCaused by: java.lang.NoClassDefFoundError: com/fasterxml/jackson/databind/ObjectMapper\n at my.create(Houston.java:56)\n at my.UpdateHandler.<init>(UpdateHandler.java:17)\n ... 6 more\nCaused by: java.lang.ClassNotFoundException: com.fasterxml.jackson.databind.ObjectMapper\n at java.net.URLClassLoader.findClass(URLClassLoader.java:381)\n at java.lang.ClassLoader.loadClass(ClassLoader.java:424)\n at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338)\n at java.lang.ClassLoader.loadClass(ClassLoader.java:357)\n ... 8 more\n`\n\nI'm suspicious that the problem could be with localstack since the zip file with the structure\n\n-lib (dependencies)\n-com (lambda)\n\n works fine in AWS but has problems in localstack. 
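The zip layout the reporter describes matches AWS's documented Java packaging: compiled classes at the archive root and dependency jars under `lib/`. A small illustrative sketch (all paths are placeholders) that produces that layout, so the failure is reproducible from a known-good package:

```python
import zipfile

# Build a Java Lambda deployment package: classes at the root, jars in lib/.
# LocalStack's executor must put lib/*.jar on the classpath for this to work.
with zipfile.ZipFile("my-lambda.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write("build/classes/com/my/UpdateHandler.class", "com/my/UpdateHandler.class")
    zf.write("deps/jackson-databind.jar", "lib/jackson-databind.jar")
```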
\n\nHelp. \n\nA guy in need.\n\n\n\n\u2506Issue is synchronized with this [Jira Bug](https://localstack.atlassian.net/browse/LOC-321) by [Unito](https://www.unito.io/learn-more)\n", "pr_html_url": "https://github.com/localstack/localstack/pull/3704", "file_loc": {"base_commit": "95f91f68c16cedbcfbf0a51725f88c113224de27", "files": [{"path": "tests/integration/test_lambda.py", "status": "modified", "Loc": {"('TestJavaRuntimes', 'test_java_runtime_with_lib', 1476)": {"mod": [1489]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": [], "test": ["tests/integration/test_lambda.py"], "config": [], "asset": []}}, {"organization": "localstack", "repo_name": "localstack", "base_commit": "debb24a792a7e2a1751ddf1f30d5c79f80b4885f", "iss_has_pr": 1, "iss_html_url": "https://github.com/localstack/localstack/issues/612", "iss_label": "type: bug\nstatus: triage needed", "title": "Uploading to S3 presigned URLs doesn't check Content-MD5 or other presigned constraints", "body": "If I generate a presigned URL for uploading into a bucket, and I specify a content type or a content MD5 to be encapsulated into the URL, these are not then enforced when I upload to that URL. I can set whatever `Content-MD5` header I like in the HTTP upload, and it's accepted.\r\n\r\nFurthermore, the `Content-MD5` header doesn't get checked even against the content being uploaded. I can set the header to `blah` and I don't get any errors.\r\n\r\nIs this expected?", "pr_html_url": "https://github.com/localstack/localstack/pull/772", "file_loc": {"base_commit": "debb24a792a7e2a1751ddf1f30d5c79f80b4885f", "files": [{"path": "localstack/services/generic_proxy.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [9, 15], "mod": [1, 2, 5, 6, 8]}}}, {"path": "localstack/services/kinesis/kinesis_listener.py", "status": "modified", "Loc": {"('ProxyListenerKinesis', 'forward_request', 20)": {"mod": [24]}}}, {"path": "localstack/services/s3/s3_listener.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [4, 281]}, "('ProxyListenerS3', 'forward_request', 338)": {"add": [339]}, "('ProxyListenerS3', 'return_response', 438)": {"mod": [442]}}}, {"path": "requirements.txt", "status": "modified", "Loc": {"(None, None, None)": {"mod": [20]}}}, {"path": "tests/integration/test_s3.py", "status": "modified", "Loc": {"(None, 'test_s3_get_response_headers', 171)": {"add": [206]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["localstack/services/kinesis/kinesis_listener.py", "localstack/services/generic_proxy.py", "localstack/services/s3/s3_listener.py"], "doc": [], "test": ["tests/integration/test_s3.py"], "config": ["requirements.txt"], "asset": []}}, {"organization": "localstack", "repo_name": "localstack", "base_commit": "2641d910cc5f1a04f70dd60a7ebfc25cd716bcd6", "iss_has_pr": 1, "iss_html_url": "https://github.com/localstack/localstack/issues/1902", "iss_label": "status: triage needed", "title": "changeMessageVisibility function doesn't work", "body": "Hi,\r\n\r\nI'm using the changeMessageVisibility function in order to return a message to the queue, by calling\r\n`serviceName.changeMessageVisibility(recipientId, 0);`\r\n\r\nbut it doesn't work, the message doesn't reappear in the 
queue.", "pr_html_url": "https://github.com/localstack/localstack/pull/1914", "file_loc": {"base_commit": "2641d910cc5f1a04f70dd60a7ebfc25cd716bcd6", "files": [{"path": "localstack/services/sqs/sqs_listener.py", "status": "modified", "Loc": {"('ProxyListenerSQS', 'return_response', 81)": {"add": [96], "mod": [89, 90, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151]}}}, {"path": "localstack/utils/aws/aws_stack.py", "status": "modified", "Loc": {"(None, 'fix_account_id_in_arns', 279)": {"mod": [281]}}}, {"path": "tests/integration/test_sqs.py", "status": "modified", "Loc": {"('SQSTest', 'test_publish_get_delete_message', 49)": {"add": [59, 65]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["localstack/utils/aws/aws_stack.py", "localstack/services/sqs/sqs_listener.py"], "doc": [], "test": ["tests/integration/test_sqs.py"], "config": [], "asset": []}}, {"organization": "localstack", "repo_name": "localstack", "base_commit": "a258338f5c88f49b517f7ecf66be113e481a0afe", "iss_has_pr": 1, "iss_html_url": "https://github.com/localstack/localstack/issues/6551", "iss_label": "type: question\naws:ssm\naws:secretsmanager", "title": "bug: Can't get SSM secret parameter using localstack.", "body": "### Is there an existing issue for this?\r\n\r\n- [X] I have searched the existing issues\r\n\r\n### Current Behavior\r\n\r\nThis bug has been reported before here. https://github.com/localstack/localstack/issues/3128\r\nCurrently, I'm encountering the same issues.\r\nThe issue is that creating an SSM secret using the awslocal cli and trying to retrieve the secret using awslocal gives me a (ParameterNotFound) error.\r\n\r\n### Expected Behavior\r\n\r\nThe expected behaviour is that I should successfully retrieve a stored secret instead of getting a (ParameterNotFound) error. 
\r\n\r\n### How are you starting LocalStack?\r\n\r\nCustom (please describe below)\r\n\r\n### Steps To Reproduce\r\n\r\n#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)\r\n\r\nStarting local stack from the local stack cli by running `localstack start`\r\n\r\n#### Client commands (e.g., AWS SDK code snippet, or sequence of \"awslocal\" commands)\r\nCreate secret\r\n`awslocal secretsmanager create-secret --name TestSecret --secret-string \"TT\"`\r\n\r\nTry to get secret\r\n`awslocal ssm get-parameter --name TestSecret`\r\n\r\n\r\n### Environment\r\n\r\n```markdown\r\n- OS:Windows 10 pro 20H2\r\n- Python version - 3.10\r\n- LocalStack version: 1.0.3.dev\r\n- LocalStack: latest\r\n```\r\n\r\n\r\n### Anything else?\r\n\r\ndocker logs \r\n```\r\n2022-07-29T13:36:46.005 INFO --- [ asgi_gw_0] localstack.request.aws : AWS secretsmanager.CreateSecret => 200\r\n\r\n2022-07-29T13:37:28.515 INFO --- [ asgi_gw_1] localstack.request.aws : AWS ssm.GetParameter => 400 (ParameterNotFound)\r\n\r\n```", "pr_html_url": "https://github.com/localstack/localstack/pull/6564", "file_loc": {"base_commit": "a258338f5c88f49b517f7ecf66be113e481a0afe", "files": [{"path": "localstack/services/ssm/provider.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0, 23, 27], "mod": [4, 6]}, "('SsmProvider', None, 28)": {"add": [28], "mod": [37, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 155, 156, 157, 158, 159, 160, 162, 163, 164, 165, 166, 167, 168, 169, 170]}, "('SsmProvider', '_get_secrets_information', 45)": {"add": [53], "mod": [51, 56]}}}, {"path": "tests/integration/test_ssm.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0, 3], "mod": [19]}, "('TestSSM', None, 20)": {"add": [20, 25, 67, 110], "mod": [38, 39, 40, 51, 74]}, "(None, '_assert', 6)": {"mod": [6]}, "('TestSSM', 'test_put_parameters', 26)": {"mod": [36]}, "('TestSSM', 'test_hierarchical_parameter', 39)": {"mod": [48, 49]}, "('TestSSM', 'test_get_secret_parameter', 52)": {"mod": [60, 65, 66]}, "('TestSSM', 'test_get_inexistent_secret', 68)": {"mod": [69, 70, 71, 72]}, "('TestSSM', 'test_get_parameters_and_secrets', 75)": {"mod": [78, 100, 108]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["localstack/services/ssm/provider.py"], "doc": [], "test": ["tests/integration/test_ssm.py"], "config": [], "asset": []}}, {"organization": "localstack", "repo_name": "localstack", "base_commit": "177fc797678664a0c06b8c6c434330cef44541a1", "iss_has_pr": 1, "iss_html_url": "https://github.com/localstack/localstack/issues/459", "iss_label": "type: bug", "title": "Underscore converted to hyphen while put it as metadata using amazon-sdk", "body": "Hi,\r\nWhen try to put object with metadata that include underscore(the metadata include underscore) we got the underscore converted to hyphen.\r\nThe same code in Amazon will return the metadata with underscore - not converted.\r\n\r\nfor example:\r\nwe put map of string as metadata - the \"__key1\" will convert to \"--key1\"\r\n\r\n```\r\npublic class MyAwsS3Tester {\r\n\r\n public static final String IP = \"10.0.0.24\";\r\n public static final String BUCKET_NAME = \"zanavi-test\";\r\n\r\n public 
static void main(String[] args) {\r\n AmazonS3 s3 = AmazonS3ClientBuilder.standard()\r\n .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(\"http://\" + IP + \":4572/\", \"us-east-1\"))\r\n .disableChunkedEncoding()\r\n .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials(\"zanavi\", \"1234\")))\r\n .build();\r\n if (s3.doesBucketExistV2(BUCKET_NAME)) {\r\n System.out.println(\"bucket zanavi exists\");\r\n }\r\n else {\r\n System.out.println(\"bucket \" + BUCKET_NAME + \" doesn't exists\");\r\n s3.createBucket(BUCKET_NAME);\r\n }\r\n\r\n String dummyStr = \"dummy-str\";\r\n\r\n Map<String, String> myMap = new HashMap<String, String>();\r\n myMap.put(\"__key1\", \"val1\");\r\n\r\n ObjectMetadata objectMetadata = new ObjectMetadata();\r\n objectMetadata.setUserMetadata(myMap);\r\n\r\n InputStream is = new ByteArrayInputStream(dummyStr.getBytes(StandardCharsets.UTF_8));\r\n s3.putObject(new PutObjectRequest(BUCKET_NAME, \"my-key1\", is, objectMetadata));\r\n\r\n S3Object getObj = s3.getObject(new GetObjectRequest(BUCKET_NAME, \"my-key1\"));\r\n ObjectMetadata objectMetadataResponse = getObj.getObjectMetadata();\r\n\r\n Map<String, String> myMap1 = objectMetadataResponse.getUserMetadata();\r\n\r\n System.out.println(\"done \" + myMap1);\r\n }\r\n```", "pr_html_url": "https://github.com/localstack/localstack/pull/482", "file_loc": {"base_commit": "177fc797678664a0c06b8c6c434330cef44541a1", "files": [{"path": "bin/Dockerfile.base", "status": "modified", "Loc": {"(None, None, None)": {"mod": [30, 32]}}}, {"path": "localstack/ext/java/src/test/java/cloud/localstack/S3HttpsConnectionTest.java", "status": "removed", "Loc": {}}, {"path": "localstack/ext/java/src/test/java/cloud/localstack/S3LifecycleTest.java", "status": "removed", "Loc": {}}, {"path": "localstack/services/generic_proxy.py", "status": "modified", "Loc": {"('GenericProxyHandler', 'forward', 158)": {"mod": [224, 225]}}}, {"path": "tests/integration/test_dynamodb.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [8]}, "('DynamoDBIntegrationTest', 'test_non_ascii_chars', 14)": {"add": [34]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["localstack/services/generic_proxy.py"], "doc": [], "test": ["tests/integration/test_dynamodb.py"], "config": ["bin/Dockerfile.base"], "asset": ["localstack/ext/java/src/test/java/cloud/localstack/S3LifecycleTest.java", "localstack/ext/java/src/test/java/cloud/localstack/S3HttpsConnectionTest.java"]}}, {"organization": "localstack", "repo_name": "localstack", "base_commit": "b09c4f89481cec43b3d126c15050910cae81e9d1", "iss_has_pr": 1, "iss_html_url": "https://github.com/localstack/localstack/issues/5357", "iss_label": "type: bug\nstatus: triage needed\naws:firehose\naws:opensearch", "title": "bug: `AmazonopensearchserviceDestinationConfiguration` is not supported for Firehose-Streams", "body": "### Is there an existing issue for this?\r\n\r\n- [X] I have searched the existing issues\r\n\r\n### Current Behavior\r\n\r\nAs mentioned [here](https://github.com/localstack/localstack/issues/4834#issuecomment-1021009701) it seems like the `AmazonopensearchserviceDestinationConfiguration` was not added while implementing OpenSearch. 
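Back on the metadata-underscore report (localstack#459 above): user metadata travels as `x-amz-meta-*` HTTP headers, which is where the `_` to `-` mangling crept in. A boto3 sketch of the expected round trip, assuming the edge endpoint in place of the original port 4572:

```python
import boto3

s3 = boto3.client("s3", endpoint_url="http://localhost:4566", region_name="us-east-1")
s3.create_bucket(Bucket="zanavi-test")

# Keys with leading underscores become x-amz-meta-__key1 on the wire;
# underscores must survive the proxy untouched.
s3.put_object(Bucket="zanavi-test", Key="my-key1", Body=b"dummy-str",
              Metadata={"__key1": "val1"})

head = s3.head_object(Bucket="zanavi-test", Key="my-key1")
print(head["Metadata"])  # expected {'__key1': 'val1'}, not {'--key1': 'val1'}
```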
I guess it just needs be added [here](https://github.com/localstack/localstack/blob/53b5c7788bf35b1882b6cb1949e17d27e198cf61/localstack/services/cloudformation/models/kinesisfirehose.py#L23-L27).\r\n\r\n### Expected Behavior\r\n\r\nI can (and should) use `AmazonopensearchserviceDestinationConfiguration` instead of `ElasticsearchDestinationConfiguration` (which I should only be able to use if I use ElasticSearch-Service).\r\n\r\n### How are you starting LocalStack?\r\n\r\nWith a docker-compose file\r\n\r\n### Steps To Reproduce\r\n\r\nWell, just use `AmazonopensearchserviceDestinationConfiguration` and your stream will never be able to deliver the records to your (external) Cluster. But it works with `ElasticsearchDestinationConfiguration`.\r\n\r\n### Environment\r\n\r\n```markdown\r\n- OS: Windows mit WSL (Ubuntu 20.04)\r\n- LocalStack: latest\r\n```\r\n\r\n\r\n### Anything else?\r\n\r\n_No response_", "pr_html_url": "https://github.com/localstack/localstack/pull/5379", "file_loc": {"base_commit": "507c42709ce08911153840f8b2e43b74f52ee9a5", "files": [{"path": ".github/workflows/pro-integration.yml", "status": "modified", "Loc": {"(None, None, 81)": {"mod": [81]}, "(None, None, 92)": {"mod": [92]}}}, {"path": "localstack/services/firehose/provider.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [85], "mod": [83]}, "('FirehoseProvider', None, 139)": {"add": [544]}, "('FirehoseProvider', '_put_records', 432)": {"mod": [463, 464, 465, 466, 467, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 490, 491, 492, 493, 494, 495, 496, 498, 499, 500, 501, 502, 504, 505, 506, 507, 508, 509, 510, 511, 513, 514, 515, 516, 517]}}}, {"path": "localstack/utils/aws/aws_stack.py", "status": "modified", "Loc": {"(None, 'get_elasticsearch_endpoint', 1100)": {"mod": [1100, 1101, 1102, 1103, 1104, 1106]}, "(None, 'connect_elasticsearch', 1112)": {"mod": [1112, 1113, 1116, 1119, 1120, 1134, 1136, 1143]}}}, {"path": "requirements.txt", "status": "modified", "Loc": {"(None, None, 46)": {"mod": [46]}}}, {"path": "tests/integration/conftest.py", "status": "modified", "Loc": {"(None, 'pytest_runtestloop', 40)": {"mod": [48, 49, 50, 52]}}}, {"path": "tests/integration/test_firehose.py", "status": "modified", "Loc": {"('TestFirehoseIntegration', None, 147)": {"add": [248]}, "('TestFirehoseIntegration', 'assert_elasticsearch_contents', 222)": {"mod": [224]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["tests/integration/conftest.py", "localstack/utils/aws/aws_stack.py", "localstack/services/firehose/provider.py"], "doc": [], "test": ["tests/integration/test_firehose.py"], "config": ["requirements.txt", ".github/workflows/pro-integration.yml"], "asset": []}}, {"organization": "localstack", "repo_name": "localstack", "base_commit": "ae8db74df81821040e3ac654c62d2118da85255a", "iss_has_pr": 1, "iss_html_url": "https://github.com/localstack/localstack/issues/27", "iss_label": "priority: medium\ntype: feature", "title": "Use lambci/docker-lambda for local lambda execution?", "body": "Feel free to close this but might be worth considering using https://github.com/lambci/docker-lambda to execute lambdas. Seems they dumped the filesystem of a live lambda and made a container out of it. 
Neat.", "pr_html_url": null, "file_loc": {"base_commit": "2de054cf799e79021290e9590000eb6047f93bef", "files": [{"path": "Dockerfile", "status": "modified", "Loc": {"(None, None, 59)": {"add": [59]}, "(None, None, 7)": {"mod": [7]}, "(None, None, 54)": {"mod": [54]}, "(None, None, 61)": {"mod": [61]}, "(None, None, 74)": {"mod": [74, 76, 77]}}}, {"path": "Makefile", "status": "modified", "Loc": {"(None, None, 5)": {"add": [5]}, "(None, None, 68)": {"add": [68]}, "(None, None, 60)": {"mod": [60]}}}, {"path": "README.md", "status": "modified", "Loc": {"(None, None, 115)": {"add": [115]}, "(None, None, 238)": {"add": [238]}}}, {"path": "localstack/config.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [1], "mod": [5, 6, 7, 10, 11, 12, 15, 16, 17]}}}, {"path": "localstack/constants.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [92]}}}, {"path": "localstack/mock/apis/lambda_api.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [15, 28, 34, 37, 40, 50, 57], "mod": [22, 23, 24, 30, 232, 233, 234, 235, 236]}, "(None, 'add_event_source', 84)": {"add": [86], "mod": [95]}, "(None, 'set_function_code', 233)": {"add": [243], "mod": [248, 249, 270]}, "(None, 'use_docker', 100)": {"mod": [103]}, "(None, 'in_docker', 113)": {"mod": [117]}, "(None, 'process_kinesis_records', 121)": {"mod": [124, 130, 131, 133]}, "(None, 'get_event_sources', 141)": {"mod": [143]}, "(None, 'run_lambda', 150)": {"mod": [166, 176, 177, 178, 181, 182, 189, 192, 193]}, "(None, 'exec_lambda_code', 195)": {"mod": [208, 217]}, "(None, 'delete_function', 378)": {"mod": [390]}, "(None, 'update_function_code', 404)": {"mod": [411, 412]}}}, {"path": "localstack/mock/infra.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [57]}, "(None, 'start_elasticsearch', 63)": {"add": [70]}}}, {"path": "localstack/mock/proxy/dynamodb_listener.py", "status": "modified", "Loc": {"(None, 'update_dynamodb', 14)": {"mod": [82]}}}, {"path": "localstack/utils/aws/aws_stack.py", "status": "modified", "Loc": {"(None, 'connect_elasticsearch', 346)": {"add": [348]}, "(None, None, None)": {"mod": [3, 8, 12, 13]}}}, {"path": "localstack/utils/common.py", "status": "modified", "Loc": {"('ShellCommandThread', 'stop', 92)": {"add": [99]}, "(None, 'is_zip_file', 197)": {"add": [199]}, "(None, 'make_http_request', 277)": {"add": [277]}, "(None, None, None)": {"add": [290], "mod": [8, 9, 10, 11]}}}, {"path": "localstack/utils/testutil.py", "status": "modified", "Loc": {"(None, 'create_lambda_archive', 51)": {"add": [60], "mod": [52, 57, 74]}, "(None, None, None)": {"mod": [6, 8, 16]}, "(None, 'create_lambda_function', 77)": {"mod": [81]}}}, {"path": "setup.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [82]}}}, {"path": "tests/test_integration.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [11, 24], "mod": [19]}, "(None, 'test_kinesis_lambda_ddb_streams', 109)": {"mod": [136, 137, 138, 139, 140, 142, 145, 146]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": "https://github.com/localstack/localstack/commit/2de054cf799e79021290e9590000eb6047f93bef", "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": null, "info_type": null}, "loctype": {"code": ["localstack/mock/infra.py", "localstack/constants.py", "localstack/config.py", "setup.py", "localstack/utils/aws/aws_stack.py", "localstack/mock/apis/lambda_api.py", "localstack/mock/proxy/dynamodb_listener.py", "localstack/utils/common.py"], "doc": 
["README.md"], "test": ["localstack/utils/testutil.py", "tests/test_integration.py"], "config": ["Makefile", "Dockerfile"], "asset": []}}, {"organization": "localstack", "repo_name": "localstack", "base_commit": "651f87eb51c36f7e58b421acf8e9966a8932feb1", "iss_has_pr": 1, "iss_html_url": "https://github.com/localstack/localstack/issues/2268", "iss_label": "", "title": "Displaying the version details in the logs", "body": "<!-- Love localstack? Please consider supporting our collective:\r\n\ud83d\udc49 https://opencollective.com/localstack/donate -->\r\n\r\n# Type of request: This is a ...\r\n\r\n[ ] bug report\r\n[X ] feature request\r\n\r\n# Detailed description\r\nAm running the localstack using docker-compose up and the logs are being printed at console\r\n\r\nIts good to have the below features in the logs while start-up (which is useful for debugging purpose).\r\n\r\n1. Display the localstack version number\r\n2. Display the docker container id.\r\n\r\n\r\n...\r\n\r\n## Expected behavior\r\n\r\n...\r\n\r\n## Actual behavior\r\n\r\n...\r\n\r\n# Steps to reproduce\r\n\r\n## Command used to start LocalStack\r\n\r\n...\r\n\r\n## Client code (AWS SDK code snippet, or sequence of \"awslocal\" commands)\r\n\r\n...\r\n", "pr_html_url": "https://github.com/localstack/localstack/pull/2282", "file_loc": {"base_commit": "651f87eb51c36f7e58b421acf8e9966a8932feb1", "files": [{"path": "bin/localstack", "status": "modified", "Loc": {"(None, None, None)": {"add": [23, 36, 39]}}}, {"path": "localstack/utils/cli.py", "status": "modified", "Loc": {"(None, 'cmd_infra', 9)": {"add": [20]}, "(None, 'cmd_web', 37)": {"add": [47]}, "(None, None, None)": {"mod": [2]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["localstack/utils/cli.py"], "doc": [], "test": [], "config": [], "asset": ["bin/localstack"]}}, {"organization": "localstack", "repo_name": "localstack", "base_commit": "2d3a44fdb977213589ba202a5e495710097ce88b", "iss_has_pr": 1, "iss_html_url": "https://github.com/localstack/localstack/issues/1777", "iss_label": "type: bug", "title": "Lambda executor \"docker-reuse\" errors with \"tcp :9001: bind: address already in use\"", "body": "Hi I'm using localstack 0.8.5 and was using lambda executor in \"docker-reuse\" mode. This was working all along but suddenly started to give these port bind errors during execution. There don't seems to be any processes using this port however. If i use \"docker\" as the lambda executor this issue goes away, but i end up with another problem a huge number of containers one for each execution of the lambda. My integration tests essentially send events to a kinesis stream and the lambda reads from this stream so for each execution i get a new container. This is not ideal as it hogs up all the memory on the machine and the tests end up timing out.\r\n\r\nHas anyone come across this issue recently or know what changed. I don't see any changes to the 0.8.5 docker image.\r\n\r\nlocalstack_1 | 2019-11-20T05:25:59:WARNING:localstack.services.awslambda.lambda_api: Error executing Lambda function: Lambda process returned error status code: 1. 
Output:\r\nlocalstack_1 | 2019/11/20 05:25:59 listen tcp :9001: bind: address already in use\r\nlocalstack_1 | Traceback (most recent call last):\r\nlocalstack_1 | File \"/opt/code/localstack/localstack/services/awslambda/lambda_api.py\", line 250, in run_lambda\r\nlocalstack_1 | event, context=context, version=version, async=async)\r\nlocalstack_1 | File \"/opt/code/localstack/localstack/services/awslambda/lambda_executors.py\", line 129, in execute\r\nlocalstack_1 | result, log_output = self.run_lambda_executor(cmd, environment, async)\r\nlocalstack_1 | File \"/opt/code/localstack/localstack/services/awslambda/lambda_executors.py\", line 66, in run_lambda_executor\r\nlocalstack_1 | (return_code, log_output))\r\nlocalstack_1 | Exception: Lambda process returned error status code: 1. Output:\r\nlocalstack_1 | 2019/11/20 05:25:59 listen tcp :9001: bind: address already in use\r\nlocalstack_1 | \r\n\r\nThese errors happen sporadically, but the result of these errors is nondeterministic test failures :(\r\n\r\nDocker compose service:\r\n\r\n localstack:\r\n image: localstack/localstack:0.8.5\r\n ports:\r\n - \"4567-4583:4567-4583\"\r\n expose:\r\n - \"4567-4583\"\r\n environment:\r\n - SERVICES=sqs,kinesis,lambda,dynamodb\r\n - DEFAULT_REGION=us-east-1\r\n - LAMBDA_EXECUTOR=docker-reuse\r\n - DOCKER_HOST=unix:///var/run/docker.sock\r\n volumes:\r\n - \"/private${TMPDIR}/localstack:/tmp/localstack\"\r\n - \"/var/run/docker.sock:/var/run/docker.sock\"", "pr_html_url": "https://github.com/localstack/localstack/pull/1861", "file_loc": {"base_commit": "2d3a44fdb977213589ba202a5e495710097ce88b", "files": [{"path": "localstack/services/awslambda/lambda_executors.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [45]}, "('LambdaExecutorSeparateContainers', None, 571)": {"add": [572]}, "('LambdaExecutorSeparateContainers', 'prepare_execution', 579)": {"add": [589], "mod": [586, 587, 597, 598, 599, 602]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["localstack/services/awslambda/lambda_executors.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "localstack", "repo_name": "localstack", "base_commit": "8433682f8ad29dc23a5e909cb229d0cb033beeaa", "iss_has_pr": 1, "iss_html_url": "https://github.com/localstack/localstack/issues/2329", "iss_label": "", "title": "s3.upload returns `Location: http://localhost:4566`", "body": "# Bug report\r\n\r\n# Detailed description\r\n\r\nThe `AWS.s3.upload()` (official SDK - https://github.com/aws/aws-sdk-js) returns an object with the `Location` key that points to 4566 instead of 4572 (LocalStack S3 port).\r\n\r\n## Expected behavior\r\n\r\nThe `Location` should point to the file on S3.\r\n\r\nExample:\r\n\r\n```\r\nLocation: http://localhost:4572/path/to/bucket.txt\r\n```\r\n\r\n## Actual behavior\r\n\r\nThe `Location` points to the LocalStack entrypoint.\r\n\r\nExample:\r\n\r\n```\r\nLocation: http://localhost:4566/path/to/bucket.txt\r\n```\r\n\r\n# Steps to reproduce\r\n\r\n- Upload a file to S3 using the official AWS SDK (https://github.com/aws/aws-sdk-js).\r\n- Check out the `Location` property.\r\n\r\n## Client code\r\n\r\n```javascript\r\nconst AWS = require('aws-sdk');\r\nconst s3 = new AWS.S3({\r\n region: 'us-west-1',\r\n endpoint: 'http://localhost:4566',\r\n apiVersion: '2006-03-01',\r\n s3ForcePathStyle: true,\r\n});\r\n\r\n(async () => {\r\n await 
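// expected: Location below uses the S3 port (4572), not the edge port (4566)\r\n    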
s3\r\n .createBucket({ Bucket: 'my-bucket', ACL: 'private' })\r\n .promise();\r\n\r\n const { Location } = await s3\r\n .upload({ Key: 'file.txt', Body: 'test', Bucket: 'my-bucket' })\r\n .promise();\r\n\r\n console.assert(Location === 'http://localhost:4572/my-bucket/file.txt');\r\n})();\r\n```", "pr_html_url": "https://github.com/localstack/localstack/pull/2332", "file_loc": {"base_commit": "8433682f8ad29dc23a5e909cb229d0cb033beeaa", "files": [{"path": "localstack/services/edge.py", "status": "modified", "Loc": {"('ProxyListenerEdge', 'forward_request', 22)": {"add": [40]}}}, {"path": "tests/integration/test_lambda.py", "status": "modified", "Loc": {}}, {"path": "tests/unit/test_sns.py", "status": "modified", "Loc": {"('SNSTests', 'test_unsubscribe_should_remove_listener', 25)": {"mod": [26, 27, 34]}, "('SNSTests', 'test_only_one_subscription_per_topic_per_endpoint', 207)": {"mod": [208, 209, 217]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["localstack/services/edge.py"], "doc": [], "test": ["tests/integration/test_lambda.py", "tests/unit/test_sns.py"], "config": [], "asset": []}}, {"organization": "OpenInterpreter", "repo_name": "open-interpreter", "base_commit": "eded6a2ab6f46cd19caa1559ae23b528a70d1707", "iss_has_pr": 1, "iss_html_url": "https://github.com/OpenInterpreter/open-interpreter/issues/1495", "iss_label": "", "title": "Date format string error on Windows when using --os flag", "body": "### Describe the bug\n\nI encountered an error when running `interpreter --os` on Windows. The error occurs due to an incompatible date format string.\r\n\r\n## Environment\r\n- OS: Windows\r\n- Python version: 3.11\r\n- open-interpreter version: 0.4.1\r\n\r\n## Error Message\r\nValueError: Invalid format string\r\n\r\n## Error Location\r\nThe error occurs in `interpreter\\computer_use\\loop.py`, where the date format string uses `%-d`, which is not supported on Windows.\r\n\r\n## Current code:\r\n```python\r\ndatetime.today().strftime('%A, %B %-d, %Y')\r\n```\r\n\r\n## Solution\r\nChanging `%-d` to `%d` fixes the issue. This suggests that the code should handle platform-specific date formatting.\r\n\r\n## Suggestion\r\nConsider using one of these approaches to fix this cross-platform issue:\r\n\r\n- Use `%d` instead of `%-d`\r\n- Add platform-specific handling for date formatting\r\n- Use a cross-platform date formatting library\r\n\r\nThis would improve the Windows user experience with open-interpreter.\n\n### Reproduce\n\n1. Run `pip install open-interpreter`\r\n2. Run `interpreter --os`\r\n3. The error occurs due to the incompatible date format string\r\n4. 
Error message shows: ValueError: Invalid format string\n\n### Expected behavior\n\nThe program should start normally without any date format errors when using the --os flag on Windows.\n\n### Screenshots\n\n![image](https://github.com/user-attachments/assets/4ffc3a11-78e4-48d5-b44b-9b384a54bd36)\r\n\n\n### Open Interpreter version\n\n0.4.1\n\n### Python version\n\n3.11\n\n### Operating System name and version\n\nWindows 11\n\n### Additional context\n\n_No response_", "pr_html_url": "https://github.com/OpenInterpreter/open-interpreter/pull/1496", "file_loc": {"base_commit": "eded6a2ab6f46cd19caa1559ae23b528a70d1707", "files": [{"path": "interpreter/computer_use/loop.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [104]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["interpreter/computer_use/loop.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "OpenInterpreter", "repo_name": "open-interpreter", "base_commit": "637fedd2dbe2964b09fb7ae9832bdbffed4494ca", "iss_has_pr": 1, "iss_html_url": "https://github.com/OpenInterpreter/open-interpreter/issues/303", "iss_label": "Bug", "title": "The code it generated repeated the same function call 20-30 times and used up $5 in a matter of minutes", "body": "### Describe the bug\n\nThe same line was written multiple times; I was pressing y and enter and didn't realise I had executed code with 20-30 calls to a gpt-4 summary of a long text, and it ate $5.\n\n### Reproduce\n\nI am not sure, but I did paste a code snippet along with instructions to guide it, because it was making the same mistake multiple times.\n\n### Expected behavior\n\nI wanted to recreate this: https://platform.openai.com/docs/tutorials/meeting-minutes/creating-an-automated-meeting-minutes-generator-with-whisper-and-gpt-4\n\n### Screenshots\n\n_No response_\n\n### Open Interpreter version\n\n0.1.3\n\n### Python version\n\n3.11.4\n\n### Operating System name and version\n\nmac 12.5\n\n### Additional context\n\n_No response_", "pr_html_url": "https://github.com/OpenInterpreter/open-interpreter/pull/316", "file_loc": {"base_commit": "637fedd2dbe2964b09fb7ae9832bdbffed4494ca", "files": [{"path": "interpreter/cli.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [30]}, "(None, 'cli', 44)": {"add": [120, 132]}}}, {"path": "interpreter/interpreter.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [36, 95]}, "('Interpreter', 'chat', 317)": {"add": [391]}, "('Interpreter', 'respond', 581)": {"add": [645], "mod": [656, 658]}}}, {"path": "poetry.lock", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1, 593, 598, 599, 604, 728, 733, 734, 1338]}}}, {"path": "pyproject.toml", "status": "modified", "Loc": {"(None, None, None)": {"mod": [13, 28]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["interpreter/cli.py", "interpreter/interpreter.py"], "doc": [], "test": [], "config": ["pyproject.toml", "poetry.lock"], "asset": []}}, {"organization": "abi", "repo_name": "screenshot-to-code", "base_commit": "82c448803f28aa3f5035e5302d78891dfcc661c0", "iss_has_pr": 1, "iss_html_url": "https://github.com/abi/screenshot-to-code/issues/5", "iss_label": "good first issue", "title": "Add a Dockerfile", "body": "so 
it's easier for people to get this up and running, rather than having to run a bunch of different commands.", "pr_html_url": "https://github.com/abi/screenshot-to-code/pull/7", "file_loc": {"base_commit": "82c448803f28aa3f5035e5302d78891dfcc661c0", "files": [{"path": ".gitignore", "status": "modified", "Loc": {"(None, None, None)": {"add": [4]}}}, {"path": "README.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [45], "mod": [18, 31]}}}, {"path": "frontend/src/generateCode.ts", "status": "modified", "Loc": {"(None, None, None)": {"mod": [4]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["frontend/src/generateCode.ts"], "doc": ["README.md"], "test": [], "config": [".gitignore"], "asset": []}}, {"organization": "pytorch", "repo_name": "pytorch", "base_commit": "94e52e1d1745003fa3a434ed74c1fe87cf8ef349", "iss_has_pr": 1, "iss_html_url": "https://github.com/pytorch/pytorch/issues/89", "iss_label": "todo", "title": "Containers should allow module assignments", "body": "Right now, after you have created a Container, you can assign modules to it at a later time like this:\n\n``` python\ncontainer.add_module('linear', nn.Linear())\n```\n\nInstead, also allow this simpler interface:\n\n``` python\ncontainer.linear = nn.Linear()\n```\n", "pr_html_url": "https://github.com/pytorch/pytorch/pull/136", "file_loc": {"base_commit": "94e52e1d1745003fa3a434ed74c1fe87cf8ef349", "files": [{"path": "test/test_nn.py", "status": "modified", "Loc": {"('TestNN', 'test_add_module', 306)": {"mod": [319]}, "('TestNN', 'test_non_leaf_parameters', 335)": {"mod": [340]}}}, {"path": "torch/nn/modules/container.py", "status": "modified", "Loc": {"('Container', None, 14)": {"add": [70]}, "('Container', 'add_module', 56)": {"mod": [60]}}}, {"path": "torch/nn/modules/module.py", "status": "modified", "Loc": {"('Module', '__setattr__', 95)": {"mod": [96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["torch/nn/modules/container.py", "torch/nn/modules/module.py"], "doc": [], "test": ["test/test_nn.py"], "config": [], "asset": []}}, {"organization": "pytorch", "repo_name": "pytorch", "base_commit": "e61f5b586bcf42010d42b0c20c0d2b159ce11d11", "iss_has_pr": 1, "iss_html_url": "https://github.com/pytorch/pytorch/issues/18626", "iss_label": "high priority\nmodule: cuda\ntriaged\nenhancement", "title": "[feature request] Set limit on GPU memory use", "body": "## \ud83d\ude80 Feature\r\n<!-- A clear and concise description of the feature proposal -->\r\nAllow the user to easily specify a fraction of the GPU memory to use.\r\n\r\n## Motivation\r\n<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too -->\r\nI recently switched from tensorflow to pytorch for what I saw as greater flexibility and user control. However, I have been recently frustrated by the inability to specify a cap on the fraction of GPU memory my pytorch process should be using. 
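\r\n\r\nTo make the kind of cap I am after concrete, here is a minimal sketch, assuming a hypothetical `torch.cuda.set_per_process_memory_fraction` helper (the name is an assumption, mirroring TF's `gpu_memory_fraction`):\r\n\r\n```python\r\nimport torch\r\n\r\n# Hypothetical API sketch: cap this process at 25% of GPU 0.\r\n# Allocations beyond the cap should raise an OOM error in this process,\r\n# instead of starving sibling processes on the same device.\r\nif torch.cuda.is_available():\r\n    torch.cuda.set_per_process_memory_fraction(0.25, device=0)\r\n```\r\n\r\n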
I have what I think is a fairly standard use case: performing hyper-parameter search by running multiple independent training processes in parallel (there is a whole ecosystem of packages for this). A modern GPU is large enough to train 4-8+ models of my size, but very rarely a configuration is selected which uses almost my full GPU memory. Instead of that memory-hungry process failing with an OOM by using more than its share (the sensible behavior), all the other training processes die, and as my script automatically generates more to take their place, they fail too, leading to a catastrophic global failure. \r\n\r\nThere are additional use cases like shared servers, training models with small batch sizes for statistical efficiency, and using very large GPUs effectively (like modern Voltas). \r\n\r\nIn tensorflow, this can be done simply by passing a `ConfigProto` with `gpu_memory_fraction=x`. \r\n\r\nThis seems to be a pain point for others as well (see Additional context). I know that for me, finding a way to do this will probably make the difference in whether I switch back to TF. \r\n\r\n## Pitch\r\n\r\n<!-- A clear and concise description of what you want to happen. -->\r\nIt would be awesome if an option like gpu_memory_fraction could be set somewhere in the pytorch flow. My current recommendation would be to allow an optional arg or kwarg to torch.device, like the index argument, which would specify the fraction of GPU memory to use. If an allocation would exceed `gpu_mem_fraction * total_gpu_mem`, raise OOM. \r\n\r\n## Alternatives\r\n\r\n<!-- A clear and concise description of any alternative solutions or features you've considered, if any. -->\r\nIt seems like you might be able to do some kind of hacky workaround using https://pytorch.org/docs/stable/cuda.html#torch.cuda.max_memory_allocated and monitoring of each process being trained.\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context or screenshots about the feature request here. 
-->\r\nhttps://discuss.pytorch.org/t/how-to-set-a-limit-to-gpu-usage/7271\r\nhttps://stackoverflow.com/questions/49529372/force-gpu-memory-limit-in-pytorch\r\nhttps://discuss.pytorch.org/t/limiting-gpu-usage/7662\r\n\n\ncc @ezyang @gchanan @zou3519 @bdhirsh @jbschlosser @ngimel", "pr_html_url": null, "file_loc": {"base_commit": "47aa2536328afc51876b2e04384c0cfe71ee1f06", "files": [{"path": "c10/cuda/CUDACachingAllocator.cpp", "status": "modified", "Loc": {"(None, None, None)": {"add": [204, 640, 944]}, "(None, None, 221)": {"add": [243, 247, 270]}, "(None, None, 363)": {"add": [375]}, "(None, None, 630)": {"add": [632], "mod": [634]}, "(None, None, 649)": {"add": [667]}, "(None, None, 845)": {"add": [848]}}}, {"path": "c10/cuda/CUDACachingAllocator.h", "status": "modified", "Loc": {"(None, 'StatType', 54)": {"add": [113]}}}, {"path": "test/test_cuda.py", "status": "modified", "Loc": {"('TestCuda', None, 94)": {"add": [393]}}}, {"path": "torch/_C/__init__.pyi.in", "status": "modified", "Loc": {"(None, None, 582)": {"add": [582]}}}, {"path": "torch/csrc/cuda/Module.cpp", "status": "modified", "Loc": {"(None, None, None)": {"add": [265]}, "(None, None, 500)": {"add": [500]}}}, {"path": "torch/cuda/memory.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [74]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": "https://github.com/pytorch/pytorch/commit/47aa2536328afc51876b2e04384c0cfe71ee1f06", "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": null, "info_type": null}, "loctype": {"code": ["torch/cuda/memory.py", "c10/cuda/CUDACachingAllocator.h", "c10/cuda/CUDACachingAllocator.cpp", "torch/csrc/cuda/Module.cpp", "torch/_C/__init__.pyi.in"], "doc": [], "test": ["test/test_cuda.py"], "config": [], "asset": []}}, {"organization": "pytorch", "repo_name": "pytorch", "base_commit": "443fe7ca0e6169b7178df18dbefd7823f1246f50", "iss_has_pr": 1, "iss_html_url": "https://github.com/pytorch/pytorch/issues/29984", "iss_label": "high priority\ntriaged\nmodule: cublas", "title": "Some cublas functions don't handle inputs with zero strides", "body": "## \ud83d\udc1b Bug\r\n\r\n<!-- A clear and concise description of what the bug is. 
-->\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n```python\r\nimport torch\r\nimport torch.nn as nn\r\n\r\ntorch.set_default_tensor_type('torch.cuda.FloatTensor')\r\nx = nn.Parameter(torch.ones(2, 2))\r\n(x @ torch.ones(2)).sum().backward()\r\n---------------------------------------------------------------------------\r\nRuntimeError Traceback (most recent call last)\r\n<ipython-input-11-c3b66f275e9a> in <module>()\r\n 1 x = nn.Parameter(torch.ones(2, 2))\r\n----> 2 (x @ torch.ones(2)).sum().backward()\r\n\r\n1 frames\r\n/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)\r\n 97 Variable._execution_engine.run_backward(\r\n 98 tensors, grad_tensors, retain_graph, create_graph,\r\n---> 99 allow_unreachable=True) # allow_unreachable flag\r\n 100 \r\n 101 \r\n\r\nRuntimeError: cublas runtime error : an invalid numeric value was used as an argument at /pytorch/aten/src/THC/THCBlas.cu:120\r\n```\r\n## Expected behavior\r\n\r\nNo exception is raised.\r\n\r\n## Environment\r\n```\r\nCollecting environment information...\r\nPyTorch version: 1.3.1+cu100\r\nIs debug build: No\r\nCUDA used to build PyTorch: 10.0.130\r\n\r\nOS: Ubuntu 18.04.3 LTS\r\nGCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0\r\nCMake version: version 3.12.0\r\n\r\nPython version: 3.6\r\nIs CUDA available: Yes\r\nCUDA runtime version: 10.0.130\r\nGPU models and configuration: GPU 0: Tesla T4\r\nNvidia driver version: 418.67\r\ncuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.3\r\n\r\nVersions of relevant libraries:\r\n[pip3] numpy==1.17.4\r\n[pip3] torch==1.3.1+cu100\r\n[pip3] torchsummary==1.5.1\r\n[pip3] torchtext==0.3.1\r\n[pip3] torchvision==0.4.2+cu100\r\n[conda] Could not collect\r\n```\r\n\r\nThis code was run on Google Colab. The bug also reproduces on my server (Ubuntu 16.04LTS, 4x GTX1080Ti GPUs).\r\n\r\n## Additional context\r\n\r\nThe bug doesn't occur if:\r\n1. If I replace `sum()` with `mean()`\r\n```python\r\n(x @ torch.ones(2)).mean().backward()\r\n```\r\n\r\n2. If I use a different shape of the tensor\r\n```python\r\n(x @ torch.ones(2, 1)).sum().backward()\r\n```\r\n\r\n3. 
If I run the code on CPU\r\n\r\nTherefore, I believe that this is not intended behavior.\n\ncc @ezyang @gchanan @zou3519 @jerryzh168 @SsnL @albanD @gqchen", "pr_html_url": "https://github.com/pytorch/pytorch/pull/38321", "file_loc": {"base_commit": "52e9953faffe45d48660fc666db3b520b918c37c", "files": [{"path": "aten/src/ATen/native/cuda/Blas.cu", "status": "modified", "Loc": {"(None, None, 10)": {"add": [10]}, "(None, None, 16)": {"mod": [16]}, "(None, None, 22)": {"mod": [22]}, "(None, None, 28)": {"mod": [28]}}}, {"path": "test/test_autograd.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [6154]}}}, {"path": "test/test_torch.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [16620]}, "('TestTorchDeviceType', None, 5321)": {"mod": [13015]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["aten/src/ATen/native/cuda/Blas.cu"], "doc": [], "test": ["test/test_torch.py", "test/test_autograd.py"], "config": [], "asset": []}}, {"organization": "xtekky", "repo_name": "gpt4free", "base_commit": "ebc10fa465adc32b165afa5f968e4fb6bf26a8ea", "iss_has_pr": 1, "iss_html_url": "https://github.com/xtekky/gpt4free/issues/617", "iss_label": "bug", "title": "No module named 'info'", "body": "ModuleNotFoundError: No module named 'info'\r\nTraceback:\r\nFile \"/Users/apple/.pyenv/versions/3.10.11/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py\", line 565, in _run_script\r\n exec(code, module.__dict__)\r\nFile \"/Users/apple/Documents/AI/gpt4free/gui/streamlit_app.py\", line 7, in <module>\r\n from gpt4free import you\r\nFile \"/Users/apple/Documents/AI/gpt4free/gui/../gpt4free/__init__.py\", line 8, in <module>\r\n from gpt4free import aicolors\r\nFile \"/Users/apple/Documents/AI/gpt4free/gui/../gpt4free/aicolors/__init__.py\", line 4, in <module>\r\n from typings import AiColorsResponse\r\nFile \"/Users/apple/.pyenv/versions/3.10.11/lib/python3.10/site-packages/typings/__init__.py\", line 1, in <module>\r\n from .database import *\r\nFile \"/Users/apple/.pyenv/versions/3.10.11/lib/python3.10/site-packages/typings/database.py\", line 2, in <module>\r\n from info import DATABASE_NAME, DATABASE_URI, IMDB, IMDB_TEMPLATE, MELCOW_NEW_USERS, P_TTI_SHOW_OFF, SINGLE_BUTTON, SPELL_CHECK_REPLY, PROTECT_CONTENT", "pr_html_url": "https://github.com/xtekky/gpt4free/pull/620", "file_loc": {"base_commit": "ebc10fa465adc32b165afa5f968e4fb6bf26a8ea", "files": [{"path": "gpt4free/aicolors/__init__.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [4]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["gpt4free/aicolors/__init__.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "xtekky", "repo_name": "gpt4free", "base_commit": "c5691c5993f8595d90052e4a81b582d63fe81919", "iss_has_pr": 1, "iss_html_url": "https://github.com/xtekky/gpt4free/issues/913", "iss_label": "bug\nstale", "title": "TypeError: unhashable type: 'Model'", "body": "import g4f, asyncio\r\n\r\nasync def run_async():\r\n _providers = [\r\n g4f.Provider.ChatgptAi,\r\n g4f.Provider.ChatgptLogin,\r\n g4f.Provider.DeepAi,\r\n g4f.Provider.Opchatgpts,\r\n g4f.Provider.Vercel,\r\n g4f.Provider.Wewordle,\r\n g4f.Provider.You,\r\n 
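# (every provider in this list is queried concurrently via asyncio.gather below)\r\n        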
g4f.Provider.Yqcloud,\r\n ]\r\n responses = [\r\n provider.create_async(\r\n model=g4f.models.default,\r\n messages=[{\"role\": \"user\", \"content\": \"Hello\"}],\r\n )\r\n for provider in _providers\r\n ]\r\n responses = await asyncio.gather(*responses)\r\n for idx, provider in enumerate(_providers):\r\n print(f\"{provider.__name__}:\", responses[idx])\r\n\r\nasyncio.run(run_async())\r\n\r\nTypeError: unhashable type: 'Model'\r\n\r\nVersion 0.0.3.0", "pr_html_url": "https://github.com/xtekky/gpt4free/pull/924", "file_loc": {"base_commit": "c5691c5993f8595d90052e4a81b582d63fe81919", "files": [{"path": "README.md", "status": "modified", "Loc": {"(None, None, None)": {"mod": [241, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 277]}}}, {"path": "g4f/Provider/Aivvm.py", "status": "modified", "Loc": {"('Aivvm', 'create_async_generator', 31)": {"mod": [44]}}}, {"path": "g4f/Provider/Bard.py", "status": "modified", "Loc": {"('Bard', None, 12)": {"add": [15]}, "('Bard', 'create_async', 18)": {"mod": [34, 45, 46, 48, 49, 50, 51, 60]}}}, {"path": "g4f/Provider/ChatgptLogin.py", "status": "modified", "Loc": {"('ChatgptLogin', 'create_async', 16)": {"mod": [55]}}}, {"path": "g4f/Provider/CodeLinkAva.py", "status": "modified", "Loc": {"('CodeLinkAva', 'create_async_generator', 16)": {"mod": [43, 46, 47]}}}, {"path": "g4f/Provider/H2o.py", "status": "modified", "Loc": {"('H2o', None, 12)": {"add": [85]}, "('H2o', 'create_async_generator', 18)": {"mod": [26, 39, 46, 74]}}}, {"path": "g4f/Provider/HuggingChat.py", "status": "modified", "Loc": {"('HuggingChat', 'create_async_generator', 18)": {"add": [31], "mod": [28, 29, 40, 65, 79, 80, 92]}}}, {"path": "g4f/Provider/Vitalentum.py", "status": "modified", "Loc": {"('Vitalentum', 'create_async_generator', 16)": {"mod": [49]}}}, {"path": "g4f/Provider/__init__.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [26, 43, 69], "mod": [40]}}}, {"path": "g4f/Provider/base_provider.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [3, 103], "mod": [8]}, "('BaseProvider', 'create_completion', 21)": {"mod": [24]}, "('AsyncProvider', 'create_completion', 42)": {"mod": [49]}, "('AsyncProvider', 'create_async', 53)": {"mod": [55]}, "('AsyncGeneratorProvider', 'create_completion', 63)": {"mod": [70, 72, 73, 81]}, "('AsyncGeneratorProvider', 'create_async', 86)": {"mod": [92]}}}, {"path": "g4f/__init__.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [8], "mod": [3]}, "('ChatCompletion', 'create', 11)": {"add": [54], "mod": [16, 18, 19, 20, 21, 22, 24, 25, 26, 27, 28, 29, 30, 31, 33, 34, 36, 37, 46, 47, 48, 50, 51]}}}, {"path": "g4f/models.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [4, 7, 14, 17, 27, 33, 34, 35]}}}, {"path": "testing/test_chat_completion.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [6, 8, 10, 11, 12]}}}, {"path": "testing/test_providers.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [3]}, "('Styles', None, 11)": {"mod": [11, 12, 13, 14]}, "(None, 'main', 16)": {"mod": [32, 36]}, "(None, 'get_providers', 39)": {"mod": [48, 49, 53]}, "(None, 'create_response', 56)": {"mod": [57, 58, 59, 60, 61, 62]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["g4f/Provider/CodeLinkAva.py", 
"g4f/Provider/H2o.py", "g4f/Provider/ChatgptLogin.py", "g4f/Provider/Aivvm.py", "g4f/Provider/HuggingChat.py", "g4f/Provider/__init__.py", "g4f/__init__.py", "g4f/models.py", "g4f/Provider/Vitalentum.py", "g4f/Provider/Bard.py", "g4f/Provider/base_provider.py"], "doc": ["README.md"], "test": ["testing/test_chat_completion.py", "testing/test_providers.py"], "config": [], "asset": []}}, {"organization": "xtekky", "repo_name": "gpt4free", "base_commit": "2dcdce5422cd01cd058490d4daef5f69300cca89", "iss_has_pr": 1, "iss_html_url": "https://github.com/xtekky/gpt4free/issues/2006", "iss_label": "bug\nstale", "title": "CORS not enabled for API", "body": "**Bug description**\r\n\r\nRun docker image\r\nTry to access the Completion API via Javascript console in Browser\r\n\r\n`fetch(\"http://localhost:1337/v1/chat/completions\", {\r\n \"headers\": {\r\n \"accept-language\": \"de-DE,de;q=0.9,en-DE;q=0.8,en;q=0.7,en-US;q=0.6\",\r\n \"cache-control\": \"no-cache\",\r\n \"content-type\": \"application/json\",\r\n \"pragma\": \"no-cache\",\r\n \"sec-ch-ua\": \"\\\"Google Chrome\\\";v=\\\"125\\\", \\\"Chromium\\\";v=\\\"125\\\", \\\"Not.A/Brand\\\";v=\\\"24\\\"\",\r\n \"sec-ch-ua-mobile\": \"?0\",\r\n \"sec-ch-ua-platform\": \"\\\"Windows\\\"\",\r\n \"sec-fetch-dest\": \"empty\",\r\n \"sec-fetch-mode\": \"cors\",\r\n \"sec-fetch-site\": \"same-origin\",\r\n },\r\n \"referrerPolicy\": \"no-referrer\",\r\n\"body\": JSON.stringify({model: \"gpt-3.5-turbo\",\r\n messages: [{\"role\": \"user\", \"content\": \"Hello\"}]}),\r\n\t\"method\": \"POST\",\r\n \"mode\": \"cors\"\r\n})`\r\n\r\n`Promise\u00a0{<pending>}\r\nAccess to fetch at 'http://localhost:1337/v1/chat/completions' from origin 'null' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. 
If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.`\r\n\r\nSetting \"mode\": \"no-cors\" is not an option, because for some reason Chrome then resets the request's \"content-type\": \"application/json\" to \"content-type\": \"text/plain\"\r\n\r\n**Screenshots**\r\n\r\n![image](https://github.com/xtekky/gpt4free/assets/8710166/60040772-b3e7-4880-a70a-84248aae12e2)\r\n\r\n\r\n**Environment**\r\nDocker\r\n\r\n**Additional context**\r\nA solution might be to just add a CORSMiddleware to the FastAPI app:\r\nhttps://fastapi.tiangolo.com/tutorial/cors/\r\n\r\nIf I have some time, I might create a PR\r\n", "pr_html_url": "https://github.com/xtekky/gpt4free/pull/2281", "file_loc": {"base_commit": "2dcdce5422cd01cd058490d4daef5f69300cca89", "files": [{"path": "g4f/api/__init__.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [14]}, "(None, 'create_app', 24)": {"add": [26]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["g4f/api/__init__.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "xtekky", "repo_name": "gpt4free", "base_commit": "5d8e603095156303a016cc16e2811a8f2bc74f15", "iss_has_pr": 1, "iss_html_url": "https://github.com/xtekky/gpt4free/issues/1338", "iss_label": "bug", "title": "How to use providers via HTTP Request ?", "body": "I am trying to use the API version of this project, but the provider option in my request is not working. Am I doing something wrong?\r\n\r\n```js\r\nconst response = await axios.post(\r\n `${API_BASE}`,\r\n {\r\n provider: 'g4f.Provider.ChatgptAi',\r\n temperature:0.75,\r\n top_p: 0.6,\r\n model: 'gpt-3.5-long',\r\n messages: [ \r\n {role: 'user', content: msg }\r\n ],\r\n },\r\n {\r\n headers: {\r\n 'Content-Type': 'application/json',\r\n }\r\n }\r\n );\r\n return response.data.choices[0].message.content\r\n```\r\n\r\nThis is the code. I already tried with `provider: 'ChatgptAi'` and other providers, but the API always ignores the provider option.", "pr_html_url": "https://github.com/xtekky/gpt4free/pull/1344", "file_loc": {"base_commit": "5d8e603095156303a016cc16e2811a8f2bc74f15", "files": [{"path": "g4f/api/__init__.py", "status": "modified", "Loc": {"('Api', 'chat_completions', 71)": {"add": [86, 94, 100]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["g4f/api/__init__.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "xtekky", "repo_name": "gpt4free", "base_commit": "0d8e4ffa2c0706b0381f53c3985d04255b7170f5", "iss_has_pr": 1, "iss_html_url": "https://github.com/xtekky/gpt4free/issues/2334", "iss_label": "bug", "title": "Model \"command-r+\" returning 401 error: \"You have to be logged in\"", "body": "**Bug description**\r\n\r\nI'm experiencing an issue with the model \"command-r+\" not working. 
When attempting to use this model through the g4f API (running \"g4f api\"), I receive the following error:\r\n\r\n```\r\nERROR:root:Request failed with status code: 401, response: {\"error\":\"You have to be logged in.\"}\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.11/site-packages/g4f/api/__init__.py\", line 177, in chat_completions\r\n response = self.client.chat.completions.create(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/lib/python3.11/site-packages/g4f/client/client.py\", line 241, in create\r\n return next(response)\r\n ^^^^^^^^^^^^^^\r\n File \"/usr/lib/python3.11/site-packages/g4f/client/client.py\", line 119, in iter_append_model_and_provider\r\n for chunk in response:\r\n File \"/usr/lib/python3.11/site-packages/g4f/client/client.py\", line 79, in iter_response\r\n for chunk in response:\r\n File \"/usr/lib/python3.11/site-packages/g4f/Provider/HuggingChat.py\", line 83, in create_completion\r\n raise RuntimeError(f\"Request failed with status code: {response.status_code}, response: {response.text}\")\r\nRuntimeError: Request failed with status code: 401, response: {\"error\":\"You have to be logged in.\"}\r\n```\r\n\r\nThe model \"command-r+\" was working perfectly until yesterday. I am using g4f with Docker.\r\n\r\n**Steps to Reproduce**\r\n\r\n1. Run \"g4f api\" with the \"command-r+\" model.\r\n2. Observe the error message in the terminal output.\r\n\r\n**Environment**\r\n- Python version: 3.11\r\n- Location: Not in a Cloudflare-flagged country\r\n- g4f version: [g4f v-0.3.3.3]\r\n\r\n**Additional context**\r\nNo recent changes were made on my side. The model was working without issues until yesterday.", "pr_html_url": "https://github.com/xtekky/gpt4free/pull/2313", "file_loc": {"base_commit": "0d8e4ffa2c0706b0381f53c3985d04255b7170f5", "files": [{"path": "README.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [72], "mod": [31, 169, 186, 197, 198, 199, 200, 293, 299, 305, 773, 776]}}}, {"path": "docs/async_client.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [264], "mod": [60, 102, 233, 309]}}}, {"path": "docs/client.md", "status": "modified", "Loc": {"(None, None, None)": {"mod": [65, 107]}}}, {"path": "docs/docker.md", "status": "modified", "Loc": {"(None, None, None)": {"mod": [74]}}}, {"path": "docs/git.md", "status": "modified", "Loc": {"(None, None, None)": {"mod": [98]}}}, {"path": "docs/interference-api.md", "status": "modified", "Loc": {"(None, None, None)": {"mod": [71, 111, 138]}}}, {"path": "docs/providers-and-models.md", "status": "modified", "Loc": {"(None, None, None)": {"add": [11, 133, 192], "mod": [19, 21, 22, 23, 24, 27, 28, 31, 35, 36, 37, 38, 39, 40, 41, 44, 46, 47, 48, 49, 50, 51, 54, 56, 60, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 92, 107, 108, 109, 110, 111, 112, 113, 114, 143, 150, 173, 178, 202, 225]}}}, {"path": "g4f/Provider/AIUncensored.py", "status": "modified", "Loc": {"('AIUncensored', None, 11)": {"add": [27], "mod": [12, 18, 19, 20, 21, 23, 24, 25, 105, 106, 107, 108, 110, 111, 112]}, "('AIUncensored', 'get_model', 29)": {"add": [31]}, "(None, None, None)": {"mod": [4, 8]}, "('AIUncensored', 'create_async_generator', 36)": {"mod": [41, 46, 47, 49, 50, 51, 52, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103]}}}, {"path": "g4f/Provider/Ai4Chat.py", "status": "modified", "Loc": {"('Ai4Chat', None, 
13)": {"mod": [17]}}}, {"path": "g4f/Provider/AiMathGPT.py", "status": "removed", "Loc": {}}, {"path": "g4f/Provider/Airforce.py", "status": "modified", "Loc": {"('Airforce', None, 13)": {"add": [17], "mod": [15, 16, 19, 21, 22, 23, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 86, 87, 88, 89, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124]}, "('Airforce', 'create_async_generator', 115)": {"add": [125], "mod": [127, 128, 129, 130, 131, 132, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 152, 153, 155, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166]}, "(None, None, None)": {"mod": [2, 3, 4, 8, 10, 11]}, "('Airforce', 'get_model', 106)": {"mod": [110]}, "('Airforce', '_generate_image', 135)": {"mod": [168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196]}, "('Airforce', '_generate_text', 182)": {"mod": [198, 199, 200, 201, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 228, 229, 230, 231, 232, 234, 235, 236, 237, 238, 240, 241, 243, 244, 245]}}}, {"path": "g4f/Provider/Allyfy.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [5], "mod": [3, 7]}, "('Allyfy', None, 11)": {"add": [14], "mod": [11]}, "('Allyfy', 'create_async_generator', 17)": {"add": [23, 42], "mod": [25, 26, 27, 28, 29, 30, 31, 32, 33, 36, 37, 38, 39, 44, 45, 47, 51, 55, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70]}}}, {"path": "g4f/Provider/Bing.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [20]}}}, {"path": "g4f/Provider/Blackbox.py", "status": "modified", "Loc": {"('Blackbox', None, 19)": {"add": [27, 114, 115, 121], "mod": [29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 57, 61, 65, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 82, 83, 84, 85, 86, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 109, 110, 111, 112, 113, 119, 131, 132, 133, 134, 136, 137, 138, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 161, 162, 163, 164, 165]}, "('Blackbox', 'create_async_generator', 168)": {"add": [229, 276], "mod": [172, 175, 177, 178, 179, 181, 182, 183, 184, 185, 186, 187, 188, 190, 191, 192, 195, 200, 202, 204, 205, 206, 208, 209, 211, 213, 214, 215, 216, 217, 218, 220, 221, 223, 224, 226, 233, 239, 240, 241, 242, 244, 245, 246, 248, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 263, 264, 278, 281, 282, 283, 284, 285, 286, 287, 288, 289, 291, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 351, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372]}, "(None, None, None)": {"mod": [3, 4, 8, 10, 12]}}}, {"path": "g4f/Provider/ChatGot.py", "status": "removed", "Loc": {}}, {"path": "g4f/Provider/ChatGpt.py", "status": "modified", "Loc": {"('ChatGpt', None, 73)": {"add": [74, 78, 79, 85]}, "('ChatGpt', 'create_completion', 88)": {"add": 
[94], "mod": [96, 97, 98, 99, 100, 101, 102, 103, 105, 106, 107, 108, 110, 111, 113, 116, 117, 118, 140, 141, 146, 149, 151, 154, 157, 158, 159, 160, 161, 162, 163, 169, 192, 210, 212, 213, 216, 217, 220, 221, 223, 225]}, "(None, None, None)": {"mod": [6]}}}, {"path": "g4f/Provider/ChatGptEs.py", "status": "modified", "Loc": {"('ChatGptEs', 'create_async_generator', 37)": {"mod": [60]}}}, {"path": "g4f/Provider/ChatHub.py", "status": "removed", "Loc": {}}, {"path": "g4f/Provider/ChatifyAI.py", "status": "removed", "Loc": {}}, {"path": "g4f/Provider/Cloudflare.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [2]}, "('Cloudflare', None, 12)": {"add": [12], "mod": [20, 21, 22, 25, 28, 31, 39, 42, 44, 46, 48, 51, 53, 55, 56, 57, 58, 60, 62, 64, 68, 71, 79, 84, 86, 88, 91, 92, 95, 98, 101, 102, 103, 107, 110, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212]}, "('Cloudflare', 'create_async_generator', 123)": {"add": [169, 189], "mod": [128, 129, 157, 160, 161, 166, 192, 193]}}}, {"path": "g4f/Provider/DarkAI.py", "status": "modified", "Loc": {"('DarkAI', None, 11)": {"mod": [12, 19, 21, 24]}, "('DarkAI', 'create_async_generator', 42)": {"mod": [54, 55, 80, 81, 82]}}}, {"path": "g4f/Provider/DeepInfraChat.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [9]}, "('DeepInfraChat', None, 12)": {"mod": [20, 22, 23, 25, 26, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 40, 41, 42, 43, 44, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55]}, "('DeepInfraChat', 'create_async_generator', 69)": {"mod": [100, 103, 104, 105, 106, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122]}}}, {"path": "g4f/Provider/Editee.py", "status": "removed", "Loc": {}}, {"path": "g4f/Provider/Free2GPT.py", "status": "modified", "Loc": {"('Free2GPT', None, 15)": {"mod": [19]}, "('Free2GPT', 'create_async_generator', 22)": {"mod": [52, 53, 54, 55, 57]}}}, {"path": "g4f/Provider/FreeChatgpt.py", "status": "removed", "Loc": {}}, {"path": "g4f/Provider/FreeGpt.py", "status": "modified", "Loc": {"('FreeGpt', None, 22)": {"mod": [27]}}}, {"path": "g4f/Provider/GizAI.py", "status": "modified", "Loc": {"('GizAI', None, 11)": {"add": [15], "mod": [12, 19, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 34, 35, 36, 37, 38, 39, 40, 42, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 70, 71, 72]}, "('GizAI', 'create_async_generator', 75)": {"add": [89], "mod": [101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 151]}, "(None, None, None)": {"mod": [3, 7]}}}, {"path": "g4f/Provider/HuggingChat.py", "status": "modified", "Loc": {"('HuggingChat', None, 11)": {"add": [21, 32]}, "('HuggingChat', 'create_completion', 49)": {"add": [85, 86, 144], "mod": [88, 89, 90, 91, 151]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["g4f/Provider/Ai4Chat.py", "g4f/Provider/ChatifyAI.py", "g4f/Provider/ChatGptEs.py", "g4f/Provider/DeepInfraChat.py", "g4f/Provider/AiMathGPT.py", "g4f/Provider/Allyfy.py", "g4f/Provider/ChatGpt.py", "g4f/Provider/Bing.py", "g4f/Provider/AIUncensored.py", "g4f/Provider/HuggingChat.py", "g4f/Provider/ChatHub.py", "g4f/Provider/FreeChatgpt.py", "g4f/Provider/Airforce.py", "g4f/Provider/DarkAI.py", 
"g4f/Provider/Blackbox.py", "g4f/Provider/ChatGot.py", "g4f/Provider/Editee.py", "g4f/Provider/GizAI.py", "g4f/Provider/FreeGpt.py", "g4f/Provider/Cloudflare.py", "g4f/Provider/Free2GPT.py"], "doc": ["docs/async_client.md", "docs/docker.md", "README.md", "docs/providers-and-models.md", "docs/git.md", "docs/interference-api.md", "docs/client.md"], "test": [], "config": [], "asset": []}}, {"organization": "xtekky", "repo_name": "gpt4free", "base_commit": "b2bfc88218d3ffb367c6a4bcb14c0748666d348f", "iss_has_pr": 1, "iss_html_url": "https://github.com/xtekky/gpt4free/issues/1206", "iss_label": "bug\nstale", "title": "OpenaiChat:\\lib\\asyncio\\base_events.py\", line 498, in _make_subprocess_transport raise NotImplementedError", "body": "![image](https://github.com/xtekky/gpt4free/assets/37258899/c14b81f6-a429-4cb9-9029-4fab53d0e812)\r\nthis problem happens today after I update to the latest version!", "pr_html_url": "https://github.com/xtekky/gpt4free/pull/1207", "file_loc": {"base_commit": "b2bfc88218d3ffb367c6a4bcb14c0748666d348f", "files": [{"path": "g4f/Provider/needs_auth/OpenaiChat.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [4]}, "(None, 'get_arkose_token', 146)": {"mod": [147, 148, 149, 150, 151, 177, 178, 179, 180, 182, 186, 187]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["g4f/Provider/needs_auth/OpenaiChat.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "Z4nzu", "repo_name": "hackingtool", "base_commit": "b2cf73c8f414cd9c30d920beb2e7a000934c1f92", "iss_has_pr": 1, "iss_html_url": "https://github.com/Z4nzu/hackingtool/issues/354", "iss_label": "", "title": "target not found yay and python-pip.19.1.1-1", "body": "i have a problem when i try to run bash install.sh it says error target not found yay, python-pip.19.1.1-1 , i have installed the yay and i have no idea how to install python-pip so i need help.\r\n\r\nOS: Arch linux 64x_86X\r\nshell: bash 5.1.6\r\n![image](https://user-images.githubusercontent.com/127435098/224668316-8f1bfd93-6f01-4264-b70d-19ebef17382f.png)\r\n", "pr_html_url": "https://github.com/Z4nzu/hackingtool/pull/355", "file_loc": {"base_commit": "b2cf73c8f414cd9c30d920beb2e7a000934c1f92", "files": [{"path": "install.sh", "status": "modified", "Loc": {"(None, None, None)": {"mod": [74, 96, 111]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["install.sh"]}}, {"organization": "Z4nzu", "repo_name": "hackingtool", "base_commit": "1e088ad35b66dda0ee9139a5220627f86cb54365", "iss_has_pr": 1, "iss_html_url": "https://github.com/Z4nzu/hackingtool/issues/347", "iss_label": "enhancement", "title": "Typos found by codespell", "body": "./tools/xss_attack.py:107: vulnerabilites ==> vulnerabilities\r\n./tools/information_gathering_tools.py:87: Scaning ==> Scanning\r\n./tools/information_gathering_tools.py:117: informations ==> information\r\n./tools/information_gathering_tools.py:168: informations ==> information\r\n./tools/forensic_tools.py:60: Aquire ==> Acquire\r\n./tools/wireless_attack_tools.py:51: bluetooh ==> bluetooth\r\n./tools/webattack.py:89: analizing ==> analyzing\r\n./tools/phising_attack.py:40: enginee ==> 
engine\r\n./tools/phising_attack.py:95: Engagment ==> Engagement\r\n./tools/payload_creator.py:89: writen ==> written\r\n./tools/others/socialmedia_finder.py:53: Usege ==> Usage", "pr_html_url": "https://github.com/Z4nzu/hackingtool/pull/350", "file_loc": {"base_commit": "1e088ad35b66dda0ee9139a5220627f86cb54365", "files": [{"path": ".github/workflows/lint_python.yml", "status": "modified", "Loc": {"(None, None, None)": {"mod": [25]}}}, {"path": "tools/forensic_tools.py", "status": "modified", "Loc": {"('Guymager', None, 59)": {"mod": [60]}}}, {"path": "tools/information_gathering_tools.py", "status": "modified", "Loc": {"('ReconSpider', None, 86)": {"mod": [87]}, "('Infoga', None, 115)": {"mod": [117]}, "('Shodan', None, 166)": {"mod": [168]}}}, {"path": "tools/others/socialmedia_finder.py", "status": "modified", "Loc": {"('Sherlock', None, 50)": {"mod": [53]}}}, {"path": "tools/payload_creator.py", "status": "modified", "Loc": {"('Venom', None, 85)": {"mod": [89]}}}, {"path": "tools/phising_attack.py", "status": "modified", "Loc": {"('Setoolkit', None, 37)": {"mod": [40]}, "('ISeeYou', None, 92)": {"mod": [95]}}}, {"path": "tools/webattack.py", "status": "modified", "Loc": {"('Dirb', None, 84)": {"mod": [89]}}}, {"path": "tools/wireless_attack_tools.py", "status": "modified", "Loc": {"('BluePot', None, 49)": {"mod": [51]}}}, {"path": "tools/xss_attack.py", "status": "modified", "Loc": {"('XSSStrike', None, 105)": {"mod": [107]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["tools/wireless_attack_tools.py", "tools/xss_attack.py", "tools/phising_attack.py", "tools/forensic_tools.py", "tools/others/socialmedia_finder.py", "tools/payload_creator.py", "tools/webattack.py", "tools/information_gathering_tools.py"], "doc": [], "test": [], "config": [".github/workflows/lint_python.yml"], "asset": []}}, {"organization": "Z4nzu", "repo_name": "hackingtool", "base_commit": "0a4faeac9c4f93a61c937b0e57023b693beeca6f", "iss_has_pr": 1, "iss_html_url": "https://github.com/Z4nzu/hackingtool/issues/174", "iss_label": "", "title": "SyntaxError: invalid syntax", "body": "Traceback (most recent call last):\r\n File \"/home/kali/hackingtool/hackingtool.py\", line 11, in <module>\r\n from tools.ddos import DDOSTools\r\n File \"/home/kali/hackingtool/tools/ddos.py\", line 29\r\n \"sudo\", \"python3 ddos\", method, url, socks_type5.4.1, threads, proxylist, multiple, timer])\r\n\r\nI'm getting this error someone can help? also in sudo in python3 ", "pr_html_url": "https://github.com/Z4nzu/hackingtool/pull/176", "file_loc": {"base_commit": "0a4faeac9c4f93a61c937b0e57023b693beeca6f", "files": [{"path": "tools/ddos.py", "status": "modified", "Loc": {"('ddos', 'run', 20)": {"mod": [29]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["tools/ddos.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "0e8e38e3b2f4b79f03fe8a3e655b9f506ab0f2a6", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/768", "iss_label": "", "title": "Arpack wrappers fail with new scipy", "body": "I have scipy 0.11.0.dev-c1ea274. 
This does not seem to play well with the current arpack wrappers.\nI'm a bit out of my depth there, though.\n", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/802", "file_loc": {"base_commit": "0e8e38e3b2f4b79f03fe8a3e655b9f506ab0f2a6", "files": [{"path": "sklearn/utils/arpack.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [55]}, "(None, 'svds', 1540)": {"add": [1598], "mod": [1540]}, "(None, 'eigs', 1048)": {"mod": [1048]}, "(None, 'eigsh', 1264)": {"mod": [1264]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["sklearn/utils/arpack.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "bb7e34bc52461749e6014787a05a9507eda11011", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/21668", "iss_label": "Build / CI\ncython", "title": "CI with boundscheck=False", "body": "I really dislike segmentation faults! Unfortunately, there are many issues reporting them.\r\nFindings in #21654, #21283 were easier with setting `boundscheck = True`.\r\n\r\n**Proposition**\r\nSet up one CI configuration that runs with `boundscheck = True` globally which should be easier now that #21512 is merged.", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/21779", "file_loc": {"base_commit": "c9e5067cb14de578ab48b64f399743b994e3ca94", "files": [{"path": "azure-pipelines.yml", "status": "modified", "Loc": {"(None, None, 202)": {"add": [202]}}}, {"path": "doc/computing/parallelism.rst", "status": "modified", "Loc": {"(None, None, 216)": {"add": [216]}}}, {"path": "sklearn/_build_utils/__init__.py", "status": "modified", "Loc": {"(None, 'cythonize_extensions', 40)": {"add": [72], "mod": [81]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["sklearn/_build_utils/__init__.py"], "doc": ["doc/computing/parallelism.rst"], "test": [], "config": ["azure-pipelines.yml"], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "64ab789905077ba8990522688c11177442e5e91f", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/29358", "iss_label": "Documentation", "title": "Sprints page", "body": "### Describe the issue linked to the documentation\n\nThe following sprints are listed: \r\nhttps://scikit-learn.org/stable/about.html#sprints\r\n\r\nBut, that is a small subset, given the list here: \r\nhttps://blog.scikit-learn.org/sprints/\r\n\r\nAre the sprints posted on the \"About Us\" page of a certain criteria, such as Dev sprints only?\n\n### Suggest a potential alternative/fix\n\n_No response_", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/29418", "file_loc": {"base_commit": "64ab789905077ba8990522688c11177442e5e91f", "files": [{"path": "doc/about.rst", "status": "modified", "Loc": {"(None, None, None)": {"mod": [548, 549, 551, 552, 553, 554, 555, 557, 558, 559, 560, 561, 563, 564, 565]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": ["doc/about.rst"], "test": [], 
"config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "41e129f1a6eb17a39ff0b25f682d903d0ae3c5af", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/5991", "iss_label": "Easy\nEnhancement", "title": "PERF : StratifiedShuffleSplit is slow when using large number of classes", "body": "When using large number of classes (e.g. > 10000, e.g for recommender systems), `StratifiedShuffleSplit` is very slow when compared to `ShuffleSplit`. Looking at the code, I believe that the following part: \n\n``` python\n for i, class_i in enumerate(classes):\n permutation = rng.permutation(class_counts[i])\n perm_indices_class_i = np.where((y == class_i))[0][permutation]\n```\n\n`l. 1070` in `sklearn.model_selection._split` is suboptimal : we should build an index matrix holding the indices for each class in the dataset (implying to do a single pass over data, maybe along with a `bincount(classes)`). Indeed np.where does a pass over `y` at each call, leading to a `O(n_classes * len(y))` complexity, whereas it could be `O(len(y))` only.\n\nI obtain a significant gain in perf doing:\n\n``` python\n\n class_indices = np.zeros((n_classes, class_counts.max()), dtype='int')\n count = np.zeros(n_classes, dtype='int')\n for i in range(len(y_indices)):\n class_indices[y_indices[i], count[y_indices[i]]] = i\n count[y_indices[i]] += 1\n```\n\nand subsequently replacing\n\n``` python\nperm_indices_class_i = np.where((y == class_i))[0][permutation]\n```\n\n by\n\n``` python\nperm_indices_class_i = class_indices[class_i,:class_counts[i]][permutation]\n```\n\nThis is suboptimal given we iterate over y values using within a Python loop. I believe that the proper way to do this would be to create a `bincount_with_ref` cython function that would both count the occurence of classes and accumulate class index in a `class_indices` array - in `arrayfuncs.pyx`. Memory usage goes up of `len(y) * sizeof('int')`, which is typically small when compared to `X` size.\n\nWould this be useful ? I'll have to provide benchmarks !\n", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/9197", "file_loc": {"base_commit": "41e129f1a6eb17a39ff0b25f682d903d0ae3c5af", "files": [{"path": "doc/whats_new.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [219]}}}, {"path": "sklearn/model_selection/_split.py", "status": "modified", "Loc": {"('StratifiedShuffleSplit', '_iter_indices', 1495)": {"add": [1523], "mod": [1536, 1538]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["sklearn/model_selection/_split.py"], "doc": ["doc/whats_new.rst"], "test": [], "config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "4143356c3c51831300789e4fdf795d83716dbab6", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/10336", "iss_label": "help wanted", "title": "Should mixture models have a clusterer-compatible interface", "body": "Mixture models are currently a bit different. They are basically clusterers, except they are probabilistic, and are applied to inductive problems unlike many clusterers. 
But they are unlike clusterers in API:\r\n* they have an `n_components` parameter, with identical purpose to `n_clusters`\r\n* they do not store the `labels_` of the training data\r\n* they do not have a `fit_predict` method\r\n\r\nAnd they are almost entirely documented separately.\r\n\r\nShould we make the MMs more like clusterers?", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/11281", "file_loc": {"base_commit": "4143356c3c51831300789e4fdf795d83716dbab6", "files": [{"path": "doc/whats_new/v0.20.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [583]}}}, {"path": "sklearn/mixture/base.py", "status": "modified", "Loc": {"('BaseMixture', 'fit', 172)": {"add": [190], "mod": [175, 243]}}}, {"path": "sklearn/mixture/tests/test_bayesian_mixture.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [3, 9], "mod": [17]}, "(None, 'test_invariant_translation', 400)": {"add": [421]}}}, {"path": "sklearn/mixture/tests/test_gaussian_mixture.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [5, 571]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["sklearn/mixture/base.py"], "doc": ["doc/whats_new/v0.20.rst"], "test": ["sklearn/mixture/tests/test_bayesian_mixture.py", "sklearn/mixture/tests/test_gaussian_mixture.py"], "config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "d7795a431e30d23f7e8499bdbe89dbdc6e9a068e", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/16001", "iss_label": "Bug\nEasy\ngood first issue\nhelp wanted", "title": "Possible infinite loop iterations in synthetic data sets generation module", "body": "Hello,\r\n\r\nI found that two code snippets in https://github.com/scikit-learn/scikit-learn/blob/7e85a6d1f/sklearn/datasets/_samples_generator.py are susceptible to infinite loop iterations when using make_multilabel_classification():\r\n\r\n1) https://github.com/scikit-learn/scikit-learn/blob/7e85a6d1f/sklearn/datasets/_samples_generator.py#L357\r\n\r\n2) https://github.com/scikit-learn/scikit-learn/blob/7e85a6d1f/sklearn/datasets/_samples_generator.py#L371\r\n\r\nThese happen when the parameters of the make_multilabel_classification function are EITHER (allowed_unlabeled = False and n_classes = 0) OR length = 0.\r\n\r\nI am using version 0.20.3 of scikit-learn.\r\n\r\nPlease let me know if you have any questions about this.\r\nThank You\r\n", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/16006", "file_loc": {"base_commit": "d7795a431e30d23f7e8499bdbe89dbdc6e9a068e", "files": [{"path": "doc/whats_new/v0.23.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [65]}}}, {"path": "sklearn/datasets/_samples_generator.py", "status": "modified", "Loc": {"(None, 'make_multilabel_classification', 263)": {"add": [344]}}}, {"path": "sklearn/datasets/tests/test_samples_generator.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [224]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["sklearn/datasets/_samples_generator.py"], "doc": ["doc/whats_new/v0.23.rst"], "test": ["sklearn/datasets/tests/test_samples_generator.py"], "config": [], "asset": []}}, {"organization": 
"scikit-learn", "repo_name": "scikit-learn", "base_commit": "0e3cbbdcdfeec1c6b10aea11524add6350a8f4e0", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/933", "iss_label": "", "title": "Speed up tree construction", "body": "CC: @pprett @amueller @bdholt1 \n\nHi folks,\n\nEveryone will agree that tree-based methods have shown to perform quite well (e.g., the recent achievement of Peter!) and are increasingly used by our users. However, the tree module still has a major drawback: it is slow as hell in comparison to other machine learning packages. \n\nFor that reason, I think we should put some more effort into accelerating the tree module. In particular, I would like to suggest to move the whole `Tree` class (not the estimators, but only our struct-of-arrays representation) from tree.py into Cython in _tree.pyx. First the code would be a lot faster. But second, it could also actually be more readable and maintainable if the whole tree construction process was packaged into a single file, in a single class. Currently, the construction process is indeed split across 2 files, estimator classes, the Tree class and all the Cython routines. (imo, this is a mess.)\n\nTo show that indeed the construction process could be a lot faster, I profiled `recursive_partition` using line-profiler (see link below). Insignicant Python instructions do actually take quite some time in comparison to the important parts of the algorithm. E.g., line 314 vs line 320. A mere Python if-statement is only twice faster than finding the best threshold!!! \n\nI let you examine the rest of the profiling report by yourself, but as far as I am concerned, I am convinced that we could indeed significantly speed up the tree module (and be 5-10x faster at least). \n\nhttp://pastebin.com/0rC1QmPy (toggle text warping)\n\nWhat's your opinion about this? 
Since I am increasingly using the module myself, I can actually work on that in the days to come.\n", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/946", "file_loc": {"base_commit": "0e3cbbdcdfeec1c6b10aea11524add6350a8f4e0", "files": [{"path": "doc/whats_new.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [11]}}}, {"path": "sklearn/ensemble/_gradient_boosting.c", "status": "modified", "Loc": {"(None, None, None)": {"add": [381, 637, 673, 746, 931, 973, 4913, 5993], "mod": [1, 608, 760, 975, 996, 1026, 1027, 1031, 1033, 1056, 1058, 1073, 1075, 1088, 1092, 1093, 1309, 1628, 3821, 3823, 3838, 3840]}, "(None, 'PyInit__gradient_boosting', 4052)": {"add": [4118], "mod": [4135, 4142, 4144, 4147, 4150, 4157, 4159, 4162, 4169, 4171]}, "(None, '__pyx_f_7sklearn_8ensemble_18_gradient_boosting__predict_regression_tree_inplace_fast', 1096)": {"mod": [1096, 1100, 1110, 1113, 1114, 1115, 1116, 1117, 1119, 1120, 1121, 1130, 1131, 1135, 1139, 1142, 1143, 1147, 1150, 1152, 1153, 1157, 1164, 1165, 1166, 1169, 1173, 1174, 1177, 1183, 1186, 1188, 1190, 1195, 1196, 1198, 1202, 1207, 1209]}, "(None, None, 1225)": {"mod": [1258, 1264, 1270, 1274, 1286, 1291, 1297, 1298, 1299]}, "(None, None, 1317)": {"mod": [1324, 1340, 1341, 1342, 1361, 1366, 1371, 1375, 1384, 1393, 1398, 1402, 1406, 1411, 1412, 1422, 1433, 1444, 1445, 1448, 1449, 1450, 1451, 1452, 1453, 1454, 1455, 1456, 1458, 1459, 1460, 1461, 1462, 1463, 1464, 1465, 1466, 1468, 1469, 1470, 1471, 1472, 1473, 1474, 1475, 1476, 1478, 1479, 1480, 1481, 1482, 1483, 1484, 1485, 1486, 1488, 1489, 1495, 1496, 1497, 1498, 1499, 1507, 1508, 1509, 1524]}, "(None, None, 1534)": {"mod": [1569, 1575, 1581, 1587, 1591, 1603, 1605, 1610, 1616, 1617, 1618]}, "(None, None, 1636)": {"mod": [1642, 1656, 1657, 1658, 1677, 1682, 1687, 1691, 1700, 1709, 1714, 1718, 1722, 1727, 1729, 1738, 1739, 1749, 1750, 1753, 1754, 1755, 1756, 1757, 1758, 1759, 1760, 1761, 1763, 1764, 1765, 1766, 1767, 1768, 1769, 1770, 1771, 1773, 1774, 1775, 1776, 1777, 1778, 1779, 1780, 1781, 1783, 1784, 1785, 1786, 1787, 1788, 1789, 1790, 1791, 1793, 1794, 1799, 1800, 1801, 1802, 1803, 1810, 1811, 1812, 1827]}, "(None, '__Pyx_InitCachedBuiltins', 3843)": {"mod": [3844]}, "(None, '__Pyx_InitCachedConstants', 3852)": {"mod": [3940, 3947, 3983, 3985, 3992, 4031]}, "(None, 'int', 5092)": {"mod": [5092, 5093, 5094, 5095, 5096, 5097, 5098, 5099, 5100, 5101, 5102]}}}, {"path": "sklearn/ensemble/_gradient_boosting.pyx", "status": "modified", "Loc": {"(None, None, None)": {"add": [14], "mod": [21, 22, 23, 24, 75, 79, 80, 83, 85, 104, 115, 116, 117, 118, 139, 145, 146, 147, 148]}}}, {"path": "sklearn/ensemble/forest.py", "status": "modified", "Loc": {"('BaseForest', 'fit', 209)": {"add": [259, 265], "mod": [227, 250]}, "(None, None, None)": {"mod": [47]}, "('ForestClassifier', 'predict_proba', 420)": {"mod": [439]}, "('ForestRegressor', 'predict', 528)": {"mod": [545]}}}, {"path": "sklearn/ensemble/gradient_boosting.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [35, 36, 38, 40]}, "('LossFunction', 'update_terminal_regions', 145)": {"mod": [165, 166, 167, 174]}, "('BaseGradientBoosting', 'fit_stage', 482)": {"mod": [494, 495, 496, 497]}}}, {"path": "sklearn/ensemble/tests/test_forest.py", "status": "modified", "Loc": {"(None, 'test_probability', 140)": {"add": [141]}, "(None, None, None)": {"add": [159, 358]}, "(None, 'test_multioutput', 305)": {"add": [306]}}}, {"path": "sklearn/ensemble/tests/test_gradient_boosting.py", "status": "modified", "Loc": {"(None, 
'test_feature_importances', 195)": {"mod": [195, 196, 197, 198, 199, 201, 202, 204]}}}, {"path": "sklearn/tree/_tree.pyx", "status": "modified", "Loc": {"(None, None, None)": {"add": [9, 22, 24, 33, 91, 106, 195, 196, 236, 351, 358, 560, 597], "mod": [15, 16, 17, 18, 32, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 48, 50, 51, 52, 53, 54, 55, 62, 63, 64, 65, 66, 67, 76, 77, 86, 87, 93, 96, 99, 100, 120, 124, 142, 148, 149, 150, 151, 152, 153, 180, 181, 199, 200, 201, 202, 203, 204, 239, 245, 251, 253, 258, 259, 263, 307, 309, 349, 350, 362, 363, 365, 366, 368, 369, 371, 372, 374, 375, 377, 378, 402, 420, 430, 431, 432, 433, 434, 511, 512, 513, 514, 515, 516, 563, 567, 571, 573, 599, 600, 601, 604, 605, 607, 608, 609, 610, 611, 612, 613, 614, 616, 617, 618, 619, 620, 621, 622, 623, 624, 625, 627, 628, 629, 630, 631, 632, 634, 637, 638, 639, 640, 641, 642, 643, 644, 645, 646, 647, 648, 649, 650, 651, 652, 653, 654, 655, 658, 659, 660, 661, 662, 663, 664, 665, 666, 667, 668, 669, 671, 672, 673, 674, 675, 676, 677, 678, 680, 681, 682, 685, 686, 687, 688, 689, 690, 691, 692, 693, 694, 696, 698, 701, 702, 703, 736, 737, 738, 739, 740, 741, 742, 743, 744, 745, 747, 748, 749, 750, 752, 753, 755, 756, 757, 759, 760, 761, 763, 764, 765, 767, 768, 770, 771, 773, 774, 776, 777, 779, 780, 781, 782, 783, 785, 786, 788, 789, 791, 792, 793, 794, 795, 796, 797, 798, 799, 800, 801, 802, 803, 804, 805, 806, 808, 809, 810, 811, 812, 813, 814, 815, 816, 818, 819, 820, 821, 823, 824, 826, 828, 829, 830, 831, 832, 833, 835, 836, 837, 838, 839, 840, 842, 843, 845, 846, 847, 848, 850, 851, 852, 853, 854, 855, 856, 858, 859, 861, 862, 863, 864, 866, 868, 869, 870, 871, 872, 873, 874, 875, 877, 878, 880, 882, 883, 884, 885, 886, 887, 888, 889, 890, 891, 895, 896, 898, 899, 901, 902, 903, 905, 906, 907, 909, 910, 911, 913, 914, 916, 917, 919, 920, 922, 923, 927, 928, 929, 931, 932, 934, 935, 937, 938, 940, 941, 942, 943, 944, 945, 946, 947, 948, 949, 950, 951, 953, 954, 955, 956, 957, 958, 959, 960, 961, 963, 964, 965, 966, 968, 969, 971, 973, 974, 975, 976, 977, 978, 980, 981, 982, 983, 984, 985, 987, 988, 990, 991, 992, 993, 995, 996, 997, 999, 1000, 1002, 1003, 1004, 1005, 1006, 1008, 1009, 1011, 1012, 1013, 1014, 1016, 1018, 1019, 1020, 1022, 1023, 1025, 1026, 1027, 1028, 1030]}}}, {"path": "sklearn/tree/tests/test_tree.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [182]}, "(None, 'test_numerical_stability', 183)": {"add": [201, 202]}, "(None, 'test_min_samples_leaf', 316)": {"add": [317], "mod": [319, 321, 322, 323, 324, 325, 326, 328, 329]}}}, {"path": "sklearn/tree/tree.py", "status": "modified", "Loc": {"(None, 'node_to_str', 81)": {"add": [96], "mod": [82, 83, 84, 85, 91]}, "('BaseDecisionTree', 'fit', 465)": {"add": [501, 554], "mod": [487, 492, 512, 557, 558, 559, 560, 561, 562]}, "(None, 'recurse', 104)": {"mod": [105, 106, 107, 109, 113, 114, 117]}, "('Tree', None, 133)": {"mod": [133, 134, 136, 137, 138, 139, 140, 141, 143, 144, 145, 146, 148, 149, 150, 151, 153, 154, 156, 157, 159, 160, 162, 163, 164, 166, 167, 168, 170, 171, 172, 174, 175, 177, 178, 179, 180, 182, 184, 185, 187, 188, 190, 191, 192, 194, 195, 196, 198, 199, 200, 201, 203, 204, 206, 207, 208, 209, 210, 211, 212, 213, 215, 216, 217, 219, 220, 221, 222, 223, 224, 225, 227, 228, 230, 231, 232, 234, 236, 237, 238, 239, 240, 241, 243, 245, 247, 248, 249, 250, 251, 252, 254, 255, 256, 257, 259, 260, 261, 262, 264, 265, 267, 269, 270, 271, 272, 273, 274, 275, 276, 278, 279, 280, 282, 283, 284, 285, 286, 287, 288, 289, 290, 
291, 293, 295, 296, 297, 298, 300, 301, 302, 303, 304, 305, 306, 307, 308, 310, 311, 313, 314, 315, 316, 318, 319, 320, 321, 323, 324, 325, 326, 327, 329, 330, 331, 332, 334, 335, 337, 338, 339, 340, 341, 342, 343, 345, 346, 347, 348, 349, 350, 351, 353, 354, 355, 356, 357, 358, 359, 361, 363, 364, 366, 367, 369, 371, 372, 373, 375, 376, 377, 378, 379, 380, 382, 384, 385, 387, 389, 390, 391, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 411, 413, 414, 415, 416, 417, 418, 419, 421, 423, 424, 425, 427]}, "('BaseDecisionTree', '__init__', 439)": {"mod": [460]}, "('BaseDecisionTree', 'predict', 570)": {"mod": [587]}, "('DecisionTreeClassifier', 'predict_proba', 727)": {"mod": [742]}, "('ExtraTreeClassifier', '__init__', 932)": {"mod": [949]}, "('ExtraTreeRegressor', '__init__', 978)": {"mod": [995]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["sklearn/tree/_tree.pyx", "sklearn/ensemble/_gradient_boosting.pyx", "sklearn/ensemble/_gradient_boosting.c", "sklearn/ensemble/gradient_boosting.py", "sklearn/ensemble/forest.py", "sklearn/tree/tree.py"], "doc": ["doc/whats_new.rst"], "test": ["sklearn/tree/tests/test_tree.py", "sklearn/ensemble/tests/test_gradient_boosting.py", "sklearn/ensemble/tests/test_forest.py"], "config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "77aeb825b6494de1e3a2c1e7233b182e05d55ab0", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/27982", "iss_label": "Documentation\ngood first issue\nhelp wanted", "title": "Ensure that we have an example in the docstring of each public function or class", "body": "We should make sure that we have a small example for all public functions or classes. 
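Each missing entry amounts to adding a short doctest to the Examples section of the docstring. Roughly the shape it takes, using `make_low_rank_matrix` (the function touched by the linked PR; the doctest actually merged may differ):

```python
>>> from sklearn.datasets import make_low_rank_matrix
>>> X = make_low_rank_matrix(
...     n_samples=50, n_features=25, effective_rank=5, random_state=0
... )
>>> X.shape
(50, 25)
```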
Most of the missing examples are linked to functions.\r\n\r\nI could list the following classes and functions for which `numpydoc` did not find any example:\r\n\r\n- [x] sklearn.base.BaseEstimator\r\n- [x] sklearn.base.BiclusterMixin\r\n- [x] sklearn.base.ClassNamePrefixFeaturesOutMixin\r\n- [x] sklearn.base.ClassifierMixin\r\n- [x] sklearn.base.ClusterMixin\r\n- [x] sklearn.base.DensityMixin\r\n- [x] sklearn.base.MetaEstimatorMixin\r\n- [x] sklearn.base.OneToOneFeatureMixin\r\n- [x] sklearn.base.OutlierMixin\r\n- [x] sklearn.base.RegressorMixin\r\n- [x] sklearn.base.TransformerMixin\r\n- [x] sklearn.base.clone\r\n- [x] sklearn.base.is_classifier\r\n- [x] sklearn.base.is_regressor\r\n- [x] sklearn.cluster.affinity_propagation\r\n- [x] sklearn.cluster.cluster_optics_dbscan\r\n- [x] sklearn.cluster.cluster_optics_xi\r\n- [x] sklearn.cluster.compute_optics_graph\r\n- [x] sklearn.cluster.estimate_bandwidth\r\n- [x] sklearn.cluster.k_means\r\n- [x] sklearn.cluster.mean_shift\r\n- [x] sklearn.cluster.spectral_clustering\r\n- [x] sklearn.cluster.ward_tree\r\n- [x] sklearn.covariance.graphical_lasso\r\n- [x] sklearn.covariance.ledoit_wolf\r\n- [x] sklearn.covariance.ledoit_wolf_shrinkage\r\n- [x] sklearn.covariance.shrunk_covariance\r\n- [x] sklearn.datasets.clear_data_home\r\n- [x] sklearn.datasets.dump_svmlight_file\r\n- [x] sklearn.datasets.fetch_20newsgroups\r\n- [x] sklearn.datasets.fetch_20newsgroups_vectorized\r\n- [x] sklearn.datasets.fetch_california_housing\r\n- [x] sklearn.datasets.fetch_covtype\r\n- [x] sklearn.datasets.fetch_kddcup99\r\n- [x] sklearn.datasets.fetch_lfw_pairs\r\n- [x] sklearn.datasets.fetch_lfw_people\r\n- [x] sklearn.datasets.fetch_olivetti_faces\r\n- [x] sklearn.datasets.fetch_openml\r\n- [x] sklearn.datasets.fetch_rcv1\r\n- [x] sklearn.datasets.fetch_species_distributions\r\n- [x] sklearn.datasets.get_data_home\r\n- [x] sklearn.datasets.load_diabetes\r\n- [x] sklearn.datasets.load_files\r\n- [x] sklearn.datasets.load_linnerud\r\n- [x] sklearn.datasets.load_svmlight_files\r\n- [x] sklearn.datasets.make_biclusters\r\n- [x] sklearn.datasets.make_checkerboard\r\n- [x] sklearn.datasets.make_circles\r\n- [x] sklearn.datasets.make_classification\r\n- [x] sklearn.datasets.make_friedman1\r\n- [x] sklearn.datasets.make_friedman2\r\n- [x] sklearn.datasets.make_friedman3\r\n- [x] sklearn.datasets.make_gaussian_quantiles\r\n- [x] sklearn.datasets.make_hastie_10_2\r\n- [x] sklearn.datasets.make_low_rank_matrix\r\n- [x] sklearn.datasets.make_moons\r\n- [x] sklearn.datasets.make_multilabel_classification\r\n- [x] sklearn.datasets.make_s_curve\r\n- [x] sklearn.datasets.make_sparse_coded_signal\r\n- [x] sklearn.datasets.make_sparse_spd_matrix\r\n- [x] sklearn.datasets.make_sparse_uncorrelated\r\n- [x] sklearn.datasets.make_spd_matrix\r\n- [x] sklearn.datasets.make_swiss_roll\r\n- [x] sklearn.decomposition.dict_learning\r\n- [x] sklearn.decomposition.dict_learning_online\r\n- [x] sklearn.decomposition.sparse_encode\r\n- [x] sklearn.feature_extraction.image.grid_to_graph\r\n- [x] sklearn.feature_extraction.image.img_to_graph\r\n- [x] sklearn.feature_extraction.image.reconstruct_from_patches_2d\r\n- [x] sklearn.feature_selection.SelectorMixin\r\n- [x] sklearn.feature_selection.chi2\r\n- [x] sklearn.feature_selection.f_classif\r\n- [x] sklearn.feature_selection.f_regression\r\n- [x] sklearn.feature_selection.mutual_info_classif\r\n- [x] sklearn.feature_selection.mutual_info_regression\r\n- [x] sklearn.feature_selection.r_regression\r\n- [x] sklearn.gaussian_process.kernels.Kernel\r\n- 
[x] sklearn.get_config\r\n- [x] sklearn.isotonic.check_increasing\r\n- [x] sklearn.isotonic.isotonic_regression\r\n- [x] sklearn.linear_model.enet_path\r\n- [x] sklearn.linear_model.lars_path\r\n- [x] sklearn.linear_model.lars_path_gram\r\n- [x] sklearn.linear_model.orthogonal_mp\r\n- [x] sklearn.linear_model.orthogonal_mp_gram\r\n- [x] sklearn.linear_model.ridge_regression\r\n- [x] sklearn.manifold.locally_linear_embedding\r\n- [x] sklearn.manifold.smacof\r\n- [x] sklearn.manifold.spectral_embedding\r\n- [x] sklearn.manifold.trustworthiness\r\n- [x] sklearn.metrics.calinski_harabasz_score\r\n- [x] sklearn.metrics.check_scoring\r\n- [x] sklearn.metrics.cohen_kappa_score\r\n- [x] sklearn.metrics.consensus_score\r\n- [x] sklearn.metrics.coverage_error\r\n- [x] sklearn.metrics.davies_bouldin_score\r\n- [x] sklearn.metrics.get_scorer\r\n- [x] sklearn.metrics.get_scorer_names\r\n- [x] sklearn.metrics.homogeneity_completeness_v_measure\r\n- [x] sklearn.metrics.label_ranking_loss\r\n- [x] sklearn.metrics.mutual_info_score\r\n- [x] sklearn.metrics.pairwise.additive_chi2_kernel\r\n- [x] sklearn.metrics.pairwise.chi2_kernel\r\n- [x] sklearn.metrics.pairwise.cosine_distances\r\n- [x] sklearn.metrics.pairwise.cosine_similarity\r\n- [x] sklearn.metrics.pairwise.distance_metrics\r\n- [x] sklearn.metrics.pairwise.kernel_metrics\r\n- [x] sklearn.metrics.pairwise.laplacian_kernel\r\n- [x] sklearn.metrics.pairwise.linear_kernel\r\n- [x] sklearn.metrics.pairwise.paired_cosine_distances\r\n- [x] sklearn.metrics.pairwise.paired_euclidean_distances\r\n- [x] sklearn.metrics.pairwise.pairwise_kernels\r\n- [x] sklearn.metrics.pairwise.polynomial_kernel\r\n- [x] sklearn.metrics.pairwise.rbf_kernel\r\n- [x] sklearn.metrics.pairwise.sigmoid_kernel\r\n- [x] sklearn.metrics.pairwise_distances\r\n- [x] sklearn.metrics.pairwise_distances_argmin\r\n- [x] sklearn.metrics.pairwise_distances_argmin_min\r\n- [x] sklearn.metrics.silhouette_samples\r\n- [x] sklearn.metrics.silhouette_score\r\n- [x] sklearn.model_selection.check_cv\r\n- [x] sklearn.model_selection.permutation_test_score\r\n- [x] sklearn.model_selection.validation_curve\r\n- [x] sklearn.neighbors.sort_graph_by_row_values\r\n- [x] sklearn.preprocessing.binarize\r\n- [x] sklearn.preprocessing.maxabs_scale\r\n- [x] sklearn.preprocessing.minmax_scale\r\n- [x] sklearn.preprocessing.normalize\r\n- [x] sklearn.preprocessing.robust_scale\r\n- [x] sklearn.preprocessing.scale\r\n- [x] sklearn.set_config\r\n- [x] sklearn.show_versions\r\n- [x] sklearn.svm.l1_min_c\r\n- [x] sklearn.utils._safe_indexing\r\n- [x] sklearn.utils.arrayfuncs.min_pos\r\n- [x] sklearn.utils.as_float_array\r\n- [x] sklearn.utils.assert_all_finite\r\n- [x] sklearn.utils.check_X_y\r\n- [x] sklearn.utils.check_array\r\n- [x] sklearn.utils.check_consistent_length\r\n- [x] sklearn.utils.check_random_state\r\n- [x] sklearn.utils.check_scalar\r\n- [x] sklearn.utils.class_weight.compute_class_weight\r\n- [x] sklearn.utils.class_weight.compute_sample_weight\r\n- [x] sklearn.utils.deprecated\r\n- [x] sklearn.utils.discovery.all_displays\r\n- [x] sklearn.utils.discovery.all_estimators\r\n- [x] sklearn.utils.discovery.all_functions\r\n- [x] sklearn.utils.estimator_checks.check_estimator\r\n- [x] sklearn.utils.estimator_html_repr\r\n- [x] sklearn.utils.extmath.density\r\n- [x] sklearn.utils.extmath.randomized_range_finder\r\n- [x] sklearn.utils.extmath.safe_sparse_dot\r\n- [x] sklearn.utils.indexable\r\n- [x] sklearn.utils.metadata_routing.MetadataRequest\r\n- [x] 
sklearn.utils.metadata_routing.MetadataRouter\r\n- [x] sklearn.utils.metadata_routing.MethodMapping\r\n- [x] sklearn.utils.metadata_routing.get_routing_for_object\r\n- [x] sklearn.utils.metadata_routing.process_routing\r\n- [x] sklearn.utils.murmurhash3_32\r\n- [x] sklearn.utils.parallel.Parallel\r\n- [x] sklearn.utils.parallel.delayed\r\n- [x] sklearn.utils.parallel_backend\r\n- [x] sklearn.utils.random.sample_without_replacement\r\n- [x] sklearn.utils.register_parallel_backend\r\n- [x] sklearn.utils.safe_mask\r\n- [x] sklearn.utils.safe_sqr\r\n- [x] sklearn.utils.sparsefuncs.incr_mean_variance_axis\r\n- [x] sklearn.utils.sparsefuncs.inplace_column_scale\r\n- [x] sklearn.utils.sparsefuncs.inplace_csr_column_scale\r\n- [x] sklearn.utils.sparsefuncs.inplace_row_scale\r\n- [x] sklearn.utils.sparsefuncs.inplace_swap_column\r\n- [x] sklearn.utils.sparsefuncs.inplace_swap_row\r\n- [x] sklearn.utils.sparsefuncs.mean_variance_axis\r\n- [x] sklearn.utils.sparsefuncs_fast.inplace_csr_row_normalize_l1\r\n- [x] sklearn.utils.sparsefuncs_fast.inplace_csr_row_normalize_l2\r\n- [x] sklearn.utils.validation.check_is_fitted\r\n- [x] sklearn.utils.validation.check_memory\r\n- [x] sklearn.utils.validation.check_symmetric\r\n- [x] sklearn.utils.validation.column_or_1d\r\n\r\nThe code used to find the list above is detailed below:\r\n\r\n<details>\r\n\r\n```python\r\nimport importlib\r\nimport inspect\r\nfrom pathlib import Path\r\n\r\nfrom numpydoc.docscrape import NumpyDocString\r\n\r\npath_sklearn_doc = Path(\r\n \"/{path_to_git_repo}/scikit-learn/doc/_build/html/stable/\"\r\n \"modules/generated\"\r\n)\r\n\r\nmissing_examples_name = []\r\nfor document in path_sklearn_doc.glob(\"*.html\"):\r\n extracted_doc = []\r\n full_name = document.stem\r\n try:\r\n module_name, class_or_function_name = full_name.rsplit(\".\", maxsplit=1)\r\n module = importlib.import_module(module_name)\r\n class_or_function = getattr(module, class_or_function_name)\r\n except (ValueError, AttributeError, ImportError):\r\n # This is due to the experimental module and function with\r\n # module name\r\n continue\r\n is_class = inspect.isclass(class_or_function)\r\n docstring = NumpyDocString(class_or_function.__doc__)\r\n if not docstring[\"Examples\"]:\r\n missing_examples_name.append(full_name)\r\n\r\nfor full_name in sorted(missing_examples_name):\r\n print(f\"- [ ] {full_name}\")\r\n```\r\n\r\n</details>", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/28564", "file_loc": {"base_commit": "d967cfe8124902181892411b18b50dce9921a32d", "files": [{"path": "sklearn/datasets/_samples_generator.py", "status": "modified", "Loc": {"(None, 'make_low_rank_matrix', 1359)": {"add": [1413]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["sklearn/datasets/_samples_generator.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "e11c4d21a4579f0d49f414a4b76e386f80f0f074", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/19269", "iss_label": "New Feature\nmodule:datasets", "title": "sklearn.datasets.load_files select file extension", "body": "<!--\r\nIf you want to propose a new algorithm, please refer first to the scikit-learn\r\ninclusion 
criterion:\r\nhttps://scikit-learn.org/stable/faq.html#what-are-the-inclusion-criteria-for-new-algorithms\r\n-->\r\n\r\n#### Describe the workflow you want to enable\r\nWhen using load_files in a directory where there are different kinds of files (.txt, .png, ...), the user might want to load only certain files (*.txt for example). This feature would put load_files closer to the function `index_directory` from tensorflow.python.keras.preprocessing.dataset_utils.py. \r\n\r\n\r\nFor macOS users, .DS_Store files also get loaded, which is undesired behaviour.\r\n\r\n#### Describe your proposed solution\r\nAdd an argument to select the types of files to load.\r\n", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/22498", "file_loc": {"base_commit": "e11c4d21a4579f0d49f414a4b76e386f80f0f074", "files": [{"path": "doc/whats_new/v1.1.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [175]}}}, {"path": "sklearn/datasets/_base.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [13]}, "(None, 'load_files', 99)": {"add": [108, 145, 186, 214], "mod": [218]}}}, {"path": "sklearn/datasets/tests/test_base.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [131]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["sklearn/datasets/_base.py"], "doc": ["doc/whats_new/v1.1.rst"], "test": ["sklearn/datasets/tests/test_base.py"], "config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "cdd693bf955acd2a97cce48011d168c6b1ef316d", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/8364", "iss_label": "Easy\nDocumentation\nSprint", "title": "Matplotlib update on CI makes examples look different", "body": "The examples look different on the current dev website, in particular the classifier comparison that's on the landing page looks a bit odd now:\r\nhttp://scikit-learn.org/dev/auto_examples/classification/plot_classifier_comparison.html\r\n\r\nI suspect the culprit is the CI upgrading to matplotlib v2. 
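While reviewing, it helps to confirm which defaults are active and to re-render an example under the pre-2.0 look for comparison. A sketch (the `classic` style ships with matplotlib itself):

```python
import matplotlib
import matplotlib.pyplot as plt

print(matplotlib.__version__)  # confirm whether the v2 defaults apply

# Opt back into the pre-2.0 defaults before re-running an example.
plt.style.use("classic")
```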
I think we should go through the examples and see how they are holding up with the new styles.", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/8516 https://github.com/scikit-learn/scikit-learn/pull/8369", "file_loc": {"base_commit": "676e8630243b894aa2976ef6fb6048f9880b8a23", "files": [{"path": "examples/svm/plot_separating_hyperplane.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [15, 20], "mod": [18, 19, 25, 26, 27, 28, 29, 31, 32, 33, 34, 35, 36, 38, 39, 40, 41, 43, 44, 45]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["examples/svm/plot_separating_hyperplane.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "839b356f45fac7724eab739dcc129a0c8f650a23", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/15005", "iss_label": "API", "title": "Implement SLEP009: keyword-only arguments", "body": "[SLEP009](https://scikit-learn-enhancement-proposals.readthedocs.io/en/latest/slep009/proposal.html) is all but accepted.\r\n\r\nIt proposes to make most parameters keyword-only.\r\n\r\nWe should do this by first:\r\n* [x] Merging #13311 \r\n* [x] Perhaps getting some stats on usage of positional arguments as per https://github.com/scikit-learn/enhancement_proposals/pull/19#issuecomment-514671933\r\n* [ ] applying the deprecation to each subpackage. Checked means PR opened at least.\r\n * [x] base\r\n * [x] calibration\r\n * [x] cluster\r\n * [x] compose\r\n * [x] covariance\r\n * [x] cross_decomposition\r\n * [x] datasets\r\n * [x] decomposition\r\n * [x] discriminant_analysis\r\n * [x] dummy\r\n * [x] ensemble\r\n * [x] feature_extraction\r\n * [x] feature_selection\r\n * [x] gaussian_process\r\n * [x] impute\r\n * [x] inspection\r\n * [x] isotonic\r\n * [x] kernel_approximation\r\n * [x] kernel_ridge\r\n * [x] linear_model\r\n * [x] manifold\r\n * [x] metrics\r\n * [x] metrics.pairwise\r\n * [x] mixture\r\n * [x] model_selection\r\n * [x] multiclass\r\n * [x] multioutput\r\n * [x] naive_bayes\r\n * [x] neighbors\r\n * [x] neural_network\r\n * [x] pipeline\r\n * [x] preprocessing\r\n * [x] random_projection\r\n * [x] semi_supervised\r\n * [x] svm\r\n * [x] tree\r\n * [x] utils\r\n\r\n\r\nWe might along the way establish rules of thumb and principles like \"are the semantics reasonably clear when the argument is passed positionally?\" As I noted on the mailing list, I think they are clear for PCA's components, for Pipeline's steps, and for GridSearchCV's estimator and parameter grid. Other parameters of those estimators seem more suitable for keyword-only. Trickier is whether n_components in TSNE should follow PCA in being positional... It's not as commonly set by users. 
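Python enforces keyword-only parameters with a bare `*` in the signature; the transition needs a wrapper that still accepts positional use but warns. A simplified sketch of that pattern (scikit-learn's own helper, `_deprecate_positional_args`, is more thorough; `make_blobs` here is only a stand-in signature):

```python
import functools
import warnings
from inspect import Parameter, signature


def deprecate_positional(func):
    """Let keyword-only parameters be passed positionally for one release,
    emitting a FutureWarning instead of an immediate TypeError."""
    sig = signature(func)
    kwonly = [name for name, p in sig.parameters.items()
              if p.kind == Parameter.KEYWORD_ONLY]
    n_positional = len(sig.parameters) - len(kwonly)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        extra = len(args) - n_positional
        if extra > 0:
            warnings.warn(
                f"Pass {', '.join(kwonly[:extra])} as keyword arguments; "
                "positional use is deprecated.", FutureWarning)
            # Re-route the surplus positional arguments to their keywords.
            kwargs.update(zip(kwonly, args[n_positional:]))
            args = args[:n_positional]
        return func(*args, **kwargs)
    return wrapper


@deprecate_positional
def make_blobs(n_samples=100, n_features=2, *, centers=None, random_state=None):
    return n_samples, n_features, centers, random_state


make_blobs(100, 2, 3)  # warns: pass `centers` as a keyword argument
```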
", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/17007 https://github.com/scikit-learn/scikit-learn/pull/17046 https://github.com/scikit-learn/scikit-learn/pull/17006 https://github.com/scikit-learn/scikit-learn/pull/17005 https://github.com/scikit-learn/scikit-learn/pull/13311 https://github.com/scikit-learn/scikit-learn/pull/16719", "file_loc": {"base_commit": "839b356f45fac7724eab739dcc129a0c8f650a23", "files": [{"path": "sklearn/datasets/_base.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [19]}, "(None, 'load_files', 83)": {"mod": [83]}, "(None, 'load_wine', 270)": {"mod": [270]}, "(None, 'load_iris', 384)": {"mod": [384]}, "(None, 'load_breast_cancer', 498)": {"mod": [498]}, "(None, 'load_digits', 622)": {"mod": [622]}, "(None, 'load_diabetes', 745)": {"mod": [745]}, "(None, 'load_linnerud', 837)": {"mod": [837]}, "(None, 'load_boston', 940)": {"mod": [940]}}}, {"path": "sklearn/datasets/_california_housing.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [38]}, "(None, 'fetch_california_housing', 51)": {"mod": [51]}}}, {"path": "sklearn/datasets/_covtype.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [30]}, "(None, 'fetch_covtype', 43)": {"mod": [43]}}}, {"path": "sklearn/datasets/_kddcup99.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [25]}, "(None, 'fetch_kddcup99', 46)": {"mod": [46]}}}, {"path": "sklearn/datasets/_lfw.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [22]}, "(None, 'fetch_lfw_people', 218)": {"mod": [218]}, "(None, 'fetch_lfw_pairs', 388)": {"mod": [388]}}}, {"path": "sklearn/datasets/_olivetti_faces.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [27]}, "(None, 'fetch_olivetti_faces', 38)": {"mod": [38]}}}, {"path": "sklearn/datasets/_openml.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [25]}, "(None, 'fetch_openml', 611)": {"mod": [611]}}}, {"path": "sklearn/datasets/_rcv1.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [27]}, "(None, 'fetch_rcv1', 78)": {"mod": [78]}}}, {"path": "sklearn/datasets/_samples_generator.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [20]}, "(None, 'make_classification', 36)": {"mod": [36]}, "(None, 'make_multilabel_classification', 264)": {"mod": [264]}, "(None, 'make_hastie_10_2', 425)": {"mod": [425]}, "(None, 'make_regression', 473)": {"mod": [473]}, "(None, 'make_circles', 595)": {"mod": [595]}, "(None, 'make_moons', 671)": {"mod": [671]}, "(None, 'make_blobs', 734)": {"mod": [734]}, "(None, 'make_friedman1', 892)": {"mod": [892]}, "(None, 'make_friedman2', 954)": {"mod": [954]}, "(None, 'make_friedman3', 1019)": {"mod": [1019]}, "(None, 'make_low_rank_matrix', 1083)": {"mod": [1083]}, "(None, 'make_sparse_coded_signal', 1152)": {"mod": [1152]}, "(None, 'make_sparse_uncorrelated', 1214)": {"mod": [1214]}, "(None, 'make_spd_matrix', 1265)": {"mod": [1265]}, "(None, 'make_sparse_spd_matrix', 1298)": {"mod": [1298]}, "(None, 'make_swiss_roll', 1372)": {"mod": [1372]}, "(None, 'make_s_curve', 1424)": {"mod": [1424]}, "(None, 'make_gaussian_quantiles', 1466)": {"mod": [1466]}, "(None, 'make_biclusters', 1561)": {"mod": [1561]}, "(None, 'make_checkerboard', 1652)": {"mod": [1652]}}}, {"path": "sklearn/datasets/_species_distributions.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [52]}, "(None, 'fetch_species_distributions', 140)": {"mod": [140]}}}, {"path": "sklearn/datasets/_svmlight_format_io.py", "status": "modified", "Loc": {"(None, 
None, None)": {"add": [27]}, "(None, 'load_svmlight_file', 40)": {"mod": [40, 154, 155]}, "(None, 'load_svmlight_files', 199)": {"mod": [199]}, "(None, 'dump_svmlight_file', 383)": {"mod": [383]}}}, {"path": "sklearn/datasets/_twenty_newsgroups.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [47]}, "(None, 'fetch_20newsgroups', 149)": {"mod": [149]}, "(None, 'fetch_20newsgroups_vectorized', 325)": {"mod": [325]}}}, {"path": "sklearn/datasets/tests/test_base.py", "status": "modified", "Loc": {"(None, 'test_load_digits_n_class_lt_10', 154)": {"mod": [155]}}}, {"path": "sklearn/linear_model/tests/test_omp.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [21, 22]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["sklearn/datasets/_openml.py", "sklearn/datasets/_california_housing.py", "sklearn/datasets/_base.py", "sklearn/datasets/_covtype.py", "sklearn/datasets/_twenty_newsgroups.py", "sklearn/datasets/_olivetti_faces.py", "sklearn/datasets/_samples_generator.py", "sklearn/datasets/_species_distributions.py", "sklearn/datasets/_svmlight_format_io.py", "sklearn/datasets/_rcv1.py", "sklearn/datasets/_kddcup99.py", "sklearn/datasets/_lfw.py"], "doc": [], "test": ["sklearn/linear_model/tests/test_omp.py", "sklearn/datasets/tests/test_base.py"], "config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "62d205980446a1abc1065f4332fd74eee57fcf73", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/12779", "iss_label": "Easy\ngood first issue", "title": "Remove \"from __future__ import XXX\"", "body": "Given #12746, I think we should remove ``from __future__ import XXX``, right? @adrinjalali \r\n```\r\n$ git grep \"from __future__ import\" | wc -l\r\n147\r\n```", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/13079", "file_loc": {"base_commit": "62d205980446a1abc1065f4332fd74eee57fcf73", "files": [{"path": "sklearn/utils/_random.pyx", "status": "modified", "Loc": {"(None, None, None)": {"add": [0], "mod": [16]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["sklearn/utils/_random.pyx"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "45019594938f92f3344c80bb0d351793dd91334b", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/12306", "iss_label": "module:impute", "title": "SimpleImputer to Crash on Constant Imputation with string value when dataset is encoded Numerically", "body": "#### Description\r\nThe title kind of describes it. It might be pretty logical, but just putting it out here as it took a while for me to realize and debug what exactly happened. \r\n\r\nThe SimpleImputer has the ability to impute missing values with a constant. If the data is categorical, it is possible to impute with a string value. However, when fetching a dataset from OpenML (or many other datasets from different sources) the data is encoded numerically automatically as numeric. When applying the SimpleImputer and a string value, scikit-learn crashes. 
I assume there's not a lot that can be done about this, as everything behaves exactly as you would expect when you dive deep into the code, but maybe the documentation can be extended a little bit (probably on the SimpleImputer side, or maybe on the side of the data sources). \r\n\r\nWhat do you think?\r\n\r\n#### Steps/Code to Reproduce\r\n```\r\nimport numpy as np\r\nimport sklearn.datasets\r\nimport sklearn.compose\r\nimport sklearn.tree\r\nimport sklearn.impute\r\nimport sklearn.pipeline\r\nimport sklearn.preprocessing\r\n\r\nX, y = sklearn.datasets.fetch_openml('Australian', 4, return_X_y=True)\r\n\r\nnumeric_imputer = sklearn.impute.SimpleImputer(strategy='mean')\r\nnumeric_scaler = sklearn.preprocessing.StandardScaler()\r\n\r\nnominal_imputer = sklearn.impute.SimpleImputer(strategy='constant', fill_value='missing')\r\nnominal_encoder = sklearn.preprocessing.OneHotEncoder(handle_unknown='ignore')\r\n\r\nnumeric_idx = [1, 2, 7, 10, 13]\r\nnominal_idx = [0, 3, 4, 5, 6, 8, 9, 11, 12]\r\n\r\nprint('missing numeric vals:', np.count_nonzero(~np.isnan(X[:, numeric_idx])))\r\nprint('missing nominal vals:', np.count_nonzero(~np.isnan(X[:, nominal_idx])))\r\n\r\n\r\nclf_nom = sklearn.pipeline.make_pipeline(nominal_imputer, nominal_encoder)\r\nclf_nom.fit(X[:, nominal_idx], y)\r\n```\r\n\r\n#### Expected Results\r\nA fitted classifier? Depending on how you write the documentation, the current error could also be the expected result. \r\n\r\n#### Actual Results\r\n```\r\nmissing numeric vals: 3450\r\nmissing nominal vals: 6210\r\nTraceback (most recent call last):\r\n File \"/home/janvanrijn/projects/sklearn-bot/testjan.py\", line 23, in <module>\r\n clf_nom.fit(X[:, nominal_idx], y)\r\n File \"/home/janvanrijn/anaconda3/envs/sklearn-bot/lib/python3.6/site-packages/sklearn/pipeline.py\", line 265, in fit\r\n Xt, fit_params = self._fit(X, y, **fit_params)\r\n File \"/home/janvanrijn/anaconda3/envs/sklearn-bot/lib/python3.6/site-packages/sklearn/pipeline.py\", line 230, in _fit\r\n **fit_params_steps[name])\r\n File \"/home/janvanrijn/anaconda3/envs/sklearn-bot/lib/python3.6/site-packages/sklearn/externals/joblib/memory.py\", line 329, in __call__\r\n return self.func(*args, **kwargs)\r\n File \"/home/janvanrijn/anaconda3/envs/sklearn-bot/lib/python3.6/site-packages/sklearn/pipeline.py\", line 614, in _fit_transform_one\r\n res = transformer.fit_transform(X, y, **fit_params)\r\n File \"/home/janvanrijn/anaconda3/envs/sklearn-bot/lib/python3.6/site-packages/sklearn/base.py\", line 465, in fit_transform\r\n return self.fit(X, y, **fit_params).transform(X)\r\n File \"/home/janvanrijn/anaconda3/envs/sklearn-bot/lib/python3.6/site-packages/sklearn/impute.py\", line 241, in fit\r\n \"data\".format(fill_value))\r\nValueError: 'fill_value'=missing is invalid. 
Expected a numerical value when imputing numerical data\r\n```\r\n\r\n#### Versions\r\n```\r\nPython=3.6.0\r\nnumpy==1.15.2\r\nscikit-learn==0.20.0\r\nscipy==1.1.0\r\n```\r\n", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/25081", "file_loc": {"base_commit": "45019594938f92f3344c80bb0d351793dd91334b", "files": [{"path": "sklearn/impute/_base.py", "status": "modified", "Loc": {"('SimpleImputer', None, 142)": {"mod": [179, 180, 181]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["sklearn/impute/_base.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "5ad3421a5b5759ecfaaab93406592d988f5d487f", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/16556", "iss_label": "New Feature\nmodule:ensemble", "title": "Add Pre-fit Model to Stacking Model", "body": "<!--\r\nIf you want to propose a new algorithm, please refer first to the scikit-learn\r\ninclusion criterion:\r\nhttps://scikit-learn.org/stable/faq.html#what-are-the-inclusion-criteria-for-new-algorithms\r\n-->\r\n\r\n#### Describe the workflow you want to enable\r\n\r\nAllow passing pre-fit models to stacking models such as `StackingClassifier` and `StackingRegressor` so that the final estimator can use their predictions directly, without fitting the models on the given training data. \r\n\r\nThe motivation for this functionality originates from situations in which it is not possible to fit a model on the entire dataset (due to compliance or other non-technical restrictions), or simply from a research question that requires testing different models trained on different data. I feel this added flexibility could be beneficial in the long term. \r\n\r\n#### Describe your proposed solution\r\n\r\nOne possible solution I have in mind is to exclude fitted estimators during fitting. We can iterate through the list of estimators and see if they have been fitted (for which sklearn already has helper functions). If yes, we skip them when fitting the estimators. \r\n\r\n#### Describe alternatives you've considered, if relevant\r\n\r\nAnother option I thought about was to ask users to specify if they want to exclude fitting certain estimators. But in this case, I feel it is safer to check the estimators' status regardless, which makes the manual input somewhat redundant. 
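The interface this later took, as of scikit-learn 1.1, is `cv="prefit"` on the stacking estimators. A sketch of the resulting workflow:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)
X_base, X_final, y_base, y_final = train_test_split(X, y, random_state=0)

# Base estimators fitted up front, e.g. on data the stacker never sees.
estimators = [
    ("lr", LogisticRegression().fit(X_base, y_base)),
    ("dt", DecisionTreeClassifier(random_state=0).fit(X_base, y_base)),
]

# cv="prefit" skips refitting the base estimators; only the final
# estimator is trained, on their predictions over (X_final, y_final).
stack = StackingClassifier(estimators=estimators, cv="prefit")
stack.fit(X_final, y_final)
```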
\r\n\r\n", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/22215", "file_loc": {"base_commit": "5ad3421a5b5759ecfaaab93406592d988f5d487f", "files": [{"path": "doc/whats_new/v1.1.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [372]}}}, {"path": "sklearn/ensemble/_stacking.py", "status": "modified", "Loc": {"('StackingClassifier', None, 281)": {"add": [328], "mod": [309, 317, 366, 368]}, "('StackingRegressor', None, 579)": {"add": [614, 625], "mod": [606, 649, 651]}, "('_BaseStacking', 'fit', 123)": {"mod": [155, 156, 157, 158, 159, 160, 161, 162, 176, 177, 179, 180, 181, 182, 183, 184, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 204, 205, 206]}}}, {"path": "sklearn/ensemble/tests/test_stacking.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [45, 532]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["sklearn/ensemble/_stacking.py"], "doc": ["doc/whats_new/v1.1.rst"], "test": ["sklearn/ensemble/tests/test_stacking.py"], "config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "9b2aac9e5c8749243c73f2377519d2f2c407b095", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/7603", "iss_label": "", "title": "When min_samples_split and min_samples_leaf are greater than or equal to 1.0 and 0.5, no error is thrown.", "body": "<!-- Instructions For Filing a Bug: https://github.com/scikit-learn/scikit-learn/blob/master/CONTRIBUTING.md#filing-bugs -->\n#### Description\n\nThis is a silent bug in version 0.18.0, as a result of the following change: \"Random forest, extra trees, decision trees and gradient boosting estimator accept the parameter min_samples_split and min_samples_leaf provided as a percentage of the training samples. By yelite and Arnaud Joly.\"\n\nThe bug is that no error is thrown when large float values are passed. In theory, it would be useless to set `min_samples_split` to 1.0 or more, or `min_samples_leaf` to 0.5 or more. For example, `min_samples_split=2` gives a very different result compared with `min_samples_split=2.0`. In this case, accidentally setting `min_samples_split=2.0` in 0.18.0 would produce a tree with no splits. In this example, the error would be completely silent, and difficult to debug. 
This would probably be an unexpected outcome, especially for users coming from version 0.17.1, where these two values (`2.0` and `2`) would behave identically.\n#### Steps/Code to Reproduce\n\nExample:\n\n```\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn.datasets import load_iris\n\niris = load_iris()\nX, y = iris.data[:, [0,1,2]], iris.target\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.33)\n\nrf = RandomForestClassifier(n_estimators=5, min_samples_leaf=3.0)\nrf.fit(X_train, y_train)\nprint \"rf score %s\" % rf.score(X_test, y_test)\n```\n#### Expected Results\n\nThe RandomForestClassifier scores in the ~0.9 range in 0.17.1, and I believe an error should be thrown in 0.18.0.\n#### Actual Results\n\nThe RandomForestClassifier scores in the ~0.3 range in 0.18.0, with no error thrown.\n#### Versions\n\n```\nDarwin-15.6.0-x86_64-i386-64bit\n('Python', '2.7.11 (default, Jan 22 2016, 08:29:18) \\n[GCC 4.2.1 Compatible Apple LLVM 7.0.2 (clang-700.1.81)]')\n('NumPy', '1.11.2')\n('SciPy', '0.17.0')\n('Scikit-Learn', '0.18')\n```\n", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/7604", "file_loc": {"base_commit": "9b2aac9e5c8749243c73f2377519d2f2c407b095", "files": [{"path": "sklearn/tree/tests/test_tree.py", "status": "modified", "Loc": {"(None, 'test_error', 496)": {"add": [511, 523]}}}, {"path": "sklearn/tree/tree.py", "status": "modified", "Loc": {"('BaseDecisionTree', 'fit', 117)": {"add": [218, 220, 223, 225], "mod": [261, 262, 263, 264, 265, 266, 267, 268, 312, 313, 376]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["sklearn/tree/tree.py"], "doc": [], "test": ["sklearn/tree/tests/test_tree.py"], "config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "86476582a3759b82fd163d27522bd2de6ad95b6c", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/11568", "iss_label": "", "title": "TST: optics function is not tested", "body": "Related to https://github.com/scikit-learn/scikit-learn/pull/1984 that was merged: it seems that the `optics` function (that @amueller added to the `cluster/__init__.py` in https://github.com/scikit-learn/scikit-learn/pull/11567) is not tested (at least not in `test_optics.py`)\r\n\r\n(so the function `optics` that wraps the `OPTICS` class)", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/13271", "file_loc": {"base_commit": "86476582a3759b82fd163d27522bd2de6ad95b6c", "files": [{"path": "doc/modules/classes.rst", "status": "modified", "Loc": {"(None, None, None)": {"mod": [117]}}}, {"path": "sklearn/cluster/__init__.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [14, 35]}}}, {"path": "sklearn/cluster/dbscan_.py", "status": "modified", "Loc": {"(None, 'dbscan', 23)": {"mod": [98, 99, 100]}}}, {"path": "sklearn/cluster/optics_.py", "status": "modified", "Loc": {"(None, 'optics', 24)": {"mod": [24, 25, 26, 27, 28, 29, 30, 32, 33, 34, 35, 36, 38, 39, 40, 41, 42, 44, 46, 47, 48, 49, 51, 52, 53, 55, 56, 57, 58, 59, 61, 62, 63, 65, 66, 67, 68, 69, 71, 73, 75, 76, 78, 79, 80, 81, 82, 84, 85, 87, 88, 89, 90, 91, 93, 94, 96, 97, 98, 99, 100, 102, 103, 104, 105, 106, 107, 109, 110, 111, 112, 113, 114, 115, 116, 117, 119, 120, 122, 123, 124, 125, 127, 128, 129, 130, 132, 133, 
135, 136, 137, 138, 139, 141, 142, 144, 145, 146, 147, 148, 150, 151, 152, 153, 154, 156, 157, 158, 159, 161, 162, 164, 165, 166, 167, 168, 169, 170, 172, 173, 174, 175, 176, 177, 179, 180, 181, 182, 183, 184, 185]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["sklearn/cluster/__init__.py", "sklearn/cluster/optics_.py", "sklearn/cluster/dbscan_.py"], "doc": ["doc/modules/classes.rst"], "test": [], "config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "ebf2bf81075ae1f4eb47ea0f54981c512bda5ceb", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/5022", "iss_label": "", "title": "Deprecate n_iter in SGDClassifier and implement max_iter.", "body": "We should implement a stopping condition based on the scaled norm of the parameter update as done in the new SAG solver for LogisticRegression / Ridge. The convergence check should be done at the end of the each epoch to avoid introducing too much overhead.\n\nOther classes sharing the same underlying implementation should be updated as well, e.g.:\n- SGDRegressor\n- PassiveAggressiveClassifier\n- Perceptron\n\nmaybe others.\n\nWe should store the effective number of iterations in a new `n_iter_` attribute on the estimator at the end of `fit` as done in many other scikit-learn model that accept a `max_iter` hyperparam.\n", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/5036", "file_loc": {"base_commit": "ebf2bf81075ae1f4eb47ea0f54981c512bda5ceb", "files": [{"path": "benchmarks/bench_covertype.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [105]}}}, {"path": "benchmarks/bench_sgd_regression.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [23, 27], "mod": [1, 2, 4, 5, 6, 8, 78, 80, 81, 90, 92, 94, 95, 96]}}}, {"path": "benchmarks/bench_sparsify.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [66, 75, 76, 80]}}}, {"path": "doc/modules/kernel_approximation.rst", "status": "modified", "Loc": {"(None, None, None)": {"mod": [66, 67, 68]}}}, {"path": "doc/modules/linear_model.rst", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1268]}}}, {"path": "doc/modules/sgd.rst", "status": "modified", "Loc": {"(None, None, None)": {"mod": [66, 67, 68]}}}, {"path": "doc/tutorial/text_analytics/working_with_text_data.rst", "status": "modified", "Loc": {"(None, None, None)": {"mod": [355]}}}, {"path": "doc/whats_new.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [147]}}}, {"path": "examples/linear_model/plot_sgd_iris.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [41]}}}, {"path": "examples/linear_model/plot_sgd_separating_hyperplane.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [21]}}}, {"path": "examples/linear_model/plot_sgd_weighted_samples.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [30, 37]}}}, {"path": "sklearn/decomposition/tests/test_kernel_pca.py", "status": "modified", "Loc": {"(None, 'test_gridsearch_pipeline', 175)": {"mod": [181]}, "(None, 'test_gridsearch_pipeline_precomputed', 188)": {"mod": [194, 195]}, "(None, 'test_nested_circles', 202)": {"mod": [208, 221]}}}, {"path": "sklearn/ensemble/tests/test_bagging.py", "status": "modified", "Loc": {"(None, 'test_classification', 55)": {"mod": [68]}, "(None, 'test_base_estimator', 501)": {"mod": 
[522]}}}, {"path": "sklearn/ensemble/tests/test_base.py", "status": "modified", "Loc": {"(None, 'test_base', 25)": {"mod": [27, 28, 49]}, "(None, 'test_base_zero_n_estimators', 54)": {"mod": [57]}, "(None, 'test_base_not_int_n_estimators', 65)": {"mod": [68, 74]}, "(None, 'test_set_random_states', 81)": {"mod": [85, 94]}, "(None, 'make_steps', 100)": {"mod": [101, 102]}}}, {"path": "sklearn/feature_selection/tests/test_from_model.py", "status": "modified", "Loc": {"(None, 'test_invalid_input', 26)": {"mod": [27]}, "(None, 'test_input_estimator_unchanged', 34)": {"mod": [35, 36, 37]}, "(None, 'test_partial_fit', 108)": {"mod": [109]}, "(None, 'test_prefit', 137)": {"mod": [138, 139, 140, 143]}, "(None, 'test_threshold_without_refitting', 175)": {"mod": [176, 177]}}}, {"path": "sklearn/linear_model/passive_aggressive.py", "status": "modified", "Loc": {"('PassiveAggressiveClassifier', None, 9)": {"add": [85], "mod": [26, 100, 101, 102]}, "('PassiveAggressiveRegressor', None, 184)": {"add": [247], "mod": [205, 260, 261]}, "('PassiveAggressiveClassifier', '__init__', 100)": {"mod": [106, 114]}, "('PassiveAggressiveClassifier', 'partial_fit', 118)": {"mod": [153]}, "('PassiveAggressiveRegressor', '__init__', 260)": {"mod": [263, 270, 275]}, "('PassiveAggressiveRegressor', 'partial_fit', 279)": {"mod": [297]}}}, {"path": "sklearn/linear_model/perceptron.py", "status": "modified", "Loc": {"('Perceptron', None, 7)": {"add": [73], "mod": [28]}, "('Perceptron', '__init__', 91)": {"mod": [92, 93, 98, 107]}}}, {"path": "sklearn/linear_model/sgd_fast.pyx", "status": "modified", "Loc": {"(None, None, None)": {"add": [19, 401, 508, 563, 573, 692], "mod": [338, 366, 367, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 433, 466, 467, 519, 538, 599, 615, 616, 680, 681, 683, 700]}}}, {"path": "sklearn/linear_model/stochastic_gradient.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [7, 19]}, "('BaseSGD', '__init__', 47)": {"add": [68], "mod": [48, 49, 51, 60]}, "(None, 'fit_binary', 238)": {"add": [259], "mod": [238, 263, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 288]}, "('BaseSGDClassifier', '_fit', 378)": {"add": [410], "mod": [408]}, "('BaseSGDClassifier', '_fit_multiclass', 438)": {"add": [454], "mod": [439, 449, 450, 453, 456]}, "('SGDClassifier', None, 542)": {"add": [679], "mod": [602, 603, 604, 692, 693, 694, 695]}, "('BaseSGDRegressor', '_fit_regressor', 1008)": {"add": [1023], "mod": [1009, 1026, 1036, 1048, 1058, 1066, 1076]}, "('SGDRegressor', None, 1080)": {"add": [1196], "mod": [1131, 1132, 1133, 1209, 1210, 1211]}, "('BaseSGD', '_validate_params', 80)": {"mod": [84, 85]}, "('BaseSGDClassifier', None, 291)": {"mod": [308, 309, 310, 311, 312]}, "('BaseSGDClassifier', '__init__', 308)": {"mod": [317, 324]}, "('BaseSGDClassifier', '_partial_fit', 334)": {"mod": [335, 367, 371]}, "('BaseSGDClassifier', '_fit_binary', 413)": {"mod": [414, 416, 417, 418, 419, 420, 422]}, "('BaseSGDClassifier', 'partial_fit', 467)": {"mod": [504]}, "('SGDClassifier', '__init__', 705)": {"mod": [706, 707, 708, 709, 712, 713, 716]}, "('BaseSGDRegressor', '__init__', 845)": {"mod": [846, 847, 848, 849, 853, 860]}, "('BaseSGDRegressor', '_partial_fit', 862)": {"mod": [863, 864, 890]}, "('BaseSGDRegressor', 'partial_fit', 894)": {"mod": [915, 916, 917]}, "('BaseSGDRegressor', '_fit', 919)": {"mod": [939, 940, 941]}, "('SGDRegressor', '__init__', 1218)": {"mod": [1219, 1220, 1221, 1222, 1226, 1233]}}}, {"path": 
"sklearn/linear_model/tests/test_huber.py", "status": "modified", "Loc": {"(None, 'test_huber_scaling_invariant', 120)": {"mod": [121, 122]}, "(None, 'test_huber_and_sgd_same_results', 138)": {"mod": [139, 154, 155]}}}, {"path": "sklearn/linear_model/tests/test_passive_aggressive.py", "status": "modified", "Loc": {"(None, 'test_classifier_accuracy', 70)": {"mod": [74, 75, 76, 77]}, "(None, 'test_classifier_partial_fit', 88)": {"mod": [92, 93, 94, 95]}, "(None, 'test_classifier_refit', 107)": {"mod": [109]}, "(None, 'test_classifier_correctness', 116)": {"mod": [122, 123, 124, 125, 129, 130, 131, 132]}, "(None, 'test_classifier_undefined_methods', 138)": {"mod": [139]}, "(None, 'test_class_weights', 144)": {"mod": [150, 156]}, "(None, 'test_partial_fit_weight_class_balanced', 166)": {"mod": [168]}, "(None, 'test_equal_class_weight', 172)": {"mod": [175, 179, 180, 183, 184]}, "(None, 'test_wrong_class_weight_label', 192)": {"mod": [198]}, "(None, 'test_wrong_class_weight_format', 202)": {"mod": [208, 211]}, "(None, 'test_regressor_mse', 215)": {"mod": [222, 223, 224, 225]}, "(None, 'test_regressor_partial_fit', 236)": {"mod": [242, 243, 244, 245]}, "(None, 'test_regressor_correctness', 257)": {"mod": [262, 263, 264, 265, 269, 270, 271, 272]}, "(None, 'test_regressor_undefined_methods', 278)": {"mod": [279]}}}, {"path": "sklearn/linear_model/tests/test_perceptron.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [5]}, "(None, 'test_perceptron_accuracy', 46)": {"mod": [48, 51]}, "(None, 'test_perceptron_correctness', 54)": {"mod": [61]}, "(None, 'test_undefined_methods', 67)": {"mod": [68]}}}, {"path": "sklearn/linear_model/tests/test_sgd.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [16, 22, 1166]}, "('CommonTest', 'factory', 103)": {"add": [105]}, "('CommonTest', '_test_warm_start', 144)": {"mod": [146, 150, 157]}, "('CommonTest', 'test_input_format', 179)": {"mod": [181, 182]}, "('CommonTest', 'test_clone', 189)": {"mod": [191, 196]}, "('CommonTest', 'test_late_onset_averaging_reached', 232)": {"mod": [241, 244]}, "('DenseSGDClassifierTestCase', 'test_sgd', 270)": {"mod": [275]}, "('DenseSGDClassifierTestCase', None, 266)": {"mod": [311]}, "('DenseSGDClassifierTestCase', 'test_sgd_n_iter_param', 311)": {"mod": [313]}, "('DenseSGDClassifierTestCase', 'test_average_binary_computed_correctly', 342)": {"mod": [356]}, "('DenseSGDClassifierTestCase', 'test_sgd_at_least_two_labels', 380)": {"mod": [382]}, "('DenseSGDClassifierTestCase', 'test_sgd_multiclass', 398)": {"mod": [400]}, "('DenseSGDClassifierTestCase', 'test_sgd_multiclass_average', 407)": {"mod": [415]}, "('DenseSGDClassifierTestCase', 'test_sgd_multiclass_with_init_coef', 430)": {"mod": [432]}, "('DenseSGDClassifierTestCase', 'test_sgd_multiclass_njobs', 440)": {"mod": [442]}, "('DenseSGDClassifierTestCase', 'test_sgd_proba', 467)": {"mod": [473, 480, 493, 516]}, "('DenseSGDClassifierTestCase', 'test_sgd_l1', 534)": {"mod": [545]}, "('DenseSGDClassifierTestCase', 'test_class_weights', 563)": {"mod": [569, 575]}, "('DenseSGDClassifierTestCase', 'test_equal_class_weight', 583)": {"mod": [587, 592]}, "('DenseSGDClassifierTestCase', 'test_wrong_class_weight_label', 600)": {"mod": [602]}, "('DenseSGDClassifierTestCase', 'test_wrong_class_weight_format', 606)": {"mod": [608]}, "('DenseSGDClassifierTestCase', 'test_weights_multiplied', 611)": {"mod": [620, 621]}, "('DenseSGDClassifierTestCase', 'test_balanced_weight', 628)": {"mod": [639, 641, 642, 645, 648, 649, 663, 669, 670, 671, 672, 674, 675]}, 
"('DenseSGDClassifierTestCase', 'test_sample_weights', 680)": {"mod": [686]}, "('DenseSGDClassifierTestCase', 'test_wrong_sample_weights', 698)": {"mod": [700]}, "('DenseSGDClassifierTestCase', '_test_partial_fit_equal_fit', 766)": {"mod": [768]}, "('DenseSGDClassifierTestCase', 'test_multiple_fit', 816)": {"mod": [818, 819]}, "('DenseSGDRegressorTestCase', 'test_sgd', 842)": {"mod": [844]}, "('DenseSGDRegressorTestCase', 'test_sgd_averaged_computed_correctly', 859)": {"mod": [877]}, "('DenseSGDRegressorTestCase', 'test_sgd_averaged_partial_fit', 887)": {"mod": [904]}, "('DenseSGDRegressorTestCase', 'test_average_sparse', 915)": {"mod": [924]}, "('DenseSGDRegressorTestCase', 'test_sgd_least_squares_fit', 937)": {"mod": [946, 955]}, "('DenseSGDRegressorTestCase', 'test_sgd_epsilon_insensitive', 961)": {"mod": [971, 981]}, "('DenseSGDRegressorTestCase', 'test_sgd_huber_fit', 987)": {"mod": [996, 1005]}, "('DenseSGDRegressorTestCase', 'test_elasticnet_convergence', 1011)": {"mod": [1028]}, "('DenseSGDRegressorTestCase', '_test_partial_fit_equal_fit', 1054)": {"mod": [1055]}, "(None, 'test_l1_ratio', 1091)": {"mod": [1098, 1099, 1100, 1104, 1105, 1106]}, "(None, 'test_underflow_or_overlow', 1110)": {"mod": [1132]}, "(None, 'test_numerical_stability_large_gradient', 1145)": {"mod": [1148, 1150]}, "(None, 'test_large_regularization', 1156)": {"mod": [1161]}}}, {"path": "sklearn/model_selection/tests/test_search.py", "status": "modified", "Loc": {"(None, 'test_stochastic_gradient_loss_param', 1218)": {"mod": [1226, 1241]}}}, {"path": "sklearn/model_selection/tests/test_validation.py", "status": "modified", "Loc": {"(None, 'test_learning_curve_batch_and_incremental_learning_are_equal', 754)": {"mod": [759]}, "(None, 'test_learning_curve_with_shuffle', 820)": {"mod": [830]}}}, {"path": "sklearn/tests/test_learning_curve.py", "status": "modified", "Loc": {"(None, 'test_learning_curve_batch_and_incremental_learning_are_equal', 219)": {"mod": [224]}}}, {"path": "sklearn/tests/test_multiclass.py", "status": "modified", "Loc": {"(None, 'test_ovr_partial_fit', 82)": {"mod": [101, 102, 106, 107]}, "(None, 'test_ovo_ties', 605)": {"mod": [610]}, "(None, 'test_ovo_ties2', 629)": {"mod": [637]}}}, {"path": "sklearn/tests/test_multioutput.py", "status": "modified", "Loc": {"(None, 'test_multi_target_regression_partial_fit', 45)": {"mod": [53, 58]}, "(None, 'test_multi_target_sample_weight_partial_fit', 106)": {"mod": [111, 116]}, "(None, 'test_multi_output_classification_partial_fit_parallelism', 154)": {"mod": [155]}, "(None, 'test_multi_output_classification_partial_fit', 165)": {"mod": [169]}, "(None, 'test_multi_output_classifiation_partial_fit_no_first_classes_exception', 196)": {"mod": [196, 197]}, "(None, 'test_multi_output_classification_partial_fit_sample_weights', 309)": {"mod": [314, 321]}}}, {"path": "sklearn/utils/estimator_checks.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [44], "mod": [135, 395, 417, 432, 501, 551, 661, 671, 684, 847, 972, 993, 1149, 1196, 1228, 1262, 1290, 1349, 1386, 1404, 1435, 1469, 1488, 1526, 1619, 1658, 1684, 1709, 1720]}, "(None, 'check_class_weight_classifiers', 1350)": {"add": [1374]}, "(None, 'check_class_weight_balanced_classifiers', 1387)": {"add": [1391]}, "(None, 'check_class_weight_balanced_linear_classifier', 1405)": {"add": [1416]}, "(None, 'check_parameters_default_constructible', 1548)": {"add": [1603], "mod": [1553, 1608]}, "(None, 'set_checking_parameters', 283)": {"mod": [287]}, "(None, 'check_estimator_sparse_data', 353)": {"mod": 
[366, 373]}, "(None, 'check_estimators_nan_inf', 867)": {"mod": [885]}, "(None, 'check_classifiers_one_label', 1044)": {"mod": [1053]}}}, {"path": "sklearn/utils/weight_vector.pyx", "status": "modified", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["sklearn/linear_model/sgd_fast.pyx", "benchmarks/bench_sparsify.py", "examples/linear_model/plot_sgd_separating_hyperplane.py", "sklearn/linear_model/passive_aggressive.py", "examples/linear_model/plot_sgd_weighted_samples.py", "sklearn/utils/weight_vector.pyx", "sklearn/linear_model/perceptron.py", "examples/linear_model/plot_sgd_iris.py", "benchmarks/bench_sgd_regression.py", "sklearn/linear_model/stochastic_gradient.py", "benchmarks/bench_covertype.py", "sklearn/utils/estimator_checks.py"], "doc": ["doc/tutorial/text_analytics/working_with_text_data.rst", "doc/modules/linear_model.rst", "doc/modules/sgd.rst", "doc/modules/kernel_approximation.rst", "doc/whats_new.rst"], "test": ["sklearn/feature_selection/tests/test_from_model.py", "sklearn/tests/test_multioutput.py", "sklearn/linear_model/tests/test_passive_aggressive.py", "sklearn/decomposition/tests/test_kernel_pca.py", "sklearn/tests/test_multiclass.py", "sklearn/model_selection/tests/test_validation.py", "sklearn/model_selection/tests/test_search.py", "sklearn/ensemble/tests/test_base.py", "sklearn/tests/test_learning_curve.py", "sklearn/linear_model/tests/test_sgd.py", "sklearn/linear_model/tests/test_perceptron.py", "sklearn/linear_model/tests/test_huber.py", "sklearn/ensemble/tests/test_bagging.py"], "config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "dc1cad2b3fddb8b9069d7cfd89cb1039260baf8e", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/28976", "iss_label": "Documentation\nhelp wanted", "title": "`min_samples` in HDSCAN", "body": "### Describe the issue linked to the documentation\n\nI find the description of the `min_samples` argument in sklearn.cluster.HDBSCAN confusing.\r\n\r\nIt says \"The number of samples in a neighborhood for a point to be considered as a core point. This includes the point itself.\"\r\n\r\nBut if I understand everything correctly `min_samples` corresponds to the $k$ used to compute the core distance $\\text{core}_k\\left(x\\right)$ for every sample $x$ where the $k$'th core distance for some sample $x$ is defined as the distance to the $k$'th nearest-neighbor of $x$ (counting itself). (-> which exactly what is happening in the code here: https://github.com/scikit-learn-contrib/hdbscan/blob/fc94241a4ecf5d3668cbe33b36ef03e6160d7ab7/hdbscan/_hdbscan_reachability.pyx#L45-L47, where it is called `min_points`)\r\n\r\nI don't understand how both of these descriptions are equivalent. 
I would assume that other people might find that confusing as well.\r\n\r\nLink in Code: https://github.com/scikit-learn/scikit-learn/blob/8721245511de2f225ff5f9aa5f5fadce663cd4a3/sklearn/cluster/_hdbscan/hdbscan.py#L441-L444\r\n\r\nLink in Documentation: https://scikit-learn.org/stable/modules/generated/sklearn.cluster.DBSCAN.html#sklearn.cluster.DBSCAN\n\n### Suggest a potential alternative/fix\n\n_No response_", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/29263", "file_loc": {"base_commit": "dc1cad2b3fddb8b9069d7cfd89cb1039260baf8e", "files": [{"path": "sklearn/cluster/_hdbscan/_reachability.pyx", "status": "modified", "Loc": {"(None, None, None)": {"mod": [65, 66]}}}, {"path": "sklearn/cluster/_hdbscan/hdbscan.py", "status": "modified", "Loc": {"('HDBSCAN', None, 419)": {"mod": [444, 445]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["sklearn/cluster/_hdbscan/hdbscan.py", "sklearn/cluster/_hdbscan/_reachability.pyx"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "127415b209ca1df3f8502bdf74de56c33aff2565", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/901", "iss_label": "", "title": "add predict and fit_predict to more clustering algorithms", "body": "We should add `predict` and `fit_predict` to other clustering algorithms than `KMeans`: they are useful to retrieve cluster labels independently of the underlying attribute names...\n", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/907", "file_loc": {"base_commit": "127415b209ca1df3f8502bdf74de56c33aff2565", "files": [{"path": "sklearn/base.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [365]}}}, {"path": "sklearn/cluster/affinity_propagation_.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [12]}, "('AffinityPropagation', None, 168)": {"mod": [168]}}}, {"path": "sklearn/cluster/dbscan_.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [13]}, "('DBSCAN', None, 112)": {"mod": [112]}}}, {"path": "sklearn/cluster/hierarchical.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [18]}, "('Ward', None, 257)": {"mod": [257]}}}, {"path": "sklearn/cluster/k_means_.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [19]}, "('KMeans', None, 599)": {"mod": [599, 759, 760, 762, 763, 764, 765]}, "('MiniBatchKMeans', None, 963)": {"mod": [963]}}}, {"path": "sklearn/cluster/mean_shift_.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [12]}, "('MeanShift', None, 202)": {"mod": [202]}}}, {"path": "sklearn/cluster/spectral.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [10]}, "('SpectralClustering', None, 227)": {"mod": [227]}}}, {"path": "sklearn/cluster/tests/test_k_means.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [419]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["sklearn/cluster/hierarchical.py", "sklearn/cluster/mean_shift_.py", "sklearn/base.py", "sklearn/cluster/affinity_propagation_.py", "sklearn/cluster/dbscan_.py", "sklearn/cluster/k_means_.py", "sklearn/cluster/spectral.py"], "doc": [], "test": 
["sklearn/cluster/tests/test_k_means.py"], "config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "9385c45c0379ceab913daa811b1e7d4128faee35", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/4700", "iss_label": "Bug", "title": "cross_val_predict AttributeError with lists", "body": "When calling the cross_val_predict with an X parameter that is a list type, an AttributeError is raised on line 1209. This is because it is checking for the shape of the X parameter, but a list does not have the shape attribute.\n\nThe documentation says that this function supports lists so I am supposing that it isn't intended behavior. Commenting out that line also makes the rest of the function work perfectly fine.\n\nAlso not that the cross_val_score function, that takes the same arguments, works fine.\n\nI can provide the dataset I used if necessary.\n", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/4705", "file_loc": {"base_commit": "9385c45c0379ceab913daa811b1e7d4128faee35", "files": [{"path": "sklearn/cross_validation.py", "status": "modified", "Loc": {"(None, 'cross_val_predict', 958)": {"mod": [1027]}}}, {"path": "sklearn/tests/test_cross_validation.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [1037]}, "('MockClassifier', 'predict', 95)": {"mod": [98]}}}, {"path": "sklearn/utils/mocking.py", "status": "modified", "Loc": {"('CheckingClassifier', 'fit', 46)": {"add": [51]}, "(None, None, None)": {"mod": [1, 2]}, "('CheckingClassifier', None, 33)": {"mod": [33]}, "('CheckingClassifier', 'predict', 55)": {"mod": [58]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["sklearn/cross_validation.py", "sklearn/utils/mocking.py"], "doc": [], "test": ["sklearn/tests/test_cross_validation.py"], "config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "053d2d1af477d9dc17e69162b9f2298c0fda5905", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/19705", "iss_label": "", "title": "[RFC] Minimal scipy version for 1.0 (or 0.26) release", "body": "#### Proposal\r\nI'd like to propose to increase the minimal scipy version to 1.0.\r\n```python\r\nSCIPY_MIN_VERSION = '1.0.0'\r\n```\r\n\r\n#### Reasoning\r\n\r\n1. In case we should release scikit-learn 1.0, it would be a good fit:smirk:\r\n2. Linear quantile regression #9978 could make it into the next release. It uses `scipy.optimize.linprog` under the hood. Scipy 1.0.0 has introduced a new solver `method=\"interior-point\"` which is set as default method. Having it available would help us to avoid to support the `\"simplex\"` method in scikit-learn. Note, that scipy v1.3.0 introduced the `\"revised simplex\"` method and version 1.5 the `\"highs**\"` solvers which are much preferred.\r\n I think we should avoid the legacy simplex method.\r\n3. 
*Your reason for scipy 1.0.0.*\r\n", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/20069", "file_loc": {"base_commit": "053d2d1af477d9dc17e69162b9f2298c0fda5905", "files": [{"path": ".circleci/config.yml", "status": "modified", "Loc": {"(None, None, None)": {"mod": [6, 11, 50, 99, 133]}}}, {"path": ".travis.yml", "status": "modified", "Loc": {"(None, None, None)": {"mod": [43, 48, 49, 50, 51, 52, 53, 54]}}}, {"path": "azure-pipelines.yml", "status": "modified", "Loc": {"(None, None, None)": {"add": [121, 125], "mod": [14, 41, 60, 86, 108, 124, 135, 136, 137, 139, 143, 144, 146, 147, 149, 154, 155, 157, 158, 159, 160, 174, 183, 184, 185, 189, 190, 234, 235]}}}, {"path": "build_tools/azure/install.sh", "status": "modified", "Loc": {"(None, None, None)": {"mod": [73, 75]}}}, {"path": "build_tools/azure/posix-32.yml", "status": "modified", "Loc": {"(None, None, None)": {"mod": [48, 66]}}}, {"path": "build_tools/azure/test_script.sh", "status": "modified", "Loc": {"(None, None, None)": {"mod": [7]}}}, {"path": "doc/conftest.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [9]}, "(None, 'setup_preprocessing', 80)": {"add": [82]}}}, {"path": "doc/modules/sgd.rst", "status": "modified", "Loc": {"(None, None, None)": {"mod": [133]}}}, {"path": "doc/tutorial/statistical_inference/supervised_learning.rst", "status": "modified", "Loc": {"(None, None, None)": {"mod": [176]}}}, {"path": "doc/whats_new/v1.0.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [14]}}}, {"path": "pyproject.toml", "status": "modified", "Loc": {"(None, None, None)": {"mod": [14]}}}, {"path": "sklearn/_min_dependencies.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [13], "mod": [8, 11, 12, 29, 30, 31]}}}, {"path": "sklearn/decomposition/_truncated_svd.py", "status": "modified", "Loc": {"('TruncatedSVD', None, 24)": {"mod": [90, 91, 92, 97, 99, 101]}}}, {"path": "sklearn/ensemble/_hist_gradient_boosting/tests/test_loss.py", "status": "modified", "Loc": {"(None, 'test_derivatives', 66)": {"mod": [101]}}}, {"path": "sklearn/utils/tests/test_validation.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [27, 52, 348, 371, 372]}, "(None, 'test_check_array_dtype_numeric_errors', 374)": {"mod": [376, 377, 378]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["sklearn/decomposition/_truncated_svd.py", "doc/conftest.py", ".circleci/config.yml", "sklearn/_min_dependencies.py"], "doc": ["doc/whats_new/v1.0.rst", "doc/tutorial/statistical_inference/supervised_learning.rst", "doc/modules/sgd.rst"], "test": ["sklearn/utils/tests/test_validation.py", "sklearn/ensemble/_hist_gradient_boosting/tests/test_loss.py"], "config": [".travis.yml", "build_tools/azure/posix-32.yml", "pyproject.toml", "azure-pipelines.yml"], "asset": ["build_tools/azure/test_script.sh", "build_tools/azure/install.sh"]}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "a0ba256dbe9380b5d2cf9cee133482fc87768267", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/19304", "iss_label": "New Feature\nEasy\nmodule:ensemble", "title": "Poisson criterion in RandomForestRegressor", "body": "#### Describe the workflow you want to enable\r\nI want to officially use the Poisson splitting criterion in `RandomForestRegressor`.\r\n\r\n#### Describe your proposed 
solution\r\n#17386 implemented the poisson splitting criterion for `DecisionTreeRegressor` and `ExtraTreeRegressor`. This also made it possible—somewhat silently—to do:\r\n```\r\nimport numpy as np\r\nfrom sklearn.ensemble import RandomForestRegressor\r\ny = [0, 1, 2]\r\nX = np.arange(6).reshape(3, 2)\r\nrf = RandomForestRegressor(criterion=\"poisson\")\r\nrf.fit(X, y)\r\n```\r\nNote: The same is true for `ensemble.ExtraTreesRegressor`.\r\n\r\nTasks:\r\n\r\n- [ ] Add the poisson splitting criterion to the docstring of `RandomForestRegressor`.\r\n- [ ] Add input validation (non-negative `y`) to `RandomForestRegressor`.\r\n- [ ] Expand the tests for `RandomForestRegressor`.", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/19464", "file_loc": {"base_commit": "a0ba256dbe9380b5d2cf9cee133482fc87768267", "files": [{"path": "sklearn/ensemble/_forest.py", "status": "modified", "Loc": {"('BaseForest', 'fit', 274)": {"add": [317]}, "('RandomForestRegressor', None, 1279)": {"mod": [1301, 1304, 1305, 1307, 1308]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["sklearn/ensemble/_forest.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "8453daa6b983ee2fd73d537e81e58b3f6b0e3147", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/4846", "iss_label": "Bug", "title": "RidgeClassifier triggers data copy", "body": "RidgeClassifier always triggers a data copy even when not using sample weights.\n\nRegression introduced in #4838.\n\nSee:\nhttps://github.com/scikit-learn/scikit-learn/pull/4838#discussion_r32090535\n", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/4851", "file_loc": {"base_commit": "99d08b571e4813e8d91d809b851b46e8cd5dd88f", "files": [{"path": "sklearn/linear_model/ridge.py", "status": "modified", "Loc": {"('RidgeClassifier', 'fit', 575)": {"mod": [593, 594, 601, 602, 603]}, "('RidgeClassifierCV', 'fit', 1053)": {"mod": [1073, 1074, 1080, 1081, 1082]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["sklearn/linear_model/ridge.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "c9e227b70d64f73b953d8d60629d6ac63e02a91c", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/7467", "iss_label": "Bug", "title": "float numbers can't be set to RFECV's parameter \"step\"", "body": "#### Description\n\nWhen I use RFECV with the parameter 'step' set to a float number, it causes the warning \"rfe.py:203: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future\", and the analysis can't be finished unless step is an integer or 1/2.\n\nI read the description of RFECV and learned that the parameter 'step' can accept a float. (introduction online: If greater than or equal to 1, then step corresponds to the (integer) number of features to remove at each iteration. If within (0.0, 1.0), then step corresponds to the percentage (rounded down) of features to remove at each iteration.)\n\nI couldn't find any bug in the source script. Please advise. 
\n", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/7469", "file_loc": {"base_commit": "c9e227b70d64f73b953d8d60629d6ac63e02a91c", "files": [{"path": "sklearn/feature_selection/rfe.py", "status": "modified", "Loc": {"('RFECV', 'fit', 378)": {"add": [398], "mod": [427]}}}, {"path": "sklearn/feature_selection/tests/test_rfe.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [184]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["sklearn/feature_selection/rfe.py"], "doc": [], "test": ["sklearn/feature_selection/tests/test_rfe.py"], "config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "9b42b0cc7d5cf6978805619bc2433e3888c38d0c", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/17814", "iss_label": "Bug", "title": "l1_ratio in sklearn.linear_model's ElasticNet greater than 1?", "body": "I accidentally ran ElasticNet (from sklearn.linear_model) for l1_ratio >1, and no error or warning was raised. From the docstring, it says that ``0 < l1_ratio < 1``. Should we raise a ValueError or something? Found this with @mathurinm.\r\n\r\nIf this turns out to be something to be done, I could help out if someone could point me in the right direction. Thanks!\r\n\r\np/s: Not sure if this should be under bugs/documentations/others, so I listed it under others. Sklearn version is 0.22.1.", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/17846", "file_loc": {"base_commit": "9b42b0cc7d5cf6978805619bc2433e3888c38d0c", "files": [{"path": "sklearn/linear_model/_coordinate_descent.py", "status": "modified", "Loc": {"('ElasticNet', 'fit', 719)": {"add": [757]}}}, {"path": "sklearn/linear_model/tests/test_coordinate_descent.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [60]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["sklearn/linear_model/_coordinate_descent.py"], "doc": [], "test": ["sklearn/linear_model/tests/test_coordinate_descent.py"], "config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "38c7e93b1edcbfb85060cf7c14cca3ab47b9267c", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/8499", "iss_label": "Bug", "title": "Memory leak in LogisticRegression", "body": "Dear all,\r\n\r\nwhile running many logistic regressions, I encountered a continuous memory increase on several (Debian) machines. 
The problem is isolated in this code:\r\n\r\n```python\r\nimport sklearn\r\nfrom sklearn.linear_model import LogisticRegression\r\nimport numpy as np\r\nimport time\r\nimport psutil\r\nimport os\r\n\r\nif __name__ == \"__main__\":\r\n print(\"Sklearn version: %s\" % sklearn.__version__)\r\n n_samples = 2\r\n n_features = 2\r\n data = np.arange(n_samples*n_features).reshape((n_samples,n_features))\r\n labels = np.arange(n_samples)\r\n last_output_time = 0\r\n process = psutil.Process(os.getpid())\r\n for i in range(10000000):\r\n clf = LogisticRegression()\r\n clf.fit(X=data, y=labels)\r\n del clf\r\n if time.time()-last_output_time >= 5:\r\n print(process.get_memory_info()[0] / float(2 ** 20))\r\n last_output_time = time.time()\r\n```\r\nThis was Python 2.7 under Linux 3.16.0-4-amd64 #1 SMP Debian 3.16.39-1+deb8u1 (2017-02-22) x86_64 GNU/Linux, with scikit-learn 0.18.1. Is this reproducible?", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/9024", "file_loc": {"base_commit": "38c7e93b1edcbfb85060cf7c14cca3ab47b9267c", "files": [{"path": "sklearn/svm/src/liblinear/liblinear_helper.c", "status": "modified", "Loc": {"(None, 'free_problem', 217)": {"add": [221]}}}, {"path": "sklearn/svm/src/liblinear/linear.cpp", "status": "modified", "Loc": {"(None, 'free_model_content', 2907)": {"add": [2912]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["sklearn/svm/src/liblinear/liblinear_helper.c", "sklearn/svm/src/liblinear/linear.cpp"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "e25e8e2119ab6c5aa5072b05c0eb60b10aee4b05", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/29906", "iss_label": "Bug", "title": "Incorrect sample weight handling in `KBinsDiscretizer`", "body": "### Describe the bug\r\n\r\nSample weights are not properly passed through when specifying subsample within KBinsDiscretizer.\r\n\r\n### Steps/Code to Reproduce\r\n\r\n```python\r\nfrom sklearn.datasets import make_blobs\r\nfrom sklearn.preprocessing import KBinsDiscretizer\r\nimport numpy as np\r\n\r\nrng = np.random.RandomState(42)\r\n\r\n# Four centres \r\ncentres = np.array([[0, 0], [0, 5], [3, 1], [2, 4], [8, 8]])\r\nX, _ = make_blobs(\r\n n_samples=100,\r\n cluster_std=0.5,\r\n centers=centres,\r\n random_state=10,\r\n )\r\n\r\n# Randomly generate sample weights\r\nsample_weight = rng.randint(0, 10, size=X.shape[0])\r\n\r\nest = KBinsDiscretizer(n_bins=4, strategy='quantile', subsample=20,\r\n random_state=10).fit(X, sample_weight=sample_weight)\r\n```\r\n\r\n\r\n### Expected Results\r\n\r\nNo error is thrown\r\n\r\n### Actual Results\r\n\r\n```\r\n253 if sample_weight is not None:\r\n--> 254 sample_weight = _check_sample_weight(sample_weight, X, dtype=X.dtype)\r\n 256 bin_edges = np.zeros(n_features, dtype=object)\r\n 
257 for jj in range(n_features):\r\n\r\nFile ~/sklearn-dev/scikit-learn/sklearn/utils/validation.py:2133, in _check_sample_weight(sample_weight, X, dtype, copy, ensure_non_negative)\r\n 2130 raise ValueError(\"Sample weights must be 1D array or scalar\")\r\n 2132 if sample_weight.shape != (n_samples,):\r\n-> 2133 raise ValueError(\r\n 2134 \"sample_weight.shape == {}, expected {}!\".format(\r\n 2135 sample_weight.shape, (n_samples,)\r\n 2136 )\r\n 2137 )\r\n 2139 if ensure_non_negative:\r\n 2140 check_non_negative(sample_weight, \"`sample_weight`\")\r\n\r\nValueError: sample_weight.shape == (100,), expected (20,)!\r\n```\r\n\r\n### Versions\r\n\r\n```shell\r\nSystem:\r\n python: 3.12.4 | packaged by conda-forge | (main, Jun 17 2024, 10:13:44) [Clang 16.0.6 ]\r\nexecutable: /Users/shrutinath/micromamba/envs/scikit-learn/bin/python\r\n machine: macOS-14.3-arm64-arm-64bit\r\n\r\nPython dependencies:\r\n sklearn: 1.6.dev0\r\n pip: 24.0\r\n setuptools: 70.1.1\r\n numpy: 2.0.0\r\n scipy: 1.14.0\r\n Cython: 3.0.10\r\n pandas: 2.2.2\r\n matplotlib: 3.9.0\r\n joblib: 1.4.2\r\nthreadpoolctl: 3.5.0\r\n\r\nBuilt with OpenMP: True\r\n\r\nthreadpoolctl info:\r\n user_api: blas\r\n internal_api: openblas\r\n num_threads: 8\r\n prefix: libopenblas\r\n...\r\n num_threads: 8\r\n prefix: libomp\r\n filepath: /Users/shrutinath/micromamba/envs/scikit-learn/lib/libomp.dylib\r\n version: None\r\n
```\r\n", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/29907", "file_loc": {"base_commit": "e25e8e2119ab6c5aa5072b05c0eb60b10aee4b05", "files": [{"path": "sklearn/ensemble/_hist_gradient_boosting/tests/test_gradient_boosting.py", "status": "modified", "Loc": {"(None, 'make_missing_value_data', 564)": {"mod": [571]}}}, {"path": "sklearn/inspection/tests/test_permutation_importance.py", "status": "modified", "Loc": {"(None, 'test_permutation_importance_equivalence_array_dataframe', 303)": {"mod": [314]}}}, {"path": "sklearn/preprocessing/_discretization.py", "status": "modified", "Loc": {"('KBinsDiscretizer', None, 25)": {"add": [59, 177]}, "('KBinsDiscretizer', '__init__', 183)": {"add": [188, 195]}, "('KBinsDiscretizer', 'fit', 201)": {"add": [219, 242, 247, 248, 256, 276], "mod": [216, 234, 235, 236, 237, 238, 239, 245, 253, 254, 259, 273, 275, 279, 280]}, "(None, None, None)": {"mod": [14]}}}, {"path": "sklearn/preprocessing/tests/test_discretization.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [13, 14, 26, 27, 29, 31, 41, 46, 121, 126, 144, 286, 295, 304, 482], "mod": [20, 22, 23, 24, 37, 115, 117, 118, 119, 123, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 248, 250, 251, 252, 277, 343]}, "(None, 'test_KBD_inverse_transform_Xt_deprecation', 484)": {"add": [500], "mod": [484, 486]}, "(None, 'test_fit_transform', 52)": {"mod": [52, 53, 54, 55]}, "(None, 'test_valid_n_bins', 58)": {"mod": [59, 60, 61, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73]}, "(None, 'test_invalid_n_bins_array', 76)": {"mod": [79, 86, 93, 104]}, "(None, 'test_fit_transform_n_bins_array', 150)": {"mod": [150, 152, 154]}, "(None, 'test_kbinsdiscretizer_effect_sample_weight', 164)": {"mod": [169, 171, 172]}, "(None, 'test_kbinsdiscretizer_no_mutating_sample_weight', 176)": {"mod": [178]}, "(None, 'test_same_min_max', 186)": {"mod": [189]}, "(None, 'test_transform_1d_behavior', 199)": {"mod": [201, 205]}, "(None, 'test_numeric_stability', 212)": {"mod": [218]}, "(None, 'test_encode_options', 222)": {"mod": [223, 225, 234]}, "(None, 'test_nonuniform_strategies', 255)": {"mod": [256, 261, 266, 271]}, "(None, 'test_inverse_transform', 309)": {"mod": [309, 310]}, "(None, 'test_transform_outside_fit_range', 317)": {"mod": [319]}, "(None, 'test_overwrite', 328)": {"mod": [332]}, "(None, 'test_redundant_bins', 345)": {"mod": [345, 347]}, "(None, 'test_percentile_numeric_stability', 354)": {"mod": [358]}, "(None, 'test_consistent_dtype', 370)": {"mod": [372]}, "(None, 'test_32_equal_64', 389)": {"mod": [395, 400]}, "(None, 'test_kbinsdiscretizer_subsample_default', 407)": {"mod": [410]}, "(None, 'test_kbinsdiscrtizer_get_feature_names_out', 446)": {"mod": [452]}, "(None, 'test_kbinsdiscretizer_subsample', 463)": {"mod": [467, 468, 469]}}}, {"path": "sklearn/preprocessing/tests/test_polynomial.py", "status": "modified", "Loc": {"(None, 'test_spline_transformer_kbindiscretizer', 377)": {"mod": [389]}}}, {"path": "sklearn/preprocessing/tests/test_target_encoder.py", "status": "modified", "Loc": {"(None, 'test_invariance_of_encoding_under_label_permutation', 554)": {"mod": [564, 565, 566]}}}, {"path": "sklearn/tests/test_docstring_parameters.py", "status": "modified", "Loc": {"(None, 'test_fit_docstring_attributes', 181)": {"add": [226]}}}, {"path": "sklearn/utils/_indexing.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [16, 416]}, "(None, 'resample', 420)": {"add": [453, 523], "mod": [420, 434, 526]}}}, {"path": 
"sklearn/utils/_test_common/instance_generator.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [565], "mod": [962, 963, 964, 965, 966, 967, 968, 969, 970]}}}, {"path": "sklearn/utils/stats.py", "status": "modified", "Loc": {"(None, '_weighted_percentile', 9)": {"add": [72]}}}, {"path": "sklearn/utils/tests/test_indexing.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [6, 497, 548]}}}, {"path": "sklearn/utils/tests/test_stats.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [1], "mod": [5]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["sklearn/utils/_indexing.py", "sklearn/utils/stats.py", "sklearn/preprocessing/_discretization.py", "sklearn/utils/_test_common/instance_generator.py"], "doc": [], "test": ["sklearn/preprocessing/tests/test_discretization.py", "sklearn/tests/test_docstring_parameters.py", "sklearn/preprocessing/tests/test_target_encoder.py", "sklearn/utils/tests/test_indexing.py", "sklearn/inspection/tests/test_permutation_importance.py", "sklearn/utils/tests/test_stats.py", "sklearn/ensemble/_hist_gradient_boosting/tests/test_gradient_boosting.py", "sklearn/preprocessing/tests/test_polynomial.py"], "config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "dcfb3df9a3df5aa2a608248316d537cd6b3643ee", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/6656", "iss_label": "New Feature\nmodule:ensemble", "title": "var.monotone option in GradientBoosting", "body": "Hi, is it possible to add the equivalent of the var.monotone option in R GBM package to the GradientBoostingClassifier/Regressor? 
Sometimes it is really useful when we know/want some factors to have monotonic effect to avoid overfitting and non-intuitive results.\n\nThanks!\n", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/15582", "file_loc": {"base_commit": "dcfb3df9a3df5aa2a608248316d537cd6b3643ee", "files": [{"path": "doc/modules/ensemble.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [1052], "mod": [900]}}}, {"path": "doc/whats_new/v0.23.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [186]}}}, {"path": "sklearn/ensemble/_hist_gradient_boosting/common.pxd", "status": "modified", "Loc": {"(None, None, None)": {"add": [32]}}}, {"path": "sklearn/ensemble/_hist_gradient_boosting/gradient_boosting.py", "status": "modified", "Loc": {"('BaseHistGradientBoosting', '__init__', 30)": {"add": [41], "mod": [32, 33]}, "('BaseHistGradientBoosting', None, 26)": {"add": [84]}, "('BaseHistGradientBoosting', 'fit', 85)": {"add": [360]}, "('HistGradientBoostingRegressor', None, 725)": {"add": [792]}, "('HistGradientBoostingClassifier', None, 910)": {"add": [980]}, "('HistGradientBoostingRegressor', '__init__', 867)": {"mod": [870, 871, 878, 879]}, "('HistGradientBoostingClassifier', '__init__', 1059)": {"mod": [1061, 1062, 1070, 1071]}}}, {"path": "sklearn/ensemble/_hist_gradient_boosting/grower.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [19]}, "('TreeNode', '__init__', 90)": {"add": [97], "mod": [91]}, "('TreeGrower', '__init__', 167)": {"add": [191, 202], "mod": [170, 171, 198]}, "('TreeGrower', None, 116)": {"add": [254]}, "('TreeNode', None, 25)": {"mod": [74]}, "('TreeGrower', '_intilialize_root', 255)": {"mod": [268]}, "('TreeGrower', '_compute_best_split_and_push', 286)": {"mod": [297]}, "('TreeGrower', 'split_next', 304)": {"mod": [332, 337, 375, 377, 378]}, "('TreeGrower', '_finalize_leaf', 414)": {"mod": [415, 417, 418, 420, 421, 422, 423, 424, 425]}, "(None, '_fill_predictor_node_array', 455)": {"mod": [467, 470]}}}, {"path": "sklearn/ensemble/_hist_gradient_boosting/splitting.pyx", "status": "modified", "Loc": {"(None, None, None)": {"add": [21, 26, 41, 83, 128, 143, 154, 368, 388, 432, 458, 485, 492, 544, 570, 668], "mod": [73, 353, 381, 407, 415, 484, 489, 490, 491, 522, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 568, 574, 575, 576, 607, 610, 611, 612, 613, 614, 615, 616, 617, 618, 619, 620, 621, 628, 641, 642, 643, 644, 645, 648, 649, 650, 651, 652]}}}, {"path": "sklearn/ensemble/_hist_gradient_boosting/tests/test_gradient_boosting.py", "status": "modified", "Loc": {"(None, 'test_early_stopping_on_test_set_with_warm_start', 650)": {"add": [661]}}}, {"path": "sklearn/ensemble/_hist_gradient_boosting/tests/test_grower.py", "status": "modified", "Loc": {"(None, 'test_grow_tree', 77)": {"add": [136]}, "(None, 'test_split_on_nan_with_infinite_values', 358)": {"mod": [396, 397]}}}, {"path": "sklearn/ensemble/_hist_gradient_boosting/tests/test_splitting.py", "status": "modified", "Loc": {"(None, 'test_histogram_split', 13)": {"add": [45, 50, 56], "mod": [59]}, "(None, 'test_gradient_and_hessian_sanity', 72)": {"add": [108, 115, 121, 122], "mod": [111, 117, 125, 128]}, "(None, 'test_split_indices', 172)": {"add": [208, 217], "mod": [211, 219]}, "(None, 'test_min_gain_to_split', 239)": {"add": [265, 272], "mod": [268, 274]}, "(None, 'test_splitting_missing_values', 368)": {"add": [402, 405, 410], "mod": [412]}, "(None, None, None)": {"mod": [7]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": 
null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["sklearn/ensemble/_hist_gradient_boosting/grower.py", "sklearn/ensemble/_hist_gradient_boosting/splitting.pyx", "sklearn/ensemble/_hist_gradient_boosting/common.pxd", "sklearn/ensemble/_hist_gradient_boosting/gradient_boosting.py"], "doc": ["doc/modules/ensemble.rst", "doc/whats_new/v0.23.rst"], "test": ["sklearn/ensemble/_hist_gradient_boosting/tests/test_grower.py", "sklearn/ensemble/_hist_gradient_boosting/tests/test_splitting.py", "sklearn/ensemble/_hist_gradient_boosting/tests/test_gradient_boosting.py"], "config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "417788c6a54c39614b82acf1a04b1f97f8a32199", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/6783", "iss_label": "", "title": "\"scoring must return a number\" error with custom scorer", "body": "#### Description\n\nI'm encountering the same error (`ValueError: scoring must return a number, got [...] (<class 'numpy.core.memmap.memmap'>) instead.`) as #6147, despite running v0.17.1. This is because I'm creating my own scorer, following the example in this [article](http://bigdataexaminer.com/data-science/dealing-with-unbalanced-classes-svm-random-forests-and-decision-trees-in-python/).\n#### Steps/Code to Reproduce\n\n``` python\nimport pandas as pd\nimport numpy as np\nfrom sklearn.cross_validation import cross_val_score\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.datasets import make_classification\nfrom functools import partial\n\ndef cutoff_predict(clf, X, cutoff):\n return (clf.predict_proba(X)[:, 1] > cutoff).astype(int)\n\ndef perc_diff_score(y, ypred, X=None):\n values = X[:,0]\n actual_value = np.sum(np.multiply(y, values))\n predict_value = np.sum(np.multiply(ypred, values))\n difference = predict_value - actual_value\n percent_diff = abs(difference * 100 / actual_value )\n return -1*percent_diff\n\ndef perc_diff_cutoff(clf, X, y, cutoff=None):\n ypred = cutoff_predict(clf, X, cutoff)\n return perc_diff_score(y, ypred, X)\n\ndef perc_diff_score_cutoff(cutoff):\n return partial(perc_diff_cutoff, cutoff=cutoff)\n\nclf = RandomForestClassifier()\nX_train, y_train = make_classification(n_samples=int(1e6), n_features=5, random_state=0)\nvalues = abs(100000 * np.random.randn(len(X_train))).reshape((X_train.shape[0], 1))\nX_train = np.append(values, X_train, 1)\n\ncutoff = 0.1\nvalidated = cross_val_score(clf, X_train, y_train, scoring=perc_diff_score_cutoff(cutoff),\n verbose=3,\n n_jobs=-1,\n )\n```\n#### Expected Results\n\nNo error.\n#### Actual Results\n\nSame error as in #6147 :\n\n```\n/home/gillesa/anaconda2/lib/python2.7/site-packages/sklearn/cross_validation.pyc in _score(estimator=ExtraTreesClassifier(bootstrap=False, class_weig..., random_state=None, verbose=0, warm_start=False), X_test=memmap([[ 0., 9., 56., ..., 1., 0., 0.... 
[ 0., 6., 57., ..., 1., 0., 0.]]), y_test=memmap([0, 0, 0, ..., 0, 0, 0]), scorer=make_scorer(roc_auc_score, needs_threshold=True))\n 1604 score = scorer(estimator, X_test)\n 1605 else:\n 1606 score = scorer(estimator, X_test, y_test)\n 1607 if not isinstance(score, numbers.Number):\n 1608 raise ValueError(\"scoring must return a number, got %s (%s) instead.\"\n-> 1609 % (str(score), type(score)))\n 1610 return score\n 1611\n 1612\n 1613 def _permutation_test_score(estimator, X, y, cv, scorer):\n\nValueError: scoring must return a number, got 0.671095795498 (<class 'numpy.core.memmap.memmap'>) instead.\n```\n#### Workaround\n\nUpdated `perc_diff_score()` as follows to add a cast to `float`:\n\n``` python\ndef perc_diff_score(y, ypred, X=None):\n values = X[:,0]\n actual_value = np.sum(np.multiply(y, values))\n predict_value = np.sum(np.multiply(ypred, values))\n difference = predict_value - actual_value\n percent_diff = np.float(abs(difference * 100 / actual_value ))\n return -1*percent_diff\n```\n#### Versions\n\nDarwin-15.4.0-x86_64-i386-64bit\nPython 3.5.1 |Anaconda 4.0.0 (x86_64)| (default, Dec 7 2015, 11:24:55) \n[GCC 4.2.1 (Apple Inc. build 5577)]\nNumPy 1.11.0\nSciPy 0.17.0\nScikit-Learn 0.17.1\n", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/6789", "file_loc": {"base_commit": "417788c6a54c39614b82acf1a04b1f97f8a32199", "files": [{"path": "sklearn/cross_validation.py", "status": "modified", "Loc": {"(None, '_score', 1645)": {"add": [1650]}}}, {"path": "sklearn/model_selection/_validation.py", "status": "modified", "Loc": {"(None, '_score', 298)": {"add": [303]}}}, {"path": "sklearn/model_selection/tests/test_validation.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [5]}, "(None, 'test_cross_val_predict_with_method', 746)": {"add": [771]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["sklearn/model_selection/_validation.py", "sklearn/cross_validation.py"], "doc": [], "test": ["sklearn/model_selection/tests/test_validation.py"], "config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "3f49cee020a91a0be5d0d5602d29b3eefce9d758", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/3722", "iss_label": "Bug\nEasy", "title": "preprocessing.scale provides consistent results on arrays with zero variance", "body": "I'm using Python 2.7, NumPy 1.8.2 and scikit-learn 0.14.1 on x64 linux (all installed through Anaconda) and getting very inconsistent results for the preprocessing.scale function:\n\n> print preprocessing.scale(np.zeros(6) + np.log(1e-5))\n> [ 0. 0. 0. 0. 0. 0.]\n> \n> print preprocessing.scale(np.zeros(8) + np.log(1e-5))\n> [-1. -1. -1. -1. -1. -1. -1. -1.]\n> \n> print preprocessing.scale(np.zeros(22) + np.log(1e-5))\n> [ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]\n\nI would guess this is not supposed to be happening. Quick investigation points to the fact that np.std() of the second and third arrays is not exactly zero, but very close to machine zero. 
sklearn still uses it to divide data (it doesn't go into the \"std == 0.0\" case in the code).\n\nNote that in the case of the array, this can be easily fixed by passing with_std=False, but when that happens for one of the many features in a 2D matrix this is not an option.\n", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/4436", "file_loc": {"base_commit": "ad26ae47057885415f74893d6329a481b0ce01bd", "files": [{"path": "doc/whats_new.rst", "status": "modified", "Loc": {"(None, None, 231)": {"add": [231]}, "(None, None, 3378)": {"add": [3378]}}}, {"path": "sklearn/preprocessing/_weights.py", "status": "modified", "Loc": {}}, {"path": "sklearn/preprocessing/data.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [9, 20]}, "(None, 'scale', 69)": {"add": [143, 145]}, "(None, '_mean_and_std', 44)": {"mod": [60]}}}, {"path": "sklearn/preprocessing/tests/test_data.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [4, 16, 101]}, "(None, 'test_one_hot_encoder_unknown_transform', 816)": {"mod": [831]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["sklearn/preprocessing/_weights.py", "sklearn/preprocessing/data.py"], "doc": ["doc/whats_new.rst"], "test": ["sklearn/preprocessing/tests/test_data.py"], "config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "eda99f3cec70ba90303de0ef3ab7f988657fadb9", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/13362", "iss_label": "Bug\nBlocker", "title": "return_intercept==True in ridge_regression raises an exception", "body": "<!--\r\nIf your issue is a usage question, submit it here instead:\r\n- StackOverflow with the scikit-learn tag: https://stackoverflow.com/questions/tagged/scikit-learn\r\n- Mailing List: https://mail.python.org/mailman/listinfo/scikit-learn\r\nFor more information, see User Questions: http://scikit-learn.org/stable/support.html#user-questions\r\n-->\r\n\r\n<!-- Instructions For Filing a Bug: https://github.com/scikit-learn/scikit-learn/blob/master/CONTRIBUTING.md#filing-bugs -->\r\n\r\n#### Description\r\n<!-- Example: Joblib Error thrown when calling fit on LatentDirichletAllocation with evaluate_every > 0-->\r\n\r\n#### Steps/Code to Reproduce\r\n\r\n```python\r\nfrom sklearn.linear_model import ridge_regression\r\nridge_regression([[0], [1], [3]], [0, 1, 3], 1, solver='auto', return_intercept=True)\r\n```\r\n\r\n#### Expected Results\r\n<!-- Example: No error is thrown. Please paste or describe the expected results.-->\r\n\r\n`(array([1]), 0)` (the values can differ, but at least no exception should be raised)\r\n\r\n#### Actual Results\r\n<!-- Please paste or specifically describe the actual output or traceback. 
-->\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nUnboundLocalError Traceback (most recent call last)\r\n<ipython-input-5-84df44249e86> in <module>\r\n----> 1 ridge_regression([[0], [1], [3]], [1, 2, 3], 1, solver='auto', return_intercept=True)\r\n\r\n~/.pyenv/versions/3.7.2/envs/kaggle-3.7.2/lib/python3.7/site-packages/sklearn/linear_model/ridge.py in ridge_regression(X, y, alpha, sample_weight, solver, max_iter, tol, verbose, random_state, return_n_iter, return_intercept)\r\n 450 return coef, n_iter, intercept\r\n 451 elif return_intercept:\r\n--> 452 return coef, intercept\r\n 453 elif return_n_iter:\r\n 454 return coef, n_iter\r\n\r\nUnboundLocalError: local variable 'intercept' referenced before assignment\r\n```\r\n\r\n#### Versions\r\n<!--\r\nPlease run the following snippet and paste the output below.\r\nFor scikit-learn >= 0.20:\r\nimport sklearn; sklearn.show_versions()\r\nFor scikit-learn < 0.20:\r\nimport platform; print(platform.platform())\r\nimport sys; print(\"Python\", sys.version)\r\nimport numpy; print(\"NumPy\", numpy.__version__)\r\nimport scipy; print(\"SciPy\", scipy.__version__)\r\nimport sklearn; print(\"Scikit-Learn\", sklearn.__version__)\r\n-->\r\n\r\n```\r\nLinux-4.20.8-arch1-1-ARCH-x86_64-with-arch\r\nPython 3.7.2 (default, Feb 22 2019, 18:13:04) \r\n[GCC 8.2.1 20181127]\r\nNumPy 1.16.1\r\nSciPy 1.2.1\r\nScikit-Learn 0.21.dev0\r\n```\r\n\r\n\r\n\r\n<!-- Thanks for contributing! -->\r\n", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/13363", "file_loc": {"base_commit": "eda99f3cec70ba90303de0ef3ab7f988657fadb9", "files": [{"path": "doc/whats_new/v0.21.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [342]}}}, {"path": "sklearn/linear_model/ridge.py", "status": "modified", "Loc": {"(None, '_ridge_regression', 366)": {"mod": [371, 372, 373, 374, 375, 376, 407, 409, 410, 411, 412, 413, 414, 435, 436]}, "('_BaseRidge', 'fit', 527)": {"mod": [558]}}}, {"path": "sklearn/linear_model/tests/test_ridge.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [9]}, "(None, 'test_raises_value_error_if_solver_not_supported', 774)": {"mod": [781]}, "(None, 'test_ridge_fit_intercept_sparse', 816)": {"mod": [835, 836, 837]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["sklearn/linear_model/ridge.py"], "doc": ["doc/whats_new/v0.21.rst"], "test": ["sklearn/linear_model/tests/test_ridge.py"], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "df2fb490a58f272067b33aad372bb4fe2393bb93", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/7261", "iss_label": "Bug\nMissing-data\nDtype Conversions", "title": "API: Should Index.min and max use nanmin and nanmax?", "body": "Index and Series `min` and `max` handle `nan` and `NaT` differently. 
Even though `min` and `max` are defined in `IndexOpsMixin`, `Series` doesn't use them and uses `NDFrame` definitions.\n\n```\npd.Index([np.nan, 1.0]).min()\n# nan\n\npd.Index([np.nan, 1.0]).max()\n# nan\n\npd.DatetimeIndex([pd.NaT, '2011-01-01']).min()\n# NaT\n\npd.DatetimeIndex([pd.NaT, '2011-01-01']).max()\n#2011-01-01 00:00:00\n\n# Series excludes nan and NaT\npd.Series([np.nan, 1.0]).min()\n#1.0\n\npd.Series([np.nan, 1.0]).max()\n#1.0\n\npd.Series([pd.NaT, pd.Timestamp('2011-01-01')]).min()\n#2011-01-01 00:00:00\n\npd.Series([pd.NaT, pd.Timestamp('2011-01-01')]).max()\n#2011-01-01 00:00:00\n```\n", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/7279", "file_loc": {"base_commit": "df2fb490a58f272067b33aad372bb4fe2393bb93", "files": [{"path": "doc/source/v0.14.1.txt", "status": "modified", "Loc": {"(None, None, None)": {"add": [67]}}}, {"path": "pandas/core/base.py", "status": "modified", "Loc": {"('IndexOpsMixin', 'max', 237)": {"mod": [239]}, "('IndexOpsMixin', 'min', 241)": {"mod": [243]}}}, {"path": "pandas/tests/test_base.py", "status": "modified", "Loc": {"('TestIndexOps', None, 192)": {"add": [212]}, "(None, None, None)": {"mod": [2]}}}, {"path": "pandas/tseries/index.py", "status": "modified", "Loc": {"('DatetimeIndex', 'min', 1757)": {"mod": [1761, 1762, 1764]}, "('DatetimeIndex', 'max', 1767)": {"mod": [1771, 1772, 1774]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["pandas/core/base.py", "pandas/tseries/index.py"], "doc": ["doc/source/v0.14.1.txt"], "test": ["pandas/tests/test_base.py"], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "abd5333e7a3332921707888de9621c52dd3408e6", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/7943", "iss_label": "Enhancement\nAPI Design\nTimezones", "title": "tz_localize should support is_dst input array", "body": "When storing datetimes with timezone information in mysql I split out the is_dst flag into a separate column. Then when reconstructing the Timestamps I am either forced to iterate through each row and call pytz.timezone.localize on every Timestamp, which is very slow, or do some magic with localizing what I can and then manually dealing with the fall transition time (note that infer_dst won't work because there could be many rows that have transitions in them). I would much rather create the DatetimeIndex from the column of dates and then call tz_localize with the is_dst column. 
This would then appropriately set the offset.\n\n```\ndi = DatetimeIndex(frame['DateColumn'])\ndi = di.tz_localize(TimeZone, is_dst_flat=frame['IsDstColumn'])\n```\n\nThoughts?\n", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/7963", "file_loc": {"base_commit": "abd5333e7a3332921707888de9621c52dd3408e6", "files": [{"path": "doc/source/timeseries.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [1359, 1490, 1509], "mod": [1492, 1493, 1494, 1503, 1507, 1511, 1512, 1513, 1514, 1516, 1517]}}}, {"path": "doc/source/v0.15.0.txt", "status": "modified", "Loc": {"(None, None, None)": {"add": [468, 547]}}}, {"path": "pandas/core/generic.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [26]}, "('NDFrame', None, 68)": {"mod": [3562]}, "('NDFrame', 'tz_localize', 3562)": {"mod": [3575, 3576, 3584, 3600, 3605]}, "('NDFrame', '_tz_localize', 3584)": {"mod": [3593]}}}, {"path": "pandas/tseries/index.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [8], "mod": [21]}, "('DatetimeIndex', None, 122)": {"add": [147, 182], "mod": [1648]}, "('DatetimeIndex', '__new__', 183)": {"mod": [187, 191, 217, 243, 312]}, "('DatetimeIndex', '_generate', 335)": {"mod": [336, 450]}, "('DatetimeIndex', 'tz_localize', 1648)": {"mod": [1659, 1674]}}}, {"path": "pandas/tseries/tests/test_timezones.py", "status": "modified", "Loc": {"('TestTimeZoneSupportPytz', 'test_infer_dst', 426)": {"add": [443, 449], "mod": [432, 433, 438, 439, 440, 441, 448]}, "('TestTimeZoneSupportPytz', None, 58)": {"add": [450], "mod": [426]}}}, {"path": "pandas/tseries/tests/test_tslib.py", "status": "modified", "Loc": {"('TestTimestamp', 'test_tz', 216)": {"add": [234]}}}, {"path": "pandas/tslib.pyx", "status": "modified", "Loc": {"(None, None, None)": {"add": [378, 381, 2201, 2222, 2309], "mod": [362, 372, 373, 383, 1768, 1769, 2186, 2313]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["pandas/core/generic.py", "pandas/tslib.pyx", "pandas/tseries/index.py"], "doc": ["doc/source/timeseries.rst", "doc/source/v0.15.0.txt"], "test": ["pandas/tseries/tests/test_timezones.py", "pandas/tseries/tests/test_tslib.py"], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "a9421af1aac906cc38d025ed5db4a2b55cb8b9bc", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/16773", "iss_label": "Performance\nSparse", "title": "SparseDataFrame constructor has horrible performance for df with many columns", "body": "#### Code Sample\r\n\r\nThis is an example taken directly from the [docs](https://pandas.pydata.org/pandas-docs/stable/sparse.html#sparsedataframe), only that I've changed the sparsity of the arrays from 90% to 99%.\r\n\r\n```python\r\nimport pandas as pd\r\nfrom scipy.sparse import csr_matrix\r\nimport numpy as np\r\n\r\narr = np.random.random(size=(1000, 5))\r\narr[arr < .99] = 0\r\nsp_arr = csr_matrix(arr)\r\n%timeit sdf = pd.SparseDataFrame(sp_arr)\r\n```\r\n```\r\n 4.78 ms \u00b1 381 \u00b5s per loop (mean \u00b1 std. dev. 
of 7 runs, 100 loops each)\r\n```\r\n\r\nNow, here's what happens when I increase the number of columns from 5 to 2000:\r\n\r\n```python\r\nimport pandas as pd\r\nfrom scipy.sparse import csr_matrix\r\nimport numpy as np\r\n\r\narr = np.random.random(size=(1000, 2000))\r\narr[arr < .99] = 0\r\nsp_arr = csr_matrix(arr)\r\n%timeit sdf = pd.SparseDataFrame(sp_arr)\r\n```\r\n```\r\n8.69 s \u00b1 208 ms per loop (mean \u00b1 std. dev. of 7 runs, 1 loop each)\r\n```\r\n\r\nNote that initializing the `scipy.sparse.csr_matrix` object itself is way (!!!) faster:\r\n\r\n```python\r\nimport pandas as pd\r\nfrom scipy.sparse import csr_matrix\r\nimport numpy as np\r\n\r\narr = np.random.random(size=(1000, 2000))\r\narr[arr < .99] = 0\r\n%timeit sp_arr = csr_matrix(arr)\r\n```\r\n```\r\n13 ms \u00b1 248 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each)\r\n```\r\n\r\n#### Problem description\r\n\r\nThe construction of a SparseDataFrame with many columns is ridiculously slow. I've traced the problem to [this line](https://github.com/pandas-dev/pandas/blob/1c0b63281db0486aa8182d550e9bceb641e5f9a4/pandas/core/sparse/frame.py#L162) in the `SparseDataFrame._init_dict()` function. I don't know why the data frame is constructed by assigning individual columns of a `DataFrame` object. I think the `DataFrame._init_dict` method uses a much more efficient approach.\r\n\r\n#### Output of ``pd.show_versions()``\r\n\r\n<details>\r\nINSTALLED VERSIONS\r\n------------------\r\ncommit: None\r\npython: 3.5.3.final.0\r\npython-bits: 64\r\nOS: Linux\r\nOS-release: 4.10.0-24-generic\r\nmachine: x86_64\r\nprocessor: x86_64\r\nbyteorder: little\r\nLC_ALL: None\r\nLANG: en_US.UTF-8\r\nLOCALE: en_US.UTF-8\r\n\r\npandas: 0.20.2\r\npytest: 3.1.2\r\npip: 9.0.1\r\nsetuptools: 36.0.1\r\nCython: 0.25.2\r\nnumpy: 1.12.1\r\nscipy: 0.19.0\r\nxarray: None\r\nIPython: 6.1.0\r\nsphinx: 1.6.1\r\npatsy: None\r\ndateutil: 2.6.0\r\npytz: 2017.2\r\nblosc: None\r\nbottleneck: None\r\ntables: None\r\nnumexpr: None\r\nfeather: None\r\nmatplotlib: None\r\nopenpyxl: None\r\nxlrd: None\r\nxlwt: None\r\nxlsxwriter: 0.9.6\r\nlxml: None\r\nbs4: 4.6.0\r\nhtml5lib: 0.999999999\r\nsqlalchemy: None\r\npymysql: None\r\npsycopg2: None\r\njinja2: 2.9.6\r\ns3fs: None\r\npandas_gbq: None\r\npandas_datareader: None\r\n\r\n</details>\r\n", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/16883", "file_loc": {"base_commit": "a9421af1aac906cc38d025ed5db4a2b55cb8b9bc", "files": [{"path": "asv_bench/benchmarks/sparse.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [0, 29]}}}, {"path": "doc/source/whatsnew/v0.21.0.txt", "status": "modified", "Loc": {"(None, None, None)": {"add": [137]}}}, {"path": "pandas/core/sparse/frame.py", "status": "modified", "Loc": {"('SparseDataFrame', '_init_dict', 131)": {"mod": [146, 166, 167, 168, 169, 170]}}}, {"path": "pandas/tests/reshape/test_reshape.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [645]}}}, {"path": "pandas/tests/sparse/test_frame.py", "status": "modified", "Loc": {"('TestSparseDataFrame', None, 29)": {"add": [1097]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["asv_bench/benchmarks/sparse.py", "pandas/core/sparse/frame.py"], "doc": ["doc/source/whatsnew/v0.21.0.txt"], "test": ["pandas/tests/reshape/test_reshape.py", "pandas/tests/sparse/test_frame.py"], "config": [], "asset": []}}, 
{"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "ba48fc4a033f11513fa2dd44c946e18b7bc27ad2", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/26058", "iss_label": "Docs\nCI", "title": "DOC: test new sphinx 2 release", "body": "The docs are currently being built with sphinx 1.8.5 (see eg https://travis-ci.org/pandas-dev/pandas/jobs/518832177 for a recent build on master).\r\n\r\nSphinx has released 2.0.0 (http://www.sphinx-doc.org/en/master/changes.html#release-2-0-0-released-mar-29-2019), and it would be good to test our docs with this new release, and see if we need to make changes / report regressions to sphinx.\r\n\r\nFor somebody wanting to tackle this:\r\n- test it locally to see if there are big problems with building the docs\r\n- make a PR that ensures sphinx 2 is installed in the doc environment, so we can check the build log on travis (I am actually not fully sure why it is not yet picking up sphinx 2 on travis, since we don't pin the version in the [travis-36-doc.yaml file](https://github.com/pandas-dev/pandas/blob/a07ed594ec6a5befc967fb1b18244bbeb3bc2bf1/ci/deps/travis-36-doc.yaml#L36)", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/26519", "file_loc": {"base_commit": "ba48fc4a033f11513fa2dd44c946e18b7bc27ad2", "files": [{"path": "pandas/core/indexes/base.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [54]}, "('Index', None, 165)": {"add": [2790]}}}, {"path": "pandas/core/indexes/interval.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [11]}, "('IntervalIndex', None, 127)": {"mod": [808]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["pandas/core/indexes/interval.py", "pandas/core/indexes/base.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "45d8d77f27cf0dbc8cefe932f8fb64f6982b9527", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/10078", "iss_label": "good first issue\nNeeds Tests", "title": "Pandas attempts to convert some strings to timestamps when grouping by a timestamp and aggregating?", "body": "I am working through logs of web requests, and when I want to find the most common, say, user agent string for a (disguised) user, I run something like the following:\n\n```\nfrom pandas import Series, DataFrame, Timestamp\n\ntdf = DataFrame({'day': {0: Timestamp('2015-02-24 00:00:00'), 1: Timestamp('2015-02-24 00:00:00'),\n 2: Timestamp('2015-02-24 00:00:00'), 3: Timestamp('2015-02-24 00:00:00'),\n 4: Timestamp('2015-02-24 00:00:00')},\n 'userAgent': {0: 'some UA string', 1: 'some UA string', 2: 'some UA string',\n 3: 'another UA string', 4: 'some UA string'},\n 'userId': {0: '17661101', 1: '17661101', 2: '17661101', 3: '17661101', 4: '17661101'}})\n\ndef most_common_values(df):\n return Series({c: s.value_counts().index[0] for c,s in df.iteritems()})\n\ntdf.groupby('day').apply(most_common_values)\n```\n\nNote that in this (admittedly unusual) example, all of the lines are identical. I'm not sure if that is necessary to recreate the issue. And, I'm obscuring the exact purpose of this code, but it reproduces the bug: The 'userId' comes back as a Timestamp, not a string. This happens after the function most_common_values returns, since that userId string is not returned as a timestamp. 
If we change the value of the userId to an int:\n\n```\ntdf['userId'] = tdf.userId.astype(int)\n```\n\nor if the value of the associated integer is small enough:\n\n```\ntdf['userId'] = '15320104'\n```\n\nthen the results are what we'd expect (the most common value as its original type is returned).\n\nI imagine that for some reason something like a dateutil parser is being called on strings by default but that probably shouldn't be happening...\n", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/30646", "file_loc": {"base_commit": "45d8d77f27cf0dbc8cefe932f8fb64f6982b9527", "files": [{"path": "pandas/tests/frame/test_constructors.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [2427], "mod": [2]}}}, {"path": "pandas/tests/frame/test_missing.py", "status": "modified", "Loc": {"('TestDataFrameInterpolate', 'test_interp_ignore_all_good', 948)": {"add": [972]}}}, {"path": "pandas/tests/groupby/test_apply.py", "status": "modified", "Loc": {"(None, 'test_apply_datetime_issue', 704)": {"add": [716]}}}, {"path": "pandas/tests/groupby/test_categorical.py", "status": "modified", "Loc": {"(None, 'test_series_groupby_on_2_categoricals_unobserved_zeroes_or_nans', 1309)": {"add": [1332]}}}, {"path": "pandas/tests/groupby/test_groupby.py", "status": "modified", "Loc": {"(None, 'test_groupby_crash_on_nunique', 2011)": {"add": [2025]}}}, {"path": "pandas/tests/indexing/multiindex/test_loc.py", "status": "modified", "Loc": {"(None, 'test_loc_nan_multiindex', 416)": {"add": [439]}}}, {"path": "pandas/tests/indexing/test_loc.py", "status": "modified", "Loc": {"(None, 'test_loc_setitem_float_intindex', 974)": {"add": [985]}}}, {"path": "pandas/tests/io/parser/test_index_col.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [7]}, "(None, 'test_multi_index_naming_not_all_at_beginning', 163)": {"add": [174]}}}, {"path": "pandas/tests/reshape/test_concat.py", "status": "modified", "Loc": {"(None, 'test_concat_datetimeindex_freq', 2719)": {"add": [2732]}}}, {"path": "pandas/tests/reshape/test_pivot.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [1967]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": [], "test": ["pandas/tests/reshape/test_pivot.py", "pandas/tests/groupby/test_apply.py", "pandas/tests/indexing/test_loc.py", "pandas/tests/frame/test_constructors.py", "pandas/tests/indexing/multiindex/test_loc.py", "pandas/tests/groupby/test_groupby.py", "pandas/tests/reshape/test_concat.py", "pandas/tests/frame/test_missing.py", "pandas/tests/io/parser/test_index_col.py", "pandas/tests/groupby/test_categorical.py"], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "636dd01fdacba0c8f0e7b5aaa726165983fc861d", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/21356", "iss_label": "IO JSON\ngood first issue", "title": "JSON nested_to_record Silently Drops Top-Level None Values", "body": "xref https://github.com/pandas-dev/pandas/pull/21164#issuecomment-394510095\r\n\r\n`nested_to_record` is silently dropping `None` values that appear at the top of the JSON. 
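A minimal pure-Python sketch of the expected contract (a hypothetical `flatten` helper, not the pandas implementation):

```python
# Flatten one dict level into dotted keys while preserving None values
# wherever they occur, including at the top level.
def flatten(record, sep="."):
    out = {}
    for key, val in record.items():
        if isinstance(val, dict):
            for sub_key, sub_val in flatten(val, sep).items():
                out[key + sep + sub_key] = sub_val
        else:
            out[key] = val  # None is kept, not silently dropped
    return out

assert flatten({"id": None, "location": {"country": None}}) == {
    "id": None,
    "location.country": None,
}
```

`nested_to_record`, by contrast, drops the top-level `'id'` key entirely, as the code sample below shows.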
This is IMO unexpected and undesirable.\r\n\r\n#### Code Sample, a copy-pastable example if possible\r\n\r\n```python\r\nIn [3]: data = {\r\n ...: \"id\": None,\r\n ...: \"location\": {\r\n ...: \"country\": None\r\n ...: }\r\n ...: }\r\n\r\nIn [5]: nested_to_record(data)\r\nOut[5]: {'location.country': None}\r\n```\r\n#### Problem description\r\n\r\nThe top level `None` value should not be dropped but rather preserved along with lower levels for consistency.\r\n\r\n#### Expected Output\r\n```python\r\nIn [5]: nested_to_record(data)\r\nOut[5]: {'id': None, 'location.country': None}\r\n```\r\n\r\nNote this will break a few tests in `pandas/test_normalize.py`\r\n\r\n#### Output of ``pd.show_versions()``\r\n\r\n<details>\r\n\r\nINSTALLED VERSIONS\r\n------------------\r\ncommit: ab6aaf73a848a8725a23bb880be5221dd5ef5b3d\r\npython: 3.6.4.final.0\r\npython-bits: 64\r\nOS: Darwin\r\nOS-release: 17.5.0\r\nmachine: x86_64\r\nprocessor: i386\r\nbyteorder: little\r\nLC_ALL: None\r\nLANG: en_US.UTF-8\r\nLOCALE: en_US.UTF-8\r\n\r\npandas: 0.24.0.dev0+67.gab6aaf73a\r\npytest: 3.4.1\r\npip: 10.0.1\r\nsetuptools: 38.5.1\r\nCython: 0.27.3\r\nnumpy: 1.14.1\r\nscipy: 1.0.0\r\npyarrow: 0.8.0\r\nxarray: 0.10.0\r\nIPython: 6.2.1\r\nsphinx: 1.7.0\r\npatsy: 0.5.0\r\ndateutil: 2.6.1\r\npytz: 2018.3\r\nblosc: None\r\nbottleneck: 1.2.1\r\ntables: 3.4.2\r\nnumexpr: 2.6.4\r\nfeather: 0.4.0\r\nmatplotlib: 2.1.2\r\nopenpyxl: 2.5.0\r\nxlrd: 1.1.0\r\nxlwt: 1.3.0\r\nxlsxwriter: 1.0.2\r\nlxml: 4.1.1\r\nbs4: 4.6.0\r\nhtml5lib: 1.0.1\r\nsqlalchemy: 1.2.5\r\npymysql: 0.8.0\r\npsycopg2: 2.7.4 (dt dec pq3 ext lo64)\r\njinja2: 2.10\r\ns3fs: 0.1.3\r\nfastparquet: 0.1.4\r\npandas_gbq: 0.4.1\r\npandas_datareader: None\r\n\r\n</details>\r\n", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/21363", "file_loc": {"base_commit": "636dd01fdacba0c8f0e7b5aaa726165983fc861d", "files": [{"path": "doc/source/whatsnew/v0.23.1.txt", "status": "modified", "Loc": {"(None, None, None)": {"add": [33]}}}, {"path": "pandas/io/json/normalize.py", "status": "modified", "Loc": {"(None, 'nested_to_record', 24)": {"mod": [83, 84]}}}, {"path": "pandas/tests/io/json/test_normalize.py", "status": "modified", "Loc": {"('TestNestedToRecord', 'test_nonetype_top_level_bottom_level', 379)": {"add": [397]}, "('TestNestedToRecord', 'test_nonetype_multiple_levels', 406)": {"add": [425]}, "('TestJSONNormalize', 'test_missing_field', 240)": {"mod": [241, 242, 245, 249]}, "('TestNestedToRecord', None, 258)": {"mod": [354, 355, 356]}, "('TestNestedToRecord', 'test_nonetype_dropping', 354)": {"mod": [370]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["pandas/io/json/normalize.py"], "doc": ["doc/source/whatsnew/v0.23.1.txt"], "test": ["pandas/tests/io/json/test_normalize.py"], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "19f715c51d16995fc6cd0c102fdba2f213a83a0f", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/24607", "iss_label": "Missing-data\nComplex", "title": "DES: Should util.is_nan check for complex('nan')?", "body": "It doesn't at the moment. 
A handful of functions in libs.missing _do_ check for complex nan, and could be simplified/de-duplicated if we make util.is_nan also catch the complex case.", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/24628", "file_loc": {"base_commit": "d106e9975100cd0f2080d7b1a6111f20fb64f906", "files": [{"path": "pandas/_libs/missing.pyx", "status": "modified", "Loc": {"(None, None, 15)": {"mod": [15]}, "(None, None, 23)": {"mod": [23, 24, 25, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39]}, "(None, None, 65)": {"mod": [65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76]}, "(None, None, 104)": {"mod": [104, 105, 106, 107, 108]}, "(None, None, 110)": {"mod": [110, 111, 112, 113, 114, 115]}, "(None, None, 131)": {"mod": [131]}, "(None, None, 157)": {"mod": [157]}, "(None, None, 192)": {"mod": [192]}, "(None, None, 302)": {"mod": [302]}, "(None, None, 312)": {"mod": [312]}}}, {"path": "pandas/_libs/tslibs/nattype.pxd", "status": "modified", "Loc": {"(None, None, 20)": {"mod": [20]}}}, {"path": "pandas/_libs/tslibs/nattype.pyx", "status": "modified", "Loc": {"(None, None, 16)": {"add": [16]}, "(None, None, 695)": {"add": [695]}, "(None, None, 704)": {"add": [704]}, "(None, None, 689)": {"mod": [689]}, "(None, None, 701)": {"mod": [701]}, "(None, None, 706)": {"mod": [706]}, "(None, None, 708)": {"mod": [708, 709]}}}, {"path": "pandas/_libs/tslibs/util.pxd", "status": "modified", "Loc": {"(None, None, 218)": {"mod": [218]}, "(None, None, 228)": {"mod": [228]}}}, {"path": "pandas/tests/dtypes/test_missing.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [3], "mod": [10]}, "('TestNAObj', 'test_empty_like', 389)": {"add": [394]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["pandas/_libs/missing.pyx", "pandas/_libs/tslibs/nattype.pyx", "pandas/_libs/tslibs/nattype.pxd", "pandas/_libs/tslibs/util.pxd"], "doc": [], "test": ["pandas/tests/dtypes/test_missing.py"], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "a797b28c87d90a439dfa2c12b4a11e62bf0d6db2", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/7778", "iss_label": "Bug\nDatetime\nDtype Conversions\nTimedelta", "title": "BUG: df.apply handles np.timedelta64 as timestamp, should be timedelta", "body": "I think there may be a bug with the row-wise handling of `numpy.timedelta64` data types when using `DataFrame.apply`. As a check, the problem does not appear when using `DataFrame.applymap`. The problem may be related to #4532, but I'm unsure. I've included an example below.\n\nThis is only a minor problem for my use-case, which is cross-checking timestamps from a counter/timer card. 
I can easily work around the issue with `DataFrame.itertuples` etc.\n\nThank you for your time and for making such a useful package!\n#### Example\n##### Version\n\nImport and check versions.\n\n```\n$ date\nThu Jul 17 16:28:38 CDT 2014\n$ conda update pandas\nFetching package metadata: ..\n# All requested packages already installed.\n# packages in environment at /Users/harrold/anaconda:\n#\npandas 0.14.1 np18py27_0 \n$ ipython\nPython 2.7.8 |Anaconda 2.0.1 (x86_64)| (default, Jul 2 2014, 15:36:00) \nType \"copyright\", \"credits\" or \"license\" for more information.\n\nIPython 2.1.0 -- An enhanced Interactive Python.\nAnaconda is brought to you by Continuum Analytics.\nPlease check out: http://continuum.io/thanks and https://binstar.org\n? -> Introduction and overview of IPython's features.\n%quickref -> Quick reference.\nhelp -> Python's own help system.\nobject? -> Details about 'object', use 'object??' for extra details.\n\nIn [1]: from __future__ import print_function\n\nIn [2]: import numpy as np\n\nIn [3]: import pandas as pd\n\nIn [4]: pd.util.print_versions.show_versions()\n\nINSTALLED VERSIONS\n------------------\ncommit: None\npython: 2.7.8.final.0\npython-bits: 64\nOS: Darwin\nOS-release: 11.4.2\nmachine: x86_64\nprocessor: i386\nbyteorder: little\nLC_ALL: None\nLANG: en_US.UTF-8\n\npandas: 0.14.1\nnose: 1.3.3\nCython: 0.20.1\nnumpy: 1.8.1\nscipy: 0.14.0\nstatsmodels: 0.5.0\nIPython: 2.1.0\nsphinx: 1.2.2\npatsy: 0.2.1\nscikits.timeseries: None\ndateutil: 1.5\npytz: 2014.4\nbottleneck: None\ntables: 3.1.1\nnumexpr: 2.3.1\nmatplotlib: 1.3.1\nopenpyxl: 1.8.5\nxlrd: 0.9.3\nxlwt: 0.7.5\nxlsxwriter: 0.5.5\nlxml: 3.3.5\nbs4: 4.3.1\nhtml5lib: 0.999\nhttplib2: 0.8\napiclient: 1.2\nrpy2: None\nsqlalchemy: 0.9.4\npymysql: None\npsycopg2: None\n```\n##### Create test data\n\nUsing subset of original raw data as example.\n\n```\nIn [5]: datetime_start = np.datetime64(u'2014-05-31T01:23:19.9600345Z')\n\nIn [6]: timedeltas_elapsed = [30053400, 40053249, 50053098]\n```\n\nCompute datetimes from elapsed timedeltas, then create differential timedeltas from datetimes. All elements are either type `numpy.datetime64` or `numpy.timedelta64`.\n\n```\nIn [7]: df = pd.DataFrame(dict(datetimes = timedeltas_elapsed))\n\nIn [8]: df = df.applymap(lambda elt: np.timedelta64(elt, 'us'))\n\nIn [9]: df = df.applymap(lambda elt: np.datetime64(datetime_start + elt))\n\nIn [10]: df['differential_timedeltas'] = df['datetimes'] - df['datetimes'].shift()\n\nIn [11]: print(df)\n datetimes differential_timedeltas\n0 2014-05-31 01:23:50.013434500 NaT\n1 2014-05-31 01:24:00.013283500 00:00:09.999849\n2 2014-05-31 01:24:10.013132500 00:00:09.999849\n```\n##### Expected behavior\n\nWith element-wise handling using `DataFrame.applymap`, all elements are correctly identified as datetimes (timestamps) or timedeltas.\n\n```\nIn [12]: print(df.applymap(lambda elt: type(elt)))\n datetimes differential_timedeltas\n0 <class 'pandas.tslib.Timestamp'> <type 'numpy.timedelta64'>\n1 <class 'pandas.tslib.Timestamp'> <type 'numpy.timedelta64'>\n2 <class 'pandas.tslib.Timestamp'> <type 'numpy.timedelta64'>\n```\n##### Bug\n\nWith row-wise handling using `DataFrame.apply`, all elements are type `pandas.tslib.Timestamp`. 
I expected 'differential_timedeltas' to be type `numpy.timedelta64` or another type of timedelta, not a type of datetime (timestamp).\n\n```\nIn [13]: # For 'datetimes':\n\nIn [14]: print(df.apply(lambda row: type(row['datetimes']), axis=1))\n0 <class 'pandas.tslib.Timestamp'>\n1 <class 'pandas.tslib.Timestamp'>\n2 <class 'pandas.tslib.Timestamp'>\ndtype: object\n\nIn [15]: # For 'differential_timedeltas':\n\nIn [16]: print(df.apply(lambda row: type(row['differential_timedeltas']), axis=1))\n0 <class 'pandas.tslib.NaTType'>\n1 <class 'pandas.tslib.Timestamp'>\n2 <class 'pandas.tslib.Timestamp'>\ndtype: object\n```\n", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/7779", "file_loc": {"base_commit": "a797b28c87d90a439dfa2c12b4a11e62bf0d6db2", "files": [{"path": "doc/source/v0.15.0.txt", "status": "modified", "Loc": {"(None, None, None)": {"add": [189]}}}, {"path": "pandas/core/frame.py", "status": "modified", "Loc": {"('DataFrame', '_apply_standard', 3516)": {"add": [3541], "mod": [3550]}}}, {"path": "pandas/core/internals.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [1292], "mod": [28]}, "('BlockManager', 'as_matrix', 2589)": {"mod": [2598]}, "(None, '_interleaved_dtype', 3620)": {"mod": [3650, 3652, 3673, 3674, 3675, 3676]}}}, {"path": "pandas/core/series.py", "status": "modified", "Loc": {"('Series', None, 89)": {"mod": [240]}, "('Series', 'from_array', 240)": {"mod": [247]}}}, {"path": "pandas/tests/test_frame.py", "status": "modified", "Loc": {"('TestDataFrame', None, 1921)": {"add": [9637]}}}, {"path": "pandas/tests/test_internals.py", "status": "modified", "Loc": {"(None, 'create_block', 34)": {"add": [46, 71], "mod": [44, 70]}, "(None, None, None)": {"mod": [6]}, "('TestBlockManager', 'test_interleave', 558)": {"mod": [559]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["pandas/core/internals.py", "pandas/core/frame.py", "pandas/core/series.py"], "doc": ["doc/source/v0.15.0.txt"], "test": ["pandas/tests/test_frame.py", "pandas/tests/test_internals.py"], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "fcb0263762a31724ba6db39bf1564569dda068a0", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/16991", "iss_label": "Bug\nIndexing", "title": "ValueError on df.columns.isin(pd.Series())", "body": "#### Code Sample, a copy-pastable example if possible\r\n\r\n```python\r\n df = pd.DataFrame(columns=list('ab'))\r\n s1 = pd.Series(['a'])\r\n s2 = pd.Series()\r\n df.columns.isin(s1)\r\n df.columns.isin(s2)\r\n\r\n```\r\n#### Problem description\r\n\r\nThe second call to `df.columns.isin(s2)` fails with \r\n\r\n D:\\Anaconda\\envs\\py3k\\lib\\site-packages\\pandas\\core\\algorithms.py in <lambda>(x, y)\r\n 402 # work-around for numpy < 1.8 and comparisions on py3\r\n 403 # faster for larger cases to use np.in1d\r\n --> 404 f = lambda x, y: htable.ismember_object(x, values)\r\n 405 if (_np_version_under1p8 and compat.PY3) or len(comps) > 1000000:\r\n 406 f = lambda x, y: np.in1d(x, y)\r\n\r\n pandas\\_libs\\hashtable_func_helper.pxi in pandas._libs.hashtable.ismember_object (pandas\\_libs\\hashtable.c:30162)()\r\n\r\n ValueError: Buffer dtype mismatch, expected 'Python object' but got 'double'\r\n\r\n#### Expected Output\r\n\r\n array([ False, False], dtype=bool)\r\n\r\n#### Output of ``pd.show_versions()``\r\n\r\n 
INSTALLED VERSIONS\r\n ------------------\r\n commit: None\r\n python: 3.5.3.final.0\r\n python-bits: 64\r\n OS: Windows\r\n OS-release: 10\r\n machine: AMD64\r\n\r\n pandas: 0.20.3\r\n numpy: 1.13.1\r\n\r\n\r\nMight be linked to [#16394](https://github.com/pandas-dev/pandas/issues/16394)\r\n", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/17006", "file_loc": {"base_commit": "fcb0263762a31724ba6db39bf1564569dda068a0", "files": [{"path": "doc/source/whatsnew/v0.21.0.txt", "status": "modified", "Loc": {"(None, None, None)": {"add": [206]}}}, {"path": "pandas/core/algorithms.py", "status": "modified", "Loc": {"(None, '_ensure_data', 41)": {"add": [67]}}}, {"path": "pandas/tests/frame/test_analytics.py", "status": "modified", "Loc": {"('TestDataFrameAnalytics', None, 27)": {"mod": [1154]}, "('TestDataFrameAnalytics', 'test_isin_empty', 1154)": {"mod": [1156, 1157]}}}, {"path": "pandas/tests/indexes/test_base.py", "status": "modified", "Loc": {"('TestIndex', None, 32)": {"add": [1409]}}}, {"path": "pandas/tests/series/test_analytics.py", "status": "modified", "Loc": {"('TestSeriesAnalytics', None, 35)": {"add": [1137]}}}, {"path": "pandas/tests/test_algos.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [599]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["pandas/core/algorithms.py"], "doc": ["doc/source/whatsnew/v0.21.0.txt"], "test": ["pandas/tests/test_algos.py", "pandas/tests/indexes/test_base.py", "pandas/tests/series/test_analytics.py", "pandas/tests/frame/test_analytics.py"], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "0e8331f85cde8db2841aad92054d8e896e88fcef", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/51236", "iss_label": "Docs\ngood first issue", "title": "DOC fix EX02 errors in docstrings", "body": "pandas has a script for validating docstrings\r\n\r\nhttps://github.com/pandas-dev/pandas/blob/ced983358b06576af1a73c3e936171cc6dc98a6d/ci/code_checks.sh#L560-L568\r\n\r\nwhich can be run with\r\n```\r\n./ci/code_checks.sh docstrings\r\n```\r\n\r\nCurrently, many functions fail the EX02 check, and so are excluded from the check.\r\n\r\nThe task here is:\r\n1. pick 2-3 functions\r\n2. run `./ci/code_checks.sh docstrings`\r\n3. fixup the docstrings according to whatever error is reported\r\n4. stage, commit, push, open pull request \ud83d\ude80 \r\n\r\n**Please don't comment `take` as multiple people can work on this simultaneously**. 
You also don't need to ask for permission to work on this, feel free to just start \ud83d\ude04 Though if you're working on some set of functions you can comment that\r\n\r\nIf you're new here, please check the contributing guide https://pandas.pydata.org/docs/dev/development/contributing.html\r\n\r\nTIP: `./ci/code_checks.sh docstrings` may take a while to run - you may want to comment-out the `docstrings` check which checks `EX01` and the part which checks all the other codes (these are currently lines 86 - 577)", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/51724", "file_loc": {"base_commit": "ce3260110f8f5e17c604e7e1a67ed7f8fb07f5fc", "files": [{"path": "ci/code_checks.sh", "status": "modified", "Loc": {"(None, None, 82)": {"mod": [82, 83]}, "(None, None, 560)": {"mod": [560, 561, 562, 563, 564, 565, 566, 567, 568]}}}, {"path": "pandas/core/dtypes/common.py", "status": "modified", "Loc": {"(None, 'is_datetime64tz_dtype', 309)": {"add": [324, 333]}, "(None, 'is_datetime64_any_dtype', 873)": {"add": [888]}, "(None, 'is_datetime64_ns_dtype', 915)": {"add": [930]}}}, {"path": "pandas/plotting/_core.py", "status": "modified", "Loc": {"('PlotAccessor', None, 613)": {"mod": [992, 993]}}}, {"path": "pandas/plotting/_misc.py", "status": "modified", "Loc": {"(None, 'parallel_coordinates', 391)": {"mod": [450, 451]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["pandas/plotting/_core.py", "pandas/plotting/_misc.py", "pandas/core/dtypes/common.py"], "doc": [], "test": [], "config": [], "asset": ["ci/code_checks.sh"]}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "2e087c7841aec84030fb489cec9bfeb38fe8086f", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/10043", "iss_label": "Indexing", "title": "iloc breaks on read-only dataframe", "body": "This is picking up #9928 again. I don't know if the behavior is expected, but it is a bit odd to me. Maybe I'm doing something wrong, I'm not that familiar with the pandas internals.\n\nWe call `df.iloc[indices]` and that breaks with a read-only dataframe. 
I feel that it shouldn't though, as it is not writing.\n\nMinimal reproducing example:\n\n``` python\nimport pandas as pd\nimport numpy as np\narray = np.eye(10)\narray.setflags(write=False)\n\nX = pd.DataFrame(array)\nX.iloc[[1, 2, 3]]\n```\n\n> ValueError buffer source array is read-only\n\nIs there a way to slice the rows of the dataframe in another way that doesn't need a writeable array?\n", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/10070", "file_loc": {"base_commit": "2e087c7841aec84030fb489cec9bfeb38fe8086f", "files": [{"path": "pandas/src/generate_code.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [148, 170], "mod": [96, 97, 98, 99, 100, 101, 143, 145]}}}, {"path": "pandas/tests/test_common.py", "status": "modified", "Loc": {"('TestTake', '_test_dtype', 631)": {"add": [632]}, "('TestTake', 'test_2d_with_out', 630)": {"mod": [631, 663, 664, 665, 666, 667, 668, 669, 670, 671, 672, 673, 674]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["pandas/src/generate_code.py"], "doc": [], "test": ["pandas/tests/test_common.py"], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "89b3d6b201b5d429a202b5239054d5a70c8b5071", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/38495", "iss_label": "Performance\nRegression", "title": "Major Performance regression of df.groupby(..).indices", "body": "I'm experiencing major performance regressions with pandas=1.1.5 versus 1.1.3\r\n\r\nVersion 1.1.3:\r\n```\r\nPython 3.7.9 | packaged by conda-forge | (default, Dec 9 2020, 20:36:16) [MSC v.1916 64 bit (AMD64)]\r\nType 'copyright', 'credits' or 'license' for more information\r\nIPython 7.19.0 -- An enhanced Interactive Python. Type '?' for help.\r\nPyDev console: using IPython 7.19.0\r\nPython 3.7.9 | packaged by conda-forge | (default, Dec 9 2020, 20:36:16) [MSC v.1916 64 bit (AMD64)] on win32\r\nIn[2]: import time\r\n ... : import numpy as np\r\n ... : import pandas as pd\r\n ... : pd.__version__\r\nOut[2]: '1.1.3'\r\nIn[3]: numel = 10000000\r\n ... : df = pd.DataFrame(dict(a=np.random.rand(numel), b=np.random.randint(0,4000, numel)))\r\n ... : start = time.time()\r\n ... : groupby_indices = df.groupby('b').indices\r\n ... : time.time() - start\r\nOut[3]: 0.46085023880004883\r\n```\r\n\r\nVersion 1.1.5:\r\n```\r\nPython 3.7.9 | packaged by conda-forge | (default, Dec 9 2020, 20:36:16) [MSC v.1916 64 bit (AMD64)]\r\nType 'copyright', 'credits' or 'license' for more information\r\nIPython 7.19.0 -- An enhanced Interactive Python. Type '?' for help.\r\nPyDev console: using IPython 7.19.0\r\nPython 3.7.9 | packaged by conda-forge | (default, Dec 9 2020, 20:36:16) [MSC v.1916 64 bit (AMD64)] on win32\r\nIn[2]: import time\r\n ... : import numpy as np\r\n ... : import pandas as pd\r\n ... : pd.__version__\r\nOut[2]: '1.1.5'\r\nIn[3]: numel = 10000000\r\n ... : df = pd.DataFrame(dict(a=np.random.rand(numel), b=np.random.randint(0,4000, numel)))\r\n ... : start = time.time()\r\n ... : groupby_indices = df.groupby('b').indices\r\n ... 
: time.time() - start\r\nOut[3]: 57.36550998687744\r\n```", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/38892", "file_loc": {"base_commit": "89b3d6b201b5d429a202b5239054d5a70c8b5071", "files": [{"path": "asv_bench/benchmarks/groupby.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [128]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["asv_bench/benchmarks/groupby.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "03e58585036c83ca3d4c86d7d3d7ede955c15130", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/37748", "iss_label": "Bug\nIndexing", "title": "BUG: ValueError is mistakenly raised if a numpy array is assigned to a pd.Series of dtype=object and both have the same length", "body": "- [x] I have checked that this issue has not already been reported.\r\n\r\n- [x] I have confirmed this bug exists on the latest version of pandas.\r\n\r\n- [ ] (optional) I have confirmed this bug exists on the master branch of pandas.\r\n\r\n---\r\n\r\n#### Code Sample, a copy-pastable example\r\n\r\n```python\r\nimport pandas as pd\r\nimport numpy as np\r\n\r\npd.__version__ # '1.1.3'\r\npdseries = pd.Series(index=[1,2,3,4], dtype=object)\r\npdseries.loc[1] = np.zeros(100) # this works fine\r\npdseries.loc[3] = np.zeros(4) # this raises a value error because len(pdseries)==len(np.zeros(4))\r\n```\r\n\r\nTypeError: only size-1 arrays can be converted to Python scalars\r\nThe above exception was the direct cause of the following exception:\r\nTraceback (most recent call last):\r\n File \"/Users/daniel/.conda/envs/production_system/lib/python3.7/site-packages/IPython/core/interactiveshell.py\", line 2878, in run_code\r\n exec(code_obj, self.user_global_ns, self.user_ns)\r\n File \"<ipython-input-40-460230264bf1>\", line 1, in <module>\r\n pdseries.loc[3] = np.zeros(4)\r\n File \"/Users/daniel/.conda/envs/production_system/lib/python3.7/site-packages/pandas/core/indexing.py\", line 670, in __setitem__\r\n iloc._setitem_with_indexer(indexer, value)\r\n File \"/Users/daniel/.conda/envs/production_system/lib/python3.7/site-packages/pandas/core/indexing.py\", line 1802, in _setitem_with_indexer\r\n self.obj._mgr = self.obj._mgr.setitem(indexer=indexer, value=value)\r\n File \"/Users/daniel/.conda/envs/production_system/lib/python3.7/site-packages/pandas/core/internals/managers.py\", line 534, in setitem\r\n return self.apply(\"setitem\", indexer=indexer, value=value)\r\n File \"/Users/daniel/.conda/envs/production_system/lib/python3.7/site-packages/pandas/core/internals/managers.py\", line 406, in apply\r\n applied = getattr(b, f)(**kwargs)\r\n File \"/Users/daniel/.conda/envs/production_system/lib/python3.7/site-packages/pandas/core/internals/blocks.py\", line 887, in setitem\r\n values = values.astype(arr_value.dtype, copy=False)\r\nValueError: setting an array element with a sequence.\r\n\r\n#### Problem description\r\n\r\nIt is possible to assign (numpy) arrays to elements of pandas.Series of dtype=object. 
Unfortunately, in case the array is of the same size as the Series a ValueError is raised.\r\n\r\nHow can one avoid this error?\r\n\r\n#### Expected Output\r\n\r\nThe interesting thing is that the assignment takes place as expected:\r\nIn[42]: pdseries\r\nOut[42]: \r\n1 [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ...\r\n2 NaN\r\n3 [0.0, 0.0, 0.0, 0.0]\r\n4 NaN\r\n\r\nOne might argue that a warning could be useful but an error is misleading and tricky to debug.\r\n\r\n#### Output of ``pd.show_versions()``\r\n\r\n<details>\r\n\r\nINSTALLED VERSIONS\r\n------------------\r\ncommit : db08276bc116c438d3fdee492026f8223584c477\r\npython : 3.7.8.final.0\r\npython-bits : 64\r\nOS : Darwin\r\nOS-release : 19.6.0\r\nVersion : Darwin Kernel Version 19.6.0: Mon Aug 31 22:12:52 PDT 2020; root:xnu-6153.141.2~1/RELEASE_X86_64\r\nmachine : x86_64\r\nprocessor : i386\r\nbyteorder : little\r\nLC_ALL : None\r\nLANG : None\r\nLOCALE : None.UTF-8\r\npandas : 1.1.3\r\nnumpy : 1.19.2\r\npytz : 2020.1\r\ndateutil : 2.8.1\r\npip : 20.2.4\r\nsetuptools : 49.6.0.post20201009\r\nCython : 0.29.21\r\npytest : None\r\nhypothesis : None\r\nsphinx : None\r\nblosc : None\r\nfeather : None\r\nxlsxwriter : None\r\nlxml.etree : None\r\nhtml5lib : None\r\npymysql : None\r\npsycopg2 : 2.8.6 (dt dec pq3 ext lo64)\r\njinja2 : 2.11.2\r\nIPython : 5.8.0\r\npandas_datareader: None\r\nbs4 : None\r\nbottleneck : None\r\nfsspec : None\r\nfastparquet : None\r\ngcsfs : None\r\nmatplotlib : 3.3.2\r\nnumexpr : 2.7.1\r\nodfpy : None\r\nopenpyxl : None\r\npandas_gbq : None\r\npyarrow : None\r\npytables : None\r\npyxlsb : None\r\ns3fs : None\r\nscipy : 1.2.1\r\nsqlalchemy : 1.3.20\r\ntables : 3.6.1\r\ntabulate : None\r\nxarray : None\r\nxlrd : None\r\nxlwt : None\r\nnumba : None\r\n\r\n</details>\r\n", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/38266", "file_loc": {"base_commit": "03e58585036c83ca3d4c86d7d3d7ede955c15130", "files": [{"path": "doc/source/whatsnew/v1.2.0.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [680]}}}, {"path": "pandas/core/indexers.py", "status": "modified", "Loc": {"(None, 'is_scalar_indexer', 68)": {"add": [81]}}}, {"path": "pandas/tests/indexing/test_indexers.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [30]}}}, {"path": "pandas/tests/indexing/test_loc.py", "status": "modified", "Loc": {"('TestLocSeries', 'test_loc_setitem_dt64tz_values', 2054)": {"add": [2074]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["pandas/core/indexers.py"], "doc": ["doc/source/whatsnew/v1.2.0.rst"], "test": ["pandas/tests/indexing/test_indexers.py", "pandas/tests/indexing/test_loc.py"], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "f09d514cf0b09e65baf210a836de04e69b208cef", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/49247", "iss_label": "Bug\nReshaping\nWarnings", "title": "BUG: Getting FutureWarning for Groupby.mean when using .pivot_table", "body": "### Pandas version checks\n\n- [X] I have checked that this issue has not already been reported.\n\n- [X] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.\n\n- [X] I have confirmed this bug exists on the main branch of pandas.\n\n\n### Reproducible Example\n\n```python\nimport pandas as pd\r\ndf = 
pd.DataFrame({\"C1\": [\"a\", \"b\", \"c\"],\r\n \"C2\": [1, 2, 3]})\r\ntable = pd.pivot_table(df, columns=['C2'])\n```\n\n\n### Issue Description\n\nGetting FutureWarning:\r\n\r\n\"<stdin>:1: FutureWarning: The default value of numeric_only in DataFrameGroupBy.mean is deprecated. In a future version, numeric_only will default to False. Either specify numeric_only or select only columns which should be valid for the function.\"\n\n### Expected Behavior\n\npivot_table is internally using DataFrameGroupBy.mean, but does not allow a user to pass a numeric_only argument as suggested in the FutureWarning\n\n### Installed Versions\n\n<details>\r\n\r\nINSTALLED VERSIONS\r\n------------------\r\ncommit : 91111fd99898d9dcaa6bf6bedb662db4108da6e6\r\npython : 3.9.13.final.0\r\npython-bits : 64\r\nOS : Linux\r\nOS-release : 4.4.0-19041-Microsoft\r\nVersion : #1237-Microsoft Sat Sep 11 14:32:00 PST 2021\r\nmachine : x86_64\r\nprocessor : x86_64\r\nbyteorder : little\r\nLC_ALL : None\r\nLANG : C.UTF-8\r\nLOCALE : en_US.UTF-8\r\n\r\npandas : 1.5.1\r\nnumpy : 1.23.4\r\npytz : 2022.5\r\ndateutil : 2.8.2\r\nsetuptools : 65.5.0\r\npip : 22.3\r\nCython : None\r\npytest : 7.1.3\r\nhypothesis : None\r\nsphinx : None\r\nblosc : None\r\nfeather : None\r\nxlsxwriter : None\r\nlxml.etree : None\r\nhtml5lib : None\r\npymysql : None\r\npsycopg2 : None\r\njinja2 : 3.1.2\r\nIPython : 8.5.0\r\npandas_datareader: None\r\nbs4 : 4.11.1\r\nbottleneck : None\r\nbrotli :\r\nfastparquet : None\r\nfsspec : 2022.10.0\r\ngcsfs : None\r\nmatplotlib : 3.6.1\r\nnumba : None\r\nnumexpr : None\r\nodfpy : None\r\nopenpyxl : None\r\npandas_gbq : None\r\npyarrow : None\r\npyreadstat : None\r\npyxlsb : None\r\ns3fs : None\r\nscipy : 1.9.2\r\nsnappy : None\r\nsqlalchemy : None\r\ntables : None\r\ntabulate : None\r\nxarray : None\r\nxlrd : None\r\nxlwt : None\r\nzstandard : None\r\ntzdata : None\r\n\r\n</details>\r\n", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/49615", "file_loc": {"base_commit": "f09d514cf0b09e65baf210a836de04e69b208cef", "files": [{"path": "pandas/core/reshape/pivot.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [23]}, "(None, '__internal_pivot_table', 113)": {"mod": [167]}}}, {"path": "pandas/tests/reshape/test_pivot.py", "status": "modified", "Loc": {"('TestPivotTable', 'test_pivot_table_nocols', 146)": {"mod": [150]}, "('TestPivotTable', 'test_no_col', 909)": {"mod": [914]}, "('TestPivotTable', 'test_margin_with_only_columns_defined', 954)": {"mod": [978]}, "('TestPivotTable', 'test_pivot_string_func_vs_func', 2003)": {"mod": [2007]}}}, {"path": "pandas/util/_exceptions.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [5, 6]}, "(None, 'find_stack_level', 28)": {"add": [49]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["pandas/core/reshape/pivot.py", "pandas/util/_exceptions.py"], "doc": [], "test": ["pandas/tests/reshape/test_pivot.py"], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "e226bacd9e0d69ce3a81abfa09ae850f4610f888", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/8169", "iss_label": "Bug\nGroupby\nDtype Conversions", "title": "BUG: groupby.count() on different dtypes seems buggy", "body": "from 
[SO](http://stackoverflow.com/questions/25648923/groupby-count-returns-different-values-for-pandas-dataframe-count-vs-describ)\n\nsomething odd going on here:\n\n```\nvals = np.hstack((np.random.randint(0,5,(100,2)), np.random.randint(0,2,(100,2))))\ndf = pd.DataFrame(vals, columns=['a', 'b', 'c', 'd'])\ndf[df==2] = np.nan\ndf2 = df.copy()\ndf2['a'] = df2['a'].astype('float32')\ndf2['b'] = df2['b'].astype('float32')\n```\n\n```\ndf.groupby(['c', 'd']).count()\ndf2.groupby(['c','d']).count()\n```\n", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/8171", "file_loc": {"base_commit": "e226bacd9e0d69ce3a81abfa09ae850f4610f888", "files": [{"path": "doc/source/v0.15.0.txt", "status": "modified", "Loc": {"(None, None, None)": {"add": [671]}}}, {"path": "pandas/core/groupby.py", "status": "modified", "Loc": {"(None, '_count_compat', 149)": {"mod": [150, 151, 152, 153]}, "('BaseGrouper', 'aggregate', 1491)": {"mod": [1530, 1537]}, "('NDFrameGroupBy', '_cython_agg_blocks', 2467)": {"mod": [2480]}}}, {"path": "pandas/tests/test_groupby.py", "status": "modified", "Loc": {"('TestGroupBy', None, 62)": {"add": [2216]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["pandas/core/groupby.py"], "doc": ["doc/source/v0.15.0.txt"], "test": ["pandas/tests/test_groupby.py"], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "9ea0d4485e77c95ff0d8766990ab55d43472b66e", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/4312", "iss_label": "Indexing\nDtype Conversions", "title": "BUG: astype assignment via iloc/loc not working", "body": "http://stackoverflow.com/questions/17778139/pandas-unable-to-change-column-data-type/17778560#17778560\n\nThis might be trying to coerce `object` dtype to a real dtype (int/float) and is failing\nShould prob raise for now (or work). 
Not working with iloc/loc.\n\n```\nIn [66]: df = DataFrame([['1','2','3','.4',5,6.,'foo']],columns=list('ABCDEFG'))\n\nIn [67]: df.dtypes\nOut[67]: \nA object\nB object\nC object\nD object\nE int64\nF float64\nG object\ndtype: object\n\nIn [68]: df.iloc[:,0:3] = df.iloc[:,0:3].astype(int)\n\nIn [69]: df.dtypes\nOut[69]: \nA object\nB object\nC object\nD object\nE int64\nF float64\nG object\ndtype: object\n\nIn [70]: df.iloc[:,0:3] = df.iloc[:,0:3].convert_objects(convert_numeric=True)\n\nIn [71]: df.dtypes\nOut[71]: \nA object\nB object\nC object\nD object\nE int64\nF float64\nG object\ndtype: object\n\n```\n", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/4624", "file_loc": {"base_commit": "9ea0d4485e77c95ff0d8766990ab55d43472b66e", "files": [{"path": "doc/source/release.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [267]}}}, {"path": "pandas/core/common.py", "status": "modified", "Loc": {"(None, '_possibly_downcast_to_dtype', 960)": {"add": [989]}, "(None, '_maybe_upcast_indexer', 895)": {"mod": [895, 896, 897, 898, 900, 901, 903, 904, 905, 906, 907, 908, 909, 911, 912, 914, 915, 916, 918, 919, 920, 922, 923, 924, 925, 927]}}}, {"path": "pandas/core/groupby.py", "status": "modified", "Loc": {"('SeriesGroupBy', 'transform', 1521)": {"mod": [1560]}}}, {"path": "pandas/core/indexing.py", "status": "modified", "Loc": {"('_NDFrameIndexer', 'setter', 126)": {"mod": [127, 128, 129, 130, 131]}}}, {"path": "pandas/core/internals.py", "status": "modified", "Loc": {"('Block', None, 29)": {"add": [41], "mod": [456]}, "('DatetimeBlock', None, 1106)": {"add": [1106]}, "('DatetimeBlock', '_try_coerce_args', 1131)": {"add": [1139], "mod": [1136, 1137, 1138]}, "(None, None, None)": {"add": [1440]}, "('Block', '_try_cast_result', 456)": {"mod": [459]}, "('Block', 'setitem', 512)": {"mod": [516, 517, 518, 520, 521, 522, 523, 524, 525, 526, 527, 528, 530, 531, 532, 533, 534, 536]}, "('Block', 'create_block', 565)": {"mod": [588]}, "('NumericBlock', None, 841)": {"mod": [845, 846]}, "('DatetimeBlock', '_can_hold_element', 1119)": {"mod": [1122, 1123]}}}, {"path": "pandas/tests/test_common.py", "status": "modified", "Loc": {"(None, 'test_nan_to_nat_conversions', 121)": {"mod": [130, 131, 132, 137]}}}, {"path": "pandas/tests/test_frame.py", "status": "modified", "Loc": {"('TestDataFrame', 'test_where', 7642)": {"mod": [7675]}}}, {"path": "pandas/tests/test_indexing.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [1199]}, "('TestIndexing', 'test_ix_assign_column_mixed', 955)": {"mod": [967, 968, 969, 970, 971]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["pandas/core/internals.py", "pandas/core/common.py", "pandas/core/indexing.py", "pandas/core/groupby.py"], "doc": ["doc/source/release.rst"], "test": ["pandas/tests/test_common.py", "pandas/tests/test_indexing.py", "pandas/tests/test_frame.py"], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "70435eba769c6bcf57332306455eb70db9fa1111", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/40730", "iss_label": "Bug\ncut\nNA - MaskedArrays", "title": "BUG: qcut fails with Float64Dtype", "body": "- [x] I have checked that this issue has not already been reported.\r\n\r\n- [x] I have confirmed this bug exists on the latest version of pandas.\r\n\r\n- [ ] (optional) I have 
confirmed this bug exists on the master branch of pandas.\r\n\r\n---\r\n\r\n#### Code Sample, a copy-pastable example\r\n\r\n```python\r\nseries = pd.Series([1.0, 2.0, 3.0, 4.4], dtype=pd.Float64Dtype())\r\npd.qcut(series, 2)\r\n```\r\n\r\n#### Problem description\r\n`pd.qcut` currently accepts the nullable `Int64Dtype` as well as `'float64'`, so I would expect it to work with the `Float64Dtype` as well. Instead the following error is produced:\r\n\r\n```python-traceback\r\n---------------------------------------------------------------------------\r\nIndexError Traceback (most recent call last)\r\n<ipython-input-29-1db98f70db38> in <module>\r\n 1 series = pd.Series([1.0,2.0,3.0,4.0], dtype=pd.Float64Dtype())\r\n----> 2 pd.qcut(series, 2)\r\n\r\n~/.pyenv/versions/3.8.2/envs/woodwork/lib/python3.8/site-packages/pandas/core/reshape/tile.py in qcut(x, q, labels, retbins, precision, duplicates)\r\n 356 quantiles = q\r\n 357 bins = algos.quantile(x, quantiles)\r\n--> 358 fac, bins = _bins_to_cuts(\r\n 359 x,\r\n 360 bins,\r\n\r\n~/.pyenv/versions/3.8.2/envs/woodwork/lib/python3.8/site-packages/pandas/core/reshape/tile.py in _bins_to_cuts(x, bins, right, labels, precision, include_lowest, dtype, duplicates, ordered)\r\n 408 \r\n 409 if include_lowest:\r\n--> 410 ids[x == bins[0]] = 1\r\n 411 \r\n 412 na_mask = isna(x) | (ids == len(bins)) | (ids == 0)\r\n\r\nIndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices\r\n```\r\n\r\n#### Expected Output\r\nShould match that of `float64`\r\n\r\n```\r\n0 (0.999, 2.5]\r\n1 (0.999, 2.5]\r\n2 (2.5, 4.0]\r\n3 (2.5, 4.0]\r\ndtype: category\r\nCategories (2, interval[float64]): [(0.999, 2.5] < (2.5, 4.0]]\r\n```\r\n\r\n\r\n#### Output of ``pd.show_versions()``\r\n\r\n<details>\r\n\r\nINSTALLED VERSIONS\r\n------------------\r\ncommit : f2c8480af2f25efdbd803218b9d87980f416563e\r\npython : 3.8.2.final.0\r\npython-bits : 64\r\nOS : Darwin\r\nOS-release : 19.6.0\r\nVersion : Darwin Kernel Version 19.6.0: Sun Jul 5 00:43:10 PDT 2020; root:xnu-6153.141.1~9/RELEASE_X86_64\r\nmachine : x86_64\r\nprocessor : i386\r\nbyteorder : little\r\nLC_ALL : None\r\nLANG : en_US.UTF-8\r\nLOCALE : en_US.UTF-8\r\n\r\npandas : 1.2.3\r\nnumpy : 1.19.5\r\npytz : 2021.1\r\ndateutil : 2.8.1\r\npip : 21.0.1\r\nsetuptools : 41.2.0\r\nCython : None\r\npytest : 6.0.1\r\nhypothesis : None\r\nsphinx : 3.2.1\r\nblosc : None\r\nfeather : None\r\nxlsxwriter : None\r\nlxml.etree : None\r\nhtml5lib : None\r\npymysql : None\r\npsycopg2 : None\r\njinja2 : 2.11.3\r\nIPython : 7.18.1\r\npandas_datareader: None\r\nbs4 : None\r\nbottleneck : None\r\nfsspec : 0.8.7\r\nfastparquet : None\r\ngcsfs : None\r\nmatplotlib : None\r\nnumexpr : None\r\nodfpy : None\r\nopenpyxl : None\r\npandas_gbq : None\r\npyarrow : 3.0.0\r\npyxlsb : None\r\ns3fs : None\r\nscipy : 1.6.2\r\nsqlalchemy : None\r\ntables : None\r\ntabulate : None\r\nxarray : None\r\nxlrd : None\r\nxlwt : None\r\nnumba : None\r\n\r\n</details>\r\n", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/40969", "file_loc": {"base_commit": "70435eba769c6bcf57332306455eb70db9fa1111", "files": [{"path": "doc/source/whatsnew/v1.3.0.rst", "status": "modified", "Loc": {"(None, None, None)": {"mod": [698]}}}, {"path": "pandas/core/reshape/tile.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [28], "mod": [27]}, "(None, '_coerce_to_type', 468)": {"mod": [491]}}}, {"path": "pandas/tests/reshape/test_qcut.py", "status": "modified", "Loc": {"(None, 
'test_qcut_nullable_integer', 296)": {"mod": [296, 297]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["pandas/core/reshape/tile.py"], "doc": ["doc/source/whatsnew/v1.3.0.rst"], "test": ["pandas/tests/reshape/test_qcut.py"], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "38afa9310040f1bd4fb122008e96fe6d719b12a2", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/19787", "iss_label": "Missing-data\nCategorical\nClean\ngood first issue", "title": "Clean: Categorical.fillna NaN in categories checking", "body": "We don't allow NaN in the categories anymore, so this block should be unreachable.\r\n\r\nhttps://github.com/pandas-dev/pandas/blob/8bfcddc7728deaf8e840416d83c8feda86630d27/pandas/core/arrays/categorical.py#L1622-L1628\r\n\r\nIf anyone wants to remove it and test things out.", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/19880", "file_loc": {"base_commit": "38afa9310040f1bd4fb122008e96fe6d719b12a2", "files": [{"path": ".gitignore", "status": "modified", "Loc": {"(None, None, None)": {"add": [63], "mod": [93]}}}, {"path": "pandas/core/arrays/categorical.py", "status": "modified", "Loc": {"('Categorical', 'fillna', 1590)": {"mod": [1630, 1631, 1632, 1633, 1634, 1635, 1636]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["pandas/core/arrays/categorical.py"], "doc": [], "test": [], "config": [".gitignore"], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "2dad23f766790510d09e66f1e02b57a395d479b1", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/9570", "iss_label": "Enhancement\nTimedelta", "title": "timedelta string conversion requires two-digit hour value", "body": "`Timedelta('00:00:00')` works fine whereas `Timedelta('0:00:00')` raises an error. 
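The single-digit hour form is exactly what the standard library emits; a quick illustration:

```python
from datetime import timedelta

# str() of a stdlib timedelta uses a single-digit hour, the exact shape
# that Timedelta('0:00:00') rejects.
print(str(timedelta(0)))        # 0:00:00
print(str(timedelta(hours=1)))  # 1:00:00
```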
Unsure whether to call this a bug, but under some circumstances the `datetime` module in pure python will produce time delta strings without the leading 0.\n", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/9868", "file_loc": {"base_commit": "2dad23f766790510d09e66f1e02b57a395d479b1", "files": [{"path": "doc/source/whatsnew/v0.16.1.txt", "status": "modified", "Loc": {"(None, None, None)": {"add": [52]}}}, {"path": "pandas/tseries/tests/test_timedeltas.py", "status": "modified", "Loc": {"('TestTimedeltas', 'test_construction', 35)": {"add": [66]}}}, {"path": "pandas/tseries/timedeltas.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [122]}, "(None, 'convert', 190)": {"mod": [212, 216]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["pandas/tseries/timedeltas.py"], "doc": ["doc/source/whatsnew/v0.16.1.txt"], "test": ["pandas/tseries/tests/test_timedeltas.py"], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "b03df731095154e94d23db51d11df5dd736622f8", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/3925", "iss_label": "Datetime", "title": "Access DateTimeIndexed dataframe by timestamp", "body": "Hello, \n\nI am new to pandas and thanks for this great library!\n\nI have a data frame like this: \n\n```\nGold_2012.head()\n\n open high low close volume\ndate_time \n2012-01-02 18:01:00 1571.0 1571.0 1569.1 1569.8 351\n2012-01-02 18:02:00 1569.8 1570.0 1569.7 1569.8 54\n2012-01-02 18:03:00 1570.0 1570.0 1569.1 1569.9 247\n2012-01-02 18:04:00 1570.0 1570.0 1569.8 1569.9 55\n2012-01-02 18:05:00 1569.8 1569.9 1568.5 1568.5 48\n```\n\nI am trying to access the first element of this dataframe. If I use loc function, everything works out:\n\n```\nGold_2012.loc[Gold_2012.index[0]]\n\n\nopen 1571.0\nhigh 1571.0\nlow 1569.1\nclose 1569.8\nvolume 351.0\nName: 2012-01-02 18:01:00-06:00, dtype: float64\n```\n\nBut if I do something like this, an error is thrown. 
Is this expected?\n\n```\nGold_2012[Gold_2012.index[0]]\n```\n\n---\n\nKeyError Traceback (most recent call last)\n<ipython-input-30-bb7117766fdd> in <module>()\n----> 1 Gold_2012[Gold_2012.index[0]]\n\n/Users/chen/Virtualenvs/python3Env/lib/python3.3/site-packages/pandas/core/frame.py in **getitem**(self, key)\n 1926 else:\n 1927 # get column\n-> 1928 return self._get_item_cache(key)\n 1929 \n 1930 def _getitem_slice(self, key):\n\n/Users/chen/Virtualenvs/python3Env/lib/python3.3/site-packages/pandas/core/generic.py in _get_item_cache(self, item)\n 568 return cache[item]\n 569 except Exception:\n--> 570 values = self._data.get(item)\n 571 res = self._box_item_values(item, values)\n 572 cache[item] = res\n\n/Users/chen/Virtualenvs/python3Env/lib/python3.3/site-packages/pandas/core/internals.py in get(self, item)\n 1382 \n 1383 def get(self, item):\n-> 1384 _, block = self._find_block(item)\n 1385 return block.get(item)\n 1386 \n\n/Users/chen/Virtualenvs/python3Env/lib/python3.3/site-packages/pandas/core/internals.py in _find_block(self, item)\n 1524 \n 1525 def _find_block(self, item):\n-> 1526 self._check_have(item)\n 1527 for i, block in enumerate(self.blocks):\n 1528 if item in block:\n\n/Users/chen/Virtualenvs/python3Env/lib/python3.3/site-packages/pandas/core/internals.py in _check_have(self, item)\n 1531 def _check_have(self, item):\n 1532 if item not in self.items:\n-> 1533 raise KeyError('no item named %s' % com.pprint_thing(item))\n 1534 \n 1535 def reindex_axis(self, new_axis, method=None, axis=0, copy=True):\n\nKeyError: 'no item named 2012-01-02 18:01:00-06:00'\n", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/3931", "file_loc": {"base_commit": "b03df731095154e94d23db51d11df5dd736622f8", "files": [{"path": "RELEASE.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [256, 357, 360]}}}, {"path": "pandas/core/indexing.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [2]}}}, {"path": "pandas/tseries/index.py", "status": "modified", "Loc": {"('DatetimeIndex', '_partial_date_slice', 1070)": {"add": [1104, 1112]}}}, {"path": "pandas/tseries/tests/test_timeseries.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [253]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["pandas/tseries/index.py", "pandas/core/indexing.py"], "doc": ["RELEASE.rst"], "test": ["pandas/tseries/tests/test_timeseries.py"], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "f231c9a74a544ec94cd12e813cb2543fb5a18556", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/35331", "iss_label": "good first issue\nNeeds Tests", "title": "BUG: np.argwhere on pandas series", "body": "- [x] I have checked that this issue has not already been reported.\r\n\r\n- [x] I have confirmed this bug exists on the latest version of pandas.\r\n\r\n- [ ] (optional) I have confirmed this bug exists on the master branch of pandas.\r\n\r\n---\r\n\r\nnumpy/numpy#15555 reports an issue with `np.argwhere` on pandas Series. 
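A workaround in the meantime, assuming only the integer positions are needed, is to hand numpy a plain ndarray:

```python
import numpy as np
import pandas as pd

s = pd.Series(np.random.randn(5), index=['a', 'b', 'c', 'd', 'e'])

# With a bare ndarray mask, numpy never tries to rebuild a Series around
# argwhere's (k, 1)-shaped result, so no length mismatch is raised.
positions = np.argwhere(s.to_numpy() < 0)
```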
Reporting here for visibility.\r\n\r\nMRE:\r\n```python\r\n>>> import numpy as np\r\n>>> import pandas as pd\r\n>>> s = pd.Series(np.random.randn(5), index=['a', 'b', 'c', 'd', 'e'])\r\n>>> np.argwhere(s < 0)\r\n```\r\nwhich, with `numpy.__version__ ==1.20.0.dev0+046a736` gives:\r\n**pd.__version__ == 0.25.3:**\r\n```\r\nFutureWarning: Series.nonzero() is deprecated and will be removed in a future version.Use Series.to_numpy().nonzero() instead\r\narray([[3]])\r\n```\r\n**pd.__version__ == 1.0.5:**\r\n```\r\nValueError: Length of passed values is 1, index implies 5.\r\n```\r\n", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/53381", "file_loc": {"base_commit": "f231c9a74a544ec94cd12e813cb2543fb5a18556", "files": [{"path": "pandas/tests/series/test_npfuncs.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [5, 7]}, "(None, 'test_numpy_unique', 19)": {"add": [21]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": [], "test": ["pandas/tests/series/test_npfuncs.py"], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "5de6b84f5117b005a8f010d4510a758b50f3d14e", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/12081", "iss_label": "Reshaping\nError Reporting", "title": "DataFrame.merge with Series should give nice error message", "body": "Right now trying this results in \"IndexError: list index out of range\". It should say can't merge DataFrame with a Series...\n\nI have known about this for quite a while now, but I still get trapped by it every once in a while. This would be very helpful for beginners.\n\nOther people also get confused: http://stackoverflow.com/questions/27281734/pandas-merge-on-index-not-working\n", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/12112", "file_loc": {"base_commit": "5de6b84f5117b005a8f010d4510a758b50f3d14e", "files": [{"path": "doc/source/whatsnew/v0.18.0.txt", "status": "modified", "Loc": {"(None, None, None)": {"add": [205]}}}, {"path": "pandas/tools/merge.py", "status": "modified", "Loc": {"('_MergeOperation', '__init__', 157)": {"add": [186]}}}, {"path": "pandas/tools/tests/test_merge.py", "status": "modified", "Loc": {"('TestMerge', None, 45)": {"add": [263]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["pandas/tools/merge.py"], "doc": ["doc/source/whatsnew/v0.18.0.txt"], "test": ["pandas/tools/tests/test_merge.py"], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "a3c0e7bcfb8bbe9ca45df7e571a305d403e0f066", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/44597", "iss_label": "API Design\nDeprecate", "title": "API/DEPR: int downcasting in DataFrame.where", "body": "\r\n\r\n`Block.where` has special downcasting logic that splits blocks differently from any other Block method. 
I would like to deprecate and eventually remove this bespoke logic.\r\n\r\nThe relevant logic is only reached AFAICT when we have integer dtype (non-int64) and an integer `other` too big for this dtype, AND the passed `cond` has all-`True` columns.\r\n\r\n(Identifying the affected behavior is difficult in part because it relies on `can_hold_element` incorrectly returning `True` in these cases)\r\n\r\n```\r\nimport numpy as np\r\nimport pandas as pd\r\n\r\narr = np.arange(6).astype(np.int16).reshape(3, 2)\r\ndf = pd.DataFrame(arr)\r\n\r\nmask = np.zeros(arr.shape, dtype=bool)\r\nmask[:, 0] = True\r\n\r\nres = df.where(mask, 2**17)\r\n\r\n>>> res.dtypes\r\n0 int16\r\n1 int32\r\ndtype: object\r\n```\r\n\r\nThe simplest thing to do would be to not do any downcasting in these cases, in which case we would end up with all-int32. The next simplest would be to downcast column-wise, which would give the same end result but with less consolidation.\r\n\r\nWe do not have any test cases that fail if I disable this downcasting (after I fix a problem with an expressions.where call that the downcasting somehow makes irrelevant). This makes me think the current behavior is not intentional, or at least not a priority.\r\n\r\nAny objection to deprecating the integer downcasting entirely?\r\n", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/45009", "file_loc": {"base_commit": "a3c0e7bcfb8bbe9ca45df7e571a305d403e0f066", "files": [{"path": "doc/source/whatsnew/v1.4.0.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [547]}}}, {"path": "pandas/core/internals/blocks.py", "status": "modified", "Loc": {"('Block', 'where', 1138)": {"add": [1229]}}}, {"path": "pandas/tests/frame/indexing/test_where.py", "status": "modified", "Loc": {"('TestDataFrameIndexingWhere', 'test_where_axis', 464)": {"add": [501], "mod": [503]}, "(None, None, None)": {"add": [719]}, "('TestDataFrameIndexingWhere', None, 50)": {"mod": [101, 464]}, "('TestDataFrameIndexingWhere', 'test_where_alignment', 101)": {"mod": [144]}}}, {"path": "pandas/tests/frame/methods/test_clip.py", "status": "modified", "Loc": {"('TestDataFrameClip', None, 11)": {"mod": [139]}, "('TestDataFrameClip', 'test_clip_with_na_args', 139)": {"mod": [154]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["pandas/core/internals/blocks.py"], "doc": ["doc/source/whatsnew/v1.4.0.rst"], "test": ["pandas/tests/frame/methods/test_clip.py", "pandas/tests/frame/indexing/test_where.py"], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "32f789fbc5d5a72d9d1ac14935635289eeac9009", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/52151", "iss_label": "Bug\nGroupby\nCategorical", "title": "BUG: Inconsistent behavior with `groupby/min` and `observed=False` on categoricals between 2.0 and 2.1", "body": "### Pandas version checks\r\n\r\n- [X] I have checked that this issue has not already been reported.\r\n\r\n- [X] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.\r\n\r\n- [X] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.\r\n\r\n\r\n### Reproducible Example\r\n\r\n```python\r\nimport pandas as pd\r\nimport numpy as 
np\r\n\r\ndf = pd.DataFrame({\r\n \"cat_1\": pd.Categorical(list(\"AB\"), categories=list(\"ABCDE\"), ordered=True),\r\n \"cat_2\": pd.Categorical([1, 2], categories=[1, 2, 3], ordered=True),\r\n \"value_1\": np.random.uniform(size=2),\r\n})\r\n\r\nchunk1 = df[df.cat_1 == \"A\"]\r\nchunk2 = df[df.cat_1 == \"B\"]\r\n\r\ndf1 = chunk1.groupby(\"cat_1\", observed=False).min()\r\ndf2 = chunk2.groupby(\"cat_1\", observed=False).min()\r\ndf3 = pd.concat([df1, df2], ignore_index=False)\r\n\r\nres3 = df3.groupby(level=0, observed=False).min()\r\nprint(f\"\\n{res3}\")\r\n```\r\n\r\n\r\n### Issue Description\r\n\r\nWhen performing a `groupby/min` with a categorical dtype and `observed=False`, the results differ between `1.5.3` (and `2.0`) and 2.1.\r\n\r\nOutput with 1.5.3 or 2.0:\r\n\r\n```python\r\n cat_2 value_1\r\ncat_1\r\nA 1 0.384993\r\nB 2 0.955231\r\nC NaN NaN\r\nD NaN NaN\r\nE NaN NaN\r\n```\r\n\r\nOutput with the latest `main`:\r\n\r\n```python\r\n cat_2 value_1\r\ncat_1\r\nA 1 0.297557\r\nB 1 0.081856\r\nC 1 NaN\r\nD 1 NaN\r\nE 1 NaN\r\n```\r\n\r\nThe change can be traced to this PR:\r\n\r\n* https://github.com/pandas-dev/pandas/pull/52120\r\n\r\n### Expected Behavior\r\n\r\nI'm not sure if the changed behavior is intended. Please advise.\r\n\r\n### Installed Versions\r\n\r\n<details>\r\n\r\ncommit : d22d1f2db0bc7846f679b2b0a572216f23fa83cc\r\npython : 3.8.16.final.0\r\npython-bits : 64\r\nOS : Darwin\r\nOS-release : 22.3.0\r\nVersion : Darwin Kernel Version 22.3.0: Thu Jan 5 20:50:36 PST 2023; root:xnu-8792.81.2~2/RELEASE_ARM64_T6020\r\nmachine : arm64\r\nprocessor : arm\r\nbyteorder : little\r\nLC_ALL : None\r\nLANG : en_US.UTF-8\r\nLOCALE : en_US.UTF-8\r\n\r\npandas : 2.1.0.dev0+293.gd22d1f2db0\r\nnumpy : 1.23.5\r\npytz : 2022.7.1\r\ndateutil : 2.8.2\r\nsetuptools : 67.4.0\r\npip : 23.0.1\r\nCython : 0.29.33\r\npytest : 7.2.1\r\nhypothesis : 6.68.2\r\nsphinx : 4.5.0\r\nblosc : None\r\nfeather : None\r\nxlsxwriter : 3.0.8\r\nlxml.etree : 4.9.2\r\nhtml5lib : 1.1\r\npymysql : 1.0.2\r\npsycopg2 : 2.9.3\r\njinja2 : 3.1.2\r\nIPython : 8.11.0\r\npandas_datareader: None\r\nbs4 : 4.11.2\r\nbottleneck : 1.3.6\r\nbrotli :\r\nfastparquet : 2023.2.0\r\nfsspec : 2023.1.0\r\ngcsfs : 2023.1.0\r\nmatplotlib : 3.6.3\r\nnumba : 0.56.4\r\nnumexpr : 2.8.3\r\nodfpy : None\r\nopenpyxl : 3.1.0\r\npandas_gbq : None\r\npyarrow : 11.0.0\r\npyreadstat : 1.2.1\r\npyxlsb : 1.0.10\r\ns3fs : 2023.1.0\r\nscipy : 1.10.1\r\nsnappy :\r\nsqlalchemy : 2.0.4\r\ntables : 3.7.0\r\ntabulate : 0.9.0\r\nxarray : 2023.1.0\r\nxlrd : 2.0.1\r\nzstandard : 0.19.0\r\ntzdata : None\r\nqtpy : None\r\npyqt5 : None\r\n\r\n</details>\r\n", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/52236", "file_loc": {"base_commit": "32f789fbc5d5a72d9d1ac14935635289eeac9009", "files": [{"path": "pandas/core/groupby/ops.py", "status": "modified", "Loc": {"('WrappedCythonOp', '_ea_wrap_cython_operation', 358)": {"add": [404]}}}, {"path": "pandas/tests/groupby/test_min_max.py", "status": "modified", "Loc": {"(None, 'test_min_max_nullable_uint64_empty_group', 235)": {"add": [249]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["pandas/core/groupby/ops.py"], "doc": [], "test": ["pandas/tests/groupby/test_min_max.py"], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "8924277fa3dbe775f46e679ab8bd97b293e465ea", "iss_has_pr": 1, 
"iss_html_url": "https://github.com/pandas-dev/pandas/issues/41556", "iss_label": "Bug\nGroupby\nAlgos", "title": "BUG: groupby.shift return keys filled with `fill_value` when `fill_value` is specified", "body": "- [x] I have checked that this issue has not already been reported.\r\n\r\n- [x] I have confirmed this bug exists on the latest version of pandas.\r\n\r\n- [ ] (optional) I have confirmed this bug exists on the master branch of pandas.\r\n\r\n---\r\n\r\n**Note**: Please read [this guide](https://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) detailing how to provide the necessary information for us to reproduce your bug.\r\n\r\n#### Code Sample, a copy-pastable example\r\n\r\n```python\r\nIn [2]: df = pd.DataFrame({'a': [2, 1, 2, 1], 'b': ['x', 'x', 'y', 'y']})\r\n\r\nIn [3]: df.groupby('a').shift(1)\r\nOut[3]: \r\n b\r\n0 NaN\r\n1 NaN\r\n2 x\r\n3 x\r\n\r\nIn [4]: df.groupby('a').shift(1, fill_value='fill')\r\nOut[4]: \r\n a b\r\n0 fill fill\r\n1 fill fill\r\n2 2 x\r\n3 1 x\r\n```\r\n\r\n#### Problem description\r\nWhen specifying `fill_value` in `groupby.shift`, the returned result includes the key column with keys filled with `fill_value`. When `fill_value` is unspecified (None), the key column is not included.\r\n\r\n#### Expected Output\r\nIt seems pretty strange that keys are to be filled with `fill_value`. This makes more sense to me:\r\n```python\r\nIn [4]: df.groupby('a').shift(1, fill_value='fill')\r\nOut[4]: \r\n b\r\n0 fill\r\n1 fill\r\n2 x\r\n3 x\r\n```\r\n\r\n\r\n#### Output of ``pd.show_versions()``\r\n\r\n<details>\r\n\r\n\r\nINSTALLED VERSIONS\r\n------------------\r\ncommit : 2cb96529396d93b46abab7bbc73a208e708c642e\r\npython : 3.7.10.final.0\r\npython-bits : 64\r\nOS : Linux\r\nOS-release : 4.15.0-76-generic\r\nVersion : #86-Ubuntu SMP Fri Jan 17 17:24:28 UTC 2020\r\nmachine : x86_64\r\nprocessor : x86_64\r\nbyteorder : little\r\nLC_ALL : None\r\nLANG : None\r\nLOCALE : en_US.UTF-8\r\n\r\npandas : 1.2.4\r\nnumpy : 1.20.2\r\npytz : 2021.1\r\ndateutil : 2.8.1\r\npip : 21.1.1\r\nsetuptools : 52.0.0.post20210125\r\nCython : 0.29.23\r\npytest : 6.2.4\r\nhypothesis : 6.12.0\r\nsphinx : 3.5.4\r\nblosc : None\r\nfeather : None\r\nxlsxwriter : None\r\nlxml.etree : None\r\nhtml5lib : None\r\npymysql : None\r\npsycopg2 : None\r\njinja2 : 2.11.3\r\nIPython : 7.23.1\r\npandas_datareader: None\r\nbs4 : None\r\nbottleneck : None\r\nfsspec : 2021.04.0\r\nfastparquet : None\r\ngcsfs : None\r\nmatplotlib : None\r\nnumexpr : None\r\nodfpy : None\r\nopenpyxl : None\r\npandas_gbq : None\r\npyarrow : 1.0.1\r\npyxlsb : None\r\ns3fs : None\r\nscipy : None\r\nsqlalchemy : None\r\ntables : None\r\ntabulate : None\r\nxarray : None\r\nxlrd : None\r\nxlwt : None\r\nnumba : 0.53.1\r\n</details>\r\n", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/41858", "file_loc": {"base_commit": "8924277fa3dbe775f46e679ab8bd97b293e465ea", "files": [{"path": "asv_bench/benchmarks/groupby.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [371]}}}, {"path": "doc/source/whatsnew/v1.4.0.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [170, 264]}}}, {"path": "pandas/core/groupby/groupby.py", "status": "modified", "Loc": {"('GroupBy', '_get_cythonized_result', 2809)": {"add": [2824, 2874]}, "('GroupBy', 'shift', 2997)": {"add": [3034], "mod": [3025]}, "('GroupBy', 'blk_func', 2908)": {"mod": [2949]}}}, {"path": "pandas/tests/groupby/test_groupby_shift_diff.py", "status": "modified", "Loc": {"(None, 'test_group_shift_with_fill_value', 41)": {"mod": 
[58]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["asv_bench/benchmarks/groupby.py", "pandas/core/groupby/groupby.py"], "doc": ["doc/source/whatsnew/v1.4.0.rst"], "test": ["pandas/tests/groupby/test_groupby_shift_diff.py"], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "92093457ca13ba037257d0b8d41735268535c84f", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/3573", "iss_label": "Bug\nOutput-Formatting", "title": "Unintuitive default behavior with wide DataFrames in the IPython notebook", "body": "In the IPython notebook, HTML output is the default, and whether the summary view is displayed should not be governed by a hypothetical line width. I ran into this problem in a demo recently and it took me a minute to figure out what was wrong; definitely a bad change in 0.11. \n", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/3663", "file_loc": {"base_commit": "0ed4549ac857fbf2c7e975acdf1d987bacc3ea32", "files": [{"path": "RELEASE.rst", "status": "modified", "Loc": {"(None, None, 65)": {"add": [65]}, "(None, None, 87)": {"add": [87]}, "(None, None, 141)": {"add": [141]}}}, {"path": "doc/source/faq.rst", "status": "modified", "Loc": {"(None, None, 38)": {"mod": [38, 39]}, "(None, None, 48)": {"mod": [48, 49, 50, 51]}}}, {"path": "pandas/core/common.py", "status": "modified", "Loc": {"(None, 'in_qtconsole', 1895)": {"add": [1906]}, "(None, 'in_ipnb_frontend', 1908)": {"mod": [1908]}}}, {"path": "pandas/core/config_init.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [129, 250], "mod": [123, 141, 142, 143, 144]}}}, {"path": "pandas/core/format.py", "status": "modified", "Loc": {"(None, 'get_console_size', 1702)": {"mod": [1705, 1721]}}}, {"path": "pandas/core/frame.py", "status": "modified", "Loc": {"('DataFrame', '_repr_fits_vertical_', 606)": {"add": [613], "mod": [608, 609, 610, 612, 615, 616, 618, 619, 620, 621, 622, 624]}, "('DataFrame', '_repr_fits_horizontal_', 624)": {"add": [628, 642], "mod": [636, 639, 640, 648, 649, 651, 652, 653, 658]}, "('DataFrame', '_repr_html_', 729)": {"add": [733], "mod": [739]}, "('DataFrame', '__unicode__', 682)": {"mod": [700, 701, 702, 703, 704, 705, 706, 707]}}}, {"path": "pandas/tests/test_format.py", "status": "modified", "Loc": {"('TestDataFrameFormatting', 'test_repr_max_columns_max_rows', 203)": {"add": [241], "mod": [236, 238]}, "('TestDataFrameFormatting', 'test_wide_repr_multiindex_cols', 854)": {"add": [855], "mod": [859, 860, 861]}, "('TestDataFrameFormatting', 'test_expand_frame_repr', 167)": {"mod": [173, 174]}, "('TestDataFrameFormatting', 'test_wide_repr', 787)": {"mod": [790]}, "('TestDataFrameFormatting', 'test_wide_repr_named', 810)": {"mod": [813]}, "('TestDataFrameFormatting', 'test_wide_repr_multiindex', 831)": {"mod": [836]}, "('TestDataFrameFormatting', 'test_wide_repr_unicode', 876)": {"mod": [879]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["pandas/core/common.py", "pandas/core/frame.py", "pandas/core/config_init.py", "pandas/core/format.py"], "doc": ["RELEASE.rst", "doc/source/faq.rst"], "test": ["pandas/tests/test_format.py"], "config": [], "asset": []}}, {"organization": "pandas-dev",
"repo_name": "pandas", "base_commit": "a214915e241ea15f3d072d54930d0e0c8f42ee10", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/19482", "iss_label": "Dtype Conversions\nError Reporting\nNumeric Operations", "title": "Rank With 'method=first' Broken for Objects", "body": "Came across this working on #15779\r\n\r\n\r\n```python\r\nIn []: df = pd.DataFrame({'key': ['a'] * 5, 'val': ['bar', 'bar', 'foo', 'bar', 'baz']})\r\nIn []: df.groupby('key').rank(method='first')\r\n\r\nOut []: \r\nEmpty DataFrame\r\nColumns: []\r\nIndex: []\r\n\r\n```\r\n\r\n#### Expected Output\r\n\r\n```python\r\n\r\nOut[]: \r\n val\r\n0 1.0\r\n1 2.0\r\n2 5.0\r\n3 3.0\r\n4 4.0\r\n\r\n```\r\n\r\n#### Output of ``pd.show_versions()``\r\n\r\n<details>\r\n\r\nINSTALLED VERSIONS\r\n------------------\r\ncommit: d3f7d2a666aa824e2df98083aa5c1fd9bb63252e\r\npython: 3.6.3.final.0\r\npython-bits: 64\r\nOS: Darwin\r\nOS-release: 17.4.0\r\nmachine: x86_64\r\nprocessor: i386\r\nbyteorder: little\r\nLC_ALL: None\r\nLANG: en_US.UTF-8\r\nLOCALE: en_US.UTF-8\r\n\r\npandas: 0.23.0.dev0+169.gd3f7d2a66.dirty\r\npytest: 3.2.1\r\npip: 9.0.1\r\nsetuptools: 36.5.0.post20170921\r\nCython: 0.26.1\r\nnumpy: 1.13.3\r\nscipy: 1.0.0\r\npyarrow: 0.8.0\r\nxarray: 0.10.0\r\nIPython: 6.2.1\r\nsphinx: 1.6.3\r\npatsy: 0.4.1\r\ndateutil: 2.6.1\r\npytz: 2017.2\r\nblosc: None\r\nbottleneck: 1.2.1\r\ntables: 3.4.2\r\nnumexpr: 2.6.4\r\nfeather: 0.4.0\r\nmatplotlib: 2.1.1\r\nopenpyxl: 2.5.0b1\r\nxlrd: 1.1.0\r\nxlwt: 1.3.0\r\nxlsxwriter: 1.0.2\r\nlxml: 4.1.1\r\nbs4: 4.6.0\r\nhtml5lib: 1.0.1\r\nsqlalchemy: 1.1.13\r\npymysql: 0.7.11.None\r\npsycopg2: None\r\njinja2: 2.10\r\ns3fs: 0.1.2\r\nfastparquet: 0.1.3\r\npandas_gbq: None\r\npandas_datareader: None\r\n\r\n</details>\r\n", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/19481", "file_loc": {"base_commit": "a214915e241ea15f3d072d54930d0e0c8f42ee10", "files": [{"path": "doc/source/whatsnew/v0.23.0.txt", "status": "modified", "Loc": {"(None, None, None)": {"add": [583]}}}, {"path": "pandas/_libs/algos.pxd", "status": "modified", "Loc": {"(None, None, None)": {"add": [13]}}}, {"path": "pandas/_libs/algos.pyx", "status": "modified", "Loc": {"(None, None, None)": {"mod": [34, 35, 36, 37, 38, 39, 40]}}}, {"path": "pandas/_libs/groupby.pyx", "status": "modified", "Loc": {"(None, None, None)": {"mod": [19, 20]}}}, {"path": "pandas/_libs/groupby_helper.pxi.in", "status": "modified", "Loc": {"(None, None, None)": {"add": [446]}}}, {"path": "pandas/core/groupby.py", "status": "modified", "Loc": {"('GroupBy', None, 1147)": {"add": [1770]}, "('BaseGrouper', None, 1926)": {"add": [2185], "mod": [2245, 2376, 2377]}, "('_GroupBy', None, 551)": {"mod": [997]}, "('_GroupBy', '_cython_transform', 997)": {"mod": [1005, 1010]}, "('BaseGrouper', '_cython_operation', 2245)": {"mod": [2317, 2318, 2320, 2337]}, "('BaseGrouper', '_transform', 2396)": {"mod": [2397, 2409, 2411]}}}, {"path": "pandas/tests/groupby/test_groupby.py", "status": "modified", "Loc": {"('TestGroupBy', None, 37)": {"add": [1897]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["pandas/_libs/groupby.pyx", "pandas/_libs/groupby_helper.pxi.in", "pandas/_libs/algos.pyx", "pandas/core/groupby.py", "pandas/_libs/algos.pxd"], "doc": ["doc/source/whatsnew/v0.23.0.txt"], "test": ["pandas/tests/groupby/test_groupby.py"], "config": [], "asset": []}}, {"organization": 
"pandas-dev", "repo_name": "pandas", "base_commit": "679dbd021eccc238e422057009365e2ee1c04b25", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/21687", "iss_label": "Docs\nUsage Question\nAlgos\nWindow", "title": "\"on\" argument of DataFrame.rolling only works for datetime columns", "body": "the `on=` argument of `DataFrame.rolling` only works for datetime columns.\r\n\r\n```\r\ndf = pd.DataFrame([\r\n [18, 0],\r\n [2, 0],\r\n [1, 0],\r\n [9, 1],\r\n [8, 1],\r\n], columns=['value', 'roll'])\r\n```\r\n\r\n```\r\ndf.roll = pd.to_datetime(df.roll, unit='s')\r\ndf.rolling('1s', on='roll').value.max()\r\n```\r\n\r\nreturns:\r\n\r\n```\r\n0 18.0\r\n1 18.0\r\n2 18.0\r\n3 9.0\r\n4 9.0\r\nName: value, dtype: float64\r\n```\r\nas expected.\r\n\r\nBut \r\n\r\n```df.rolling(1, on='roll').value.max()```\r\n\r\nreturns:\r\n\r\n```\r\n0 18.0\r\n1 2.0\r\n2 1.0\r\n3 9.0\r\n4 8.0\r\nName: value, dtype: float64\r\n```\r\n\r\nIf this is intentional behavior, I'd be happy to change the docs to note this (the docs currently imply that `on=` can be used for any column).\r\n\r\n\r\n\r\n", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/27265", "file_loc": {"base_commit": "679dbd021eccc238e422057009365e2ee1c04b25", "files": [{"path": "pandas/core/window.py", "status": "modified", "Loc": {"('Window', None, 489)": {"mod": [516, 517]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["pandas/core/window.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "940104efc9e708bc93744dfaa36c9492b03b1ca4", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/20452", "iss_label": "Reshaping\nAPI Design", "title": "BUG: New feature allowing merging on combination of columns and index levels drops levels of index", "body": "#### Code Sample, a copy-pastable example if possible\r\n\r\n```python\r\nIn [1]: import pandas as pd\r\n\r\nIn [2]: pd.__version__\r\nOut[2]: '0.23.0.dev0+657.g01882ba5b'\r\n\r\nIn [3]: df1 = pd.DataFrame({'v1' : range(12)}, index=pd.MultiIndex.from_product([list('abc'),list('xy'),[1,2]], names=['abc','xy','num']))\r\n ...: df1\r\n ...:\r\nOut[3]:\r\n v1\r\nabc xy num\r\na x 1 0\r\n 2 1\r\n y 1 2\r\n 2 3\r\nb x 1 4\r\n 2 5\r\n y 1 6\r\n 2 7\r\nc x 1 8\r\n 2 9\r\n y 1 10\r\n 2 11\r\n\r\nIn [4]: df2 = pd.DataFrame({'v2': [100*i for i in range(1,7)]}, index=pd.MultiIndex.from_product([list('abc'), list('xy')],names=['abc','xy']))\r\n\r\nIn [5]: df2\r\nOut[5]:\r\n v2\r\nabc xy\r\na x 100\r\n y 200\r\nb x 300\r\n y 400\r\nc x 500\r\n y 600\r\n\r\nIn [6]: df1.merge(df2, on=['abc','xy']) # 'num' disappears\r\nOut[6]:\r\n v1 v2\r\nabc xy\r\na x 0 100\r\n x 1 100\r\n y 2 200\r\n y 3 200\r\nb x 4 300\r\n x 5 300\r\n y 6 400\r\n y 7 400\r\nc x 8 500\r\n x 9 500\r\n y 10 600\r\n y 11 600\r\n\r\nIn [7]: df1.reset_index().merge(df2, on=['abc','xy']) # This preserves 'num'\r\nOut[7]:\r\n abc xy num v1 v2\r\n0 a x 1 0 100\r\n1 a x 2 1 100\r\n2 a y 1 2 200\r\n3 a y 2 3 200\r\n4 b x 1 4 300\r\n5 b x 2 5 300\r\n6 b y 1 6 400\r\n7 b y 2 7 400\r\n8 c x 1 8 500\r\n9 c x 2 9 500\r\n10 c y 1 10 600\r\n11 c y 2 11 600\r\n\r\nIn [8]: df1.merge(df2, on='xy') # 'abc' and 'num' disappear\r\nOut[8]:\r\n v1 v2\r\nxy\r\nx 0 100\r\nx 0 300\r\nx 0 500\r\nx 1 100\r\nx 1 300\r\nx 1 500\r\nx 4 100\r\nx 4 300\r\nx 4 500\r\nx 5 100\r\nx 5 
300\r\nx 5 500\r\nx 8 100\r\nx 8 300\r\nx 8 500\r\nx 9 100\r\nx 9 300\r\nx 9 500\r\ny 2 200\r\ny 2 400\r\ny 2 600\r\ny 3 200\r\ny 3 400\r\ny 3 600\r\ny 6 200\r\ny 6 400\r\ny 6 600\r\ny 7 200\r\ny 7 400\r\ny 7 600\r\ny 10 200\r\ny 10 400\r\ny 10 600\r\ny 11 200\r\ny 11 400\r\ny 11 600\r\n\r\n```\r\n#### Problem description\r\n\r\nIt seems that the new feature implemented in #17484 that allows merging on a combination of columns and index levels can drop index levels, which is really non-intuitive. In the first example, the index level named \"num\" gets dropped, while in the last example, both \"abc\" and \"xy\" are dropped.\r\n\r\nIf this is the desired behavior, then it needs to be carefully documented.\r\n\r\nN.B. There is also an error in the docs of merging.rst that says this feature was introduced in v.0.22, but it will be introduced in v0.23\r\n\r\nI'm guessing @jmmease will need to look at this.\r\n\r\n#### Expected Output\r\n\r\n```python\r\nIn [6]: df1.merge(df2, on=['abc','xy'])\r\nOut[6]:\r\n v1 v2\r\nabc xy num\r\na x 1 0 100\r\n 2 1 100\r\n y 1 2 200\r\n 2 3 200\r\nb x 1 4 300\r\n 2 5 300\r\n y 1 6 400\r\n 2 7 400\r\nc x 1 8 500\r\n 2 9 500\r\n y 1 10 600\r\n 2 11 600\r\n\r\nIn [8]: df1.merge(df2, on='xy')\r\nOut[8]:\r\n abc_x num v1 abc_y v2\r\nxy\r\nx a 1 0 a 100\r\nx a 1 0 b 300\r\nx a 1 0 c 500\r\nx a 2 1 a 100\r\nx a 2 1 b 300\r\nx a 2 1 c 500\r\nx b 1 4 a 100\r\nx b 1 4 b 300\r\nx b 1 4 c 500\r\nx b 2 5 a 100\r\nx b 2 5 b 300\r\nx b 2 5 c 500\r\nx c 1 8 a 100\r\nx c 1 8 b 300\r\nx c 1 8 c 500\r\nx c 2 9 a 100\r\nx c 2 9 b 300\r\nx c 2 9 c 500\r\ny a 1 2 a 200\r\ny a 1 2 b 400\r\ny a 1 2 c 600\r\ny a 2 3 a 200\r\ny a 2 3 b 400\r\ny a 2 3 c 600\r\ny b 1 6 a 200\r\ny b 1 6 b 400\r\ny b 1 6 c 600\r\ny b 2 7 a 200\r\ny b 2 7 b 400\r\ny b 2 7 c 600\r\ny c 1 10 a 200\r\ny c 1 10 b 400\r\ny c 1 10 c 600\r\ny c 2 11 a 200\r\ny c 2 11 b 400\r\ny c 2 11 c 600\r\n```\r\n\r\n#### Output of ``pd.show_versions()``\r\n\r\n<details>\r\n\r\nINSTALLED VERSIONS\r\n------------------\r\ncommit: None\r\npython: 3.6.4.final.0\r\npython-bits: 64\r\nOS: Windows\r\nOS-release: 10\r\nmachine: AMD64\r\nprocessor: Intel64 Family 6 Model 60 Stepping 3, GenuineIntel\r\nbyteorder: little\r\nLC_ALL: None\r\nLANG: None\r\nLOCALE: None.None\r\n\r\npandas: 0.23.0.dev0+657.g01882ba5b\r\npytest: 3.4.0\r\npip: 9.0.1\r\nsetuptools: 38.5.1\r\nCython: 0.25.1\r\nnumpy: 1.14.1\r\nscipy: 1.0.0\r\npyarrow: 0.8.0\r\nxarray: None\r\nIPython: 6.2.1\r\nsphinx: 1.7.1\r\npatsy: 0.5.0\r\ndateutil: 2.6.1\r\npytz: 2018.3\r\nblosc: 1.5.1\r\nbottleneck: 1.2.1\r\ntables: 3.4.2\r\nnumexpr: 2.6.4\r\nfeather: None\r\nmatplotlib: 2.2.0\r\nopenpyxl: 2.5.0\r\nxlrd: 1.1.0\r\nxlwt: 1.3.0\r\nxlsxwriter: 1.0.2\r\nlxml: 4.1.1\r\nbs4: 4.6.0\r\nhtml5lib: 1.0.1\r\nsqlalchemy: 1.2.5\r\npymysql: 0.8.0\r\npsycopg2: None\r\njinja2: 2.10\r\ns3fs: 0.1.3\r\nfastparquet: None\r\npandas_gbq: None\r\npandas_datareader: None\r\n\r\n</details>\r\n", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/20475", "file_loc": {"base_commit": "940104efc9e708bc93744dfaa36c9492b03b1ca4", "files": [{"path": "doc/source/merging.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [1202], "mod": [1136, 1137, 1141, 1142, 1143, 1146, 1164]}}}, {"path": "doc/source/whatsnew/v0.24.0.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [1543]}}}, {"path": "pandas/core/reshape/merge.py", "status": "modified", "Loc": {"('_MergeOperation', '_maybe_add_join_keys', 646)": {"add": [717]}}}, {"path": "pandas/tests/reshape/merge/test_join.py", 
"status": "modified", "Loc": {"(None, None, None)": {"add": [732]}}}, {"path": "pandas/tests/reshape/merge/test_merge.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [1400, 1401, 1402, 1403, 1404, 1405, 1406, 1407, 1408, 1409]}, "(None, 'test_merge_series', 1409)": {"mod": [1419]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["pandas/core/reshape/merge.py"], "doc": ["doc/source/whatsnew/v0.24.0.rst", "doc/source/merging.rst"], "test": ["pandas/tests/reshape/merge/test_merge.py", "pandas/tests/reshape/merge/test_join.py"], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "13940c7f3c0371d6799bbd88b9c6546392b418a1", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/35650", "iss_label": "good first issue\nNeeds Tests", "title": "BUG: pd.factorize with read-only datetime64 numpy array raises ValueError", "body": "- [x] I have checked that this issue has not already been reported.\r\n\r\n- [x] I have confirmed this bug exists on the latest version of pandas.\r\n\r\n- [ ] (optional) I have confirmed this bug exists on the master branch of pandas.\r\n\r\n---\r\n\r\n**Note**: Please read [this guide](https://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) detailing how to provide the necessary information for us to reproduce your bug.\r\n\r\n#### Code Sample, a copy-pastable example\r\n\r\n```python\r\n\r\nIn [1]: pandas.__version__\r\n[PYFLYBY] import pandas\r\nOut[1]: u'0.24.2'\r\n\r\nIn [2]: arr = numpy.array([numpy.datetime64('2015-11-20T15:06:58.000')])\r\n\r\nIn [3]: arr.dtype\r\nOut[3]: dtype('<M8[ms]')\r\n\r\nIn [4]: arr.flags.writeable = False\r\n\r\n[PYFLYBY] import pandas as pd\r\nIn [5]: pd.factorize(arr)\r\n```\r\n\r\n#### Problem description\r\n\r\n[Construction with non-mutable datetime64 strings]\r\n\r\n#### Expected Output\r\n(array([0]), array(['2015-11-20T15:06:58.000000000'], dtype='datetime64[ns]'))\r\n\r\n#### Output of ``pd.show_versions()``\r\n\r\n<details>\r\npandas/_libs/tslibs/conversion.pyx in pandas._libs.tslibs.conversion.ensure_datetime64ns()\r\n\r\n/usr/local/python/python-2.7/std/lib/python2.7/site-packages/pandas/_libs/tslibs/conversion.so in View.MemoryView.memoryview_cwrapper()\r\n\r\n/usr/local/python/python-2.7/std/lib/python2.7/site-packages/pandas/_libs/tslibs/conversion.so in View.MemoryView.memoryview.__cinit__()\r\n\r\nValueError: buffer source array is read-only\r\n</details>\r\n", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/35775", "file_loc": {"base_commit": "13940c7f3c0371d6799bbd88b9c6546392b418a1", "files": [{"path": "pandas/tests/test_algos.py", "status": "modified", "Loc": {"('TestFactorize', 'test_object_factorize', 245)": {"add": [253]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": [], "test": ["pandas/tests/test_algos.py"], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "816f94575c9ec1af2169a28536217c4d16dd6b4b", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/16033", "iss_label": "Docs", "title": "DOC: styler warnings in doc-build", "body": 
"https://travis-ci.org/pandas-dev/pandas/jobs/222779268\r\n\r\n```\r\n/tmp/doc/source/generated/pandas.io.formats.style.Styler.rst:74: WARNING: failed to import template:\r\n/tmp/doc/source/generated/pandas.io.formats.style.Styler.rst:74: WARNING: toctree references unknown document 'generated/template:'\r\n```\r\n\r\ncc @TomAugspurger @jorisvandenbossche \r\n\r\nI just pushed a change to fix the path of the imports (after ``pandas.formats`` change), but I think it still needs something.", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/16094", "file_loc": {"base_commit": "f0bd908336a260cafa9d83c8244dd1a0a056f72d", "files": [{"path": "pandas/tests/io/formats/test_css.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [2]}, "(None, 'test_css_parse_strings', 46)": {"mod": [48, 49, 50, 51]}, "(None, 'test_css_parse_invalid', 79)": {"mod": [80]}, "(None, 'test_css_side_shorthands', 99)": {"mod": [118]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": [], "test": ["pandas/tests/io/formats/test_css.py"], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "2067d7e306ae720d455f356e4da21f282a8a762e", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/35811", "iss_label": "Bug\nUsage Question\nAPI Design\nSeries", "title": "BUG/QST: Series.transform with a dictionary", "body": "What is the expected output of passing a dictionary to `Series.transform`? For example:\r\n\r\n s = pd.Series([1, 2, 3])\r\n result1 = s.transform({'a': lambda x: x + 1})\r\n result2 = s.transform({'a': lambda x: x + 1, 'b': lambda x: x + 2})\r\n\r\nThe docs say that `dict of axis labels -> functions` is acceptable, but I can't find any example in the docs where the output is described/shown. 
Under the hood, `Series.transform` is just calling `Series.aggregate` which produces the following outputs for `result1` and `result2`.\r\n\r\n````\r\n# result1\r\na 0 2\r\n 1 3\r\n 2 4\r\ndtype: int64\r\n\r\n# result2\r\na 0 2\r\n 1 3\r\n 2 4\r\nb 0 3\r\n 1 4\r\n 2 5\r\ndtype: int64\r\n````\r\n\r\n`result1` is deemed acceptable (the length of the result equals the length of the input) and is returned, but `result2` raises; it is not a transformation.\r\n\r\nI am wondering if a better return would be a DataFrame where the keys are the column names ('a' and 'b' in this example).", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/35964", "file_loc": {"base_commit": "2067d7e306ae720d455f356e4da21f282a8a762e", "files": [{"path": "doc/source/whatsnew/v1.2.0.rst", "status": "modified", "Loc": {"(None, None, None)": {"add": [344]}}}, {"path": "pandas/core/aggregation.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [23], "mod": [21]}, "(None, 'validate_func_kwargs', 353)": {"add": [386]}}}, {"path": "pandas/core/base.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [7]}, "('SelectionMixin', None, 132)": {"mod": [563]}}}, {"path": "pandas/core/frame.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [47], "mod": [119]}, "('DataFrame', None, 341)": {"mod": [7464, 7468, 7469, 7470, 7471, 7472]}}}, {"path": "pandas/core/generic.py", "status": "modified", "Loc": {"('NDFrame', None, 168)": {"mod": [10651, 10652, 10653, 10654, 10656, 10658, 10659, 10660, 10661, 10662, 10664, 10666, 10667, 10668, 10669, 10670, 10671, 10672, 10673, 10674, 10676, 10677, 10678, 10679, 10681, 10682, 10683, 10685, 10686, 10687, 10688, 10690, 10691, 10692, 10693, 10694, 10695, 10696, 10697, 10698, 10699, 10700, 10701, 10702, 10704, 10705, 10707, 10708, 10709, 10710, 10711, 10712, 10713, 10714, 10715, 10716, 10717, 10718, 10719, 10720, 10721, 10723]}}}, {"path": "pandas/core/series.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [27, 91]}, "('Series', None, 141)": {"mod": [4084, 4088, 4089, 4090, 4091]}}}, {"path": "pandas/core/shared_docs.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [259]}}}, {"path": "pandas/tests/frame/apply/test_frame_transform.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [1, 7], "mod": [6]}, "(None, 'test_agg_transform', 11)": {"add": [13, 14], "mod": [11, 12, 16, 17, 19, 20, 21, 22, 24, 25, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51]}, "(None, 'test_transform_method_name', 67)": {"add": [72], "mod": [69]}, "(None, 'test_transform_and_agg_err', 54)": {"mod": [54, 55, 56, 57, 58, 60, 61, 62, 63]}}}, {"path": "pandas/tests/series/apply/test_series_apply.py", "status": "modified", "Loc": {"('TestSeriesAggregate', 'test_transform', 203)": {"add": [213, 221], "mod": [212]}}}, {"path": "pandas/tests/series/apply/test_series_transform.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [5], "mod": [4]}, "(None, 'test_transform_none_to_type', 53)": {"add": [59], "mod": [55, 57, 58]}, "(None, 'test_transform', 8)": {"mod": [8, 9, 13, 20, 21, 22, 23, 24, 26, 27, 29, 30, 32, 33, 34, 35, 36, 37]}, "(None, 'test_transform_and_agg_error', 41)": {"mod": [41, 43, 47]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["pandas/core/aggregation.py", "pandas/core/base.py", 
"pandas/core/generic.py", "pandas/core/frame.py", "pandas/core/series.py", "pandas/core/shared_docs.py"], "doc": ["doc/source/whatsnew/v1.2.0.rst"], "test": ["pandas/tests/series/apply/test_series_apply.py", "pandas/tests/frame/apply/test_frame_transform.py", "pandas/tests/series/apply/test_series_transform.py"], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "889c2ff67af14213e8ed065df2957b07e34ac95b", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/33810", "iss_label": "Testing\nIO Parquet", "title": "TST: add Feather V2 round-trip test", "body": "no that pyarrow 0.17 has landed, we should have a round-trip Feather V2 test to ensure we have dtype preservation (we can likely re-use some of our test frames from the parquet tests).", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/33422", "file_loc": {"base_commit": "889c2ff67af14213e8ed065df2957b07e34ac95b", "files": [{"path": "doc/source/conf.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [418]}}}, {"path": "doc/source/user_guide/io.rst", "status": "modified", "Loc": {"(None, None, None)": {"mod": [4586, 4588, 4589, 4595, 4596]}}}, {"path": "doc/source/whatsnew/v1.1.0.rst", "status": "modified", "Loc": {"(None, None, None)": {"mod": [91]}}}, {"path": "pandas/core/frame.py", "status": "modified", "Loc": {"('DataFrame', 'to_feather', 2061)": {"add": [2068], "mod": [2063, 2072]}, "('DataFrame', None, 324)": {"mod": [2061]}}}, {"path": "pandas/io/feather_format.py", "status": "modified", "Loc": {"(None, 'to_feather', 10)": {"add": [17, 18], "mod": [10, 12, 61]}}}, {"path": "pandas/tests/io/test_feather.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [6]}, "('TestFeather', 'test_basic', 52)": {"add": [73]}, "('TestFeather', 'test_path_localpath', 147)": {"add": [150]}, "('TestFeather', None, 21)": {"mod": [30]}, "('TestFeather', 'check_round_trip', 30)": {"mod": [36, 38]}, "('TestFeather', 'test_unsupported_other', 103)": {"mod": [105, 106]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["pandas/io/feather_format.py", "pandas/core/frame.py", "doc/source/conf.py"], "doc": ["doc/source/user_guide/io.rst", "doc/source/whatsnew/v1.1.0.rst"], "test": ["pandas/tests/io/test_feather.py"], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "b6691127523f965003dbf877a358c81af5012989", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/15630", "iss_label": "Numeric Operations\nAlgos", "title": "Pandas (0.18) Rank: unexpected behavior for method = 'dense' and pct = True", "body": "I find the behavior of rank function with method = 'dense' and pct = True unexpected as it looks like, in order to calculate percentile ranks, the function is using the total number of observations instead of the number of _distinct_ observations.\r\n\r\n#### Code Sample, a copy-pastable example if possible\r\n\r\n```\r\nimport pandas as pd\r\nn_rep = 2\r\nts = pd.Series([1,2,3,4] * n_rep )\r\noutput = ts.rank(method = 'dense', pct = True)\r\n```\r\n\r\n#### Problem description\r\n\r\n```\r\nts.rank(method = 'dense', pct = True)\r\nOut[116]: \r\n0 0.125\r\n1 0.250\r\n2 0.375\r\n3 0.500\r\n4 0.125\r\n5 0.250\r\n6 0.375\r\n7 0.500\r\n```\r\n\r\n#### Expected Output\r\nSomething similar 
to:\r\n\r\n```\r\npd.Series([1,2,3,4] * 2).rank(method = 'dense', pct = True) * n_rep \r\nOut[118]: \r\n0 0.25\r\n1 0.50\r\n2 0.75\r\n3 1.00\r\n4 0.25\r\n5 0.50\r\n6 0.75\r\n7 1.00\r\n```\r\n\r\nAlso, I would expect the result above to be invariant to n_rep.\r\ni.e. I would expect a \"mapping\" {value -> pct_rank} that would not depend on how many times the value is repeated, which is not the case here.\r\n", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/15639", "file_loc": {"base_commit": "b6691127523f965003dbf877a358c81af5012989", "files": [{"path": "doc/source/whatsnew/v0.23.0.txt", "status": "modified", "Loc": {"(None, None, None)": {"add": [909]}}}, {"path": "pandas/_libs/algos_rank_helper.pxi.in", "status": "modified", "Loc": {"(None, None, None)": {"mod": [216, 388]}}}, {"path": "pandas/tests/frame/test_rank.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [6, 13], "mod": [3, 4, 5, 8, 10, 12]}, "('TestRank', 'test_rank_2d_tie_methods', 247)": {"add": [268]}}}, {"path": "pandas/tests/series/test_rank.py", "status": "modified", "Loc": {"('TestSeriesRank', 'test_rank_modify_inplace', 370)": {"add": [378]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["pandas/_libs/algos_rank_helper.pxi.in"], "doc": ["doc/source/whatsnew/v0.23.0.txt"], "test": ["pandas/tests/series/test_rank.py", "pandas/tests/frame/test_rank.py"], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "95be01dbc060f405b7928cc6e4ba4d6d6181c22a", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/13420", "iss_label": "Groupby\nCategorical", "title": "DataFrame.groupby(grp, axis=1) with categorical grp breaks", "body": "While attempting to use `pd.qcut` (which returned a Categorical) to bin some data in groups for plotting, I encountered the following error. The idea is to group a DataFrame by columns (`axis=1`) using a Categorical.\n#### Minimal breaking example\n\n```\n>>> import pandas\n>>> df = pandas.DataFrame({'a':[1,2,3,4], 'b':[-1,-2,-3,-4], 'c':[5,6,7,8]})\n>>> df\n a b c\n0 1 -1 5\n1 2 -2 6\n2 3 -3 7\n3 4 -4 8\n>>> grp = pandas.Categorical([1,0,1])\n>>> df.groupby(grp, axis=1).mean()\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"/home/ntawolf/anaconda3/lib/python3.5/site-packages/pandas/core/generic.py\", line 3778, in groupby\n **kwargs)\n File \"/home/ntawolf/anaconda3/lib/python3.5/site-packages/pandas/core/groupby.py\", line 1427, in groupby\n return klass(obj, by, **kwds)\n File \"/home/ntawolf/anaconda3/lib/python3.5/site-packages/pandas/core/groupby.py\", line 354, in __init__\n mutated=self.mutated)\n File \"/home/ntawolf/anaconda3/lib/python3.5/site-packages/pandas/core/groupby.py\", line 2390, in _get_grouper\n raise ValueError(\"Categorical dtype grouper must \"\nValueError: Categorical dtype grouper must have len(grouper) == len(data)\n```\n#### Expected behaviour\n\nSame as\n\n```\n>>> df.T.groupby(grp, axis=0).mean().T\n 0 1\n0 -1 3\n1 -2 4\n2 -3 5\n3 -4 6\n```\n\nSo, it works as expected when doubly transposed. This makes it appear as a bug to me.\n#### Proposed solution\n\nIn [`if is_categorical_dtype(gpr) and len(gpr) != len(obj):`](https://github.com/pydata/pandas/blob/master/pandas/core/groupby.py#L2406), change `len(obj)` to `obj.shape[axis]`.
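\n\nConcretely, the patched check might look something like the following sketch (not tested, variable names as in the linked groupby.py):\n\n```\nif is_categorical_dtype(gpr) and len(gpr) != obj.shape[axis]:\n    raise ValueError(\"Categorical dtype grouper must \"\n                     \"have len(grouper) == data.shape[axis]\")\n```\n\n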
This assumes that `len(obj) == obj.shape[0]` for all `obj`.\n\nSo, supposing you agree that this is a bug, should a test be put in [`test_groupby_categorical`](https://github.com/pydata/pandas/blob/master/pandas/tests/test_groupby.py#L3968)?\n#### output of `pd.show_versions()`\n\n```\nINSTALLED VERSIONS\n------------------\ncommit: None\npython: 3.5.1.final.0\npython-bits: 64\nOS: Linux\nOS-release: 3.19.0-59-generic\nmachine: x86_64\nprocessor: x86_64\nbyteorder: little\nLC_ALL: None\nLANG: en_US.UTF-8\n\npandas: 0.18.1\nnose: 1.3.7\npip: 8.1.2\nsetuptools: 22.0.5\nCython: 0.24\nnumpy: 1.10.4\nscipy: 0.17.1\nstatsmodels: 0.6.1\nxarray: None\nIPython: 4.2.0\nsphinx: 1.4.1\npatsy: 0.4.1\ndateutil: 2.5.3\npytz: 2016.4\nblosc: None\nbottleneck: 1.0.0\ntables: 3.2.2\nnumexpr: 2.5.2\nmatplotlib: 1.5.1\nopenpyxl: 2.3.2\nxlrd: 1.0.0\nxlwt: 1.1.1\nxlsxwriter: 0.8.9\nlxml: 3.6.0\nbs4: 4.4.1\nhtml5lib: None\nhttplib2: None\napiclient: None\nsqlalchemy: 1.0.13\npymysql: None\npsycopg2: None\njinja2: 2.8\nboto: 2.40.0\npandas_datareader: None\n```\n", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/27788", "file_loc": {"base_commit": "54e58039fddc79492e598e85279c42e85d06967c", "files": [{"path": "pandas/tests/groupby/test_categorical.py", "status": "modified", "Loc": {"(None, 'test_seriesgroupby_observed_apply_dict', 1159)": {"add": [1165]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": [], "test": ["pandas/tests/groupby/test_categorical.py"], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "be61825986ba565bc038beb2f5df2750fc1aca30", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/13565", "iss_label": "Docs\nUsage Question\nTimezones", "title": "Call unique() on a timezone aware datetime series returns non timezone aware result", "body": "Calling unique() on a timezone-aware datetime series returns a non-timezone-aware result.
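\n\nA workaround that appears to keep the timezone is de-duplicating with drop_duplicates() instead, since that only subsets the original series and therefore preserves the datetime64[ns, UTC] dtype (sketch, using the ts from the code sample below):\n\nts.drop_duplicates()\n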
\n#### Code Sample\n\nimport pandas as pd\nimport pytz\nimport datetime\n\nIn [242]: ts = pd.Series([datetime.datetime(2011,2,11,20,0,0,0,pytz.utc), datetime.datetime(2011,2,11,20,0,0,0,pytz.utc), datetime.datetime(2011,2,11,21,0,0,0,pytz.utc)])\n\nIn [243]: ts\nOut[243]: \n0 2011-02-11 20:00:00+00:00\n1 2011-02-11 20:00:00+00:00\n2 2011-02-11 21:00:00+00:00\ndtype: datetime64[ns, UTC]\n\nIn [244]: ts.unique()\nOut[244]: array(['2011-02-11T20:00:00.000000000', '2011-02-11T21:00:00.000000000'], dtype='datetime64[ns]')\n#### output of `pd.show_versions()`\n## INSTALLED VERSIONS\n\ncommit: None\npython: 2.7.9.final.0\npython-bits: 64\nOS: Linux\nOS-release: 3.16.0-4-amd64\nmachine: x86_64\nprocessor: \nbyteorder: little\nLC_ALL: None\nLANG: de_AT.UTF-8\n\npandas: 0.18.1\nnose: 1.3.4\npip: 8.1.2\nsetuptools: 22.0.5\nCython: 0.21.1\nnumpy: 1.11.0\nscipy: 0.14.0\nstatsmodels: None\nxarray: None\nIPython: 4.2.0\nsphinx: 1.2.3\npatsy: None\ndateutil: 2.5.3\npytz: 2016.4\nblosc: None\nbottleneck: None\ntables: 3.1.1\nnumexpr: 2.4\nmatplotlib: 1.4.2\nopenpyxl: 2.3.5\nxlrd: 0.9.2\nxlwt: 0.7.4\nxlsxwriter: None\nlxml: 3.6.0\nbs4: None\nhtml5lib: 1.0b3\nhttplib2: 0.9\napiclient: None\nsqlalchemy: 0.9.8\npymysql: None\npsycopg2: None\njinja2: 2.7.3\nboto: None\npandas_datareader: None\n", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/13979", "file_loc": {"base_commit": "be61825986ba565bc038beb2f5df2750fc1aca30", "files": [{"path": "doc/source/whatsnew/v0.19.0.txt", "status": "modified", "Loc": {"(None, None, None)": {"add": [460, 906]}}}, {"path": "pandas/core/base.py", "status": "modified", "Loc": {"(None, None, None)": {"mod": [10, 11, 24]}, "('IndexOpsMixin', None, 799)": {"mod": [955]}, "('IndexOpsMixin', 'unique', 955)": {"mod": [957, 958, 962, 963, 964, 965, 966, 967, 969]}}}, {"path": "pandas/core/series.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [20], "mod": [80]}, "('Series', None, 99)": {"add": [1233]}}}, {"path": "pandas/indexes/base.py", "status": "modified", "Loc": {"('Index', None, 89)": {"add": [3219]}, "(None, None, None)": {"mod": [63]}}}, {"path": "pandas/indexes/category.py", "status": "modified", "Loc": {"('CategoricalIndex', None, 23)": {"add": [285]}}}, {"path": "pandas/tests/indexes/test_category.py", "status": "modified", "Loc": {"('TestCategoricalIndex', 'test_duplicates', 390)": {"add": [397]}}}, {"path": "pandas/tests/indexes/test_multi.py", "status": "modified", "Loc": {"('TestMultiIndex', None, 28)": {"add": [1929]}}}, {"path": "pandas/tests/test_base.py", "status": "modified", "Loc": {"(None, None, None)": {"add": [19]}, "('TestIndexOps', 'test_value_counts_unique_nunique', 450)": {"add": [505, 510, 533, 579], "mod": [455, 457, 458, 459, 461, 462, 463, 465, 466, 467, 469, 471, 472, 473, 475, 476, 477, 479, 480, 483, 490, 491, 502, 504, 507, 528, 529, 530, 573, 575, 577, 578]}, "('TestIndexOps', 'test_value_counts_inferred', 585)": {"mod": [593, 594]}, "('TestIndexOps', 'test_value_counts_bins', 612)": {"mod": [630, 631, 655, 656, 664, 665]}, "('TestIndexOps', 'test_value_counts_datetime64', 668)": {"mod": [694, 695, 697, 717, 718, 720, 721, 722, 736, 737, 739]}}}, {"path": "pandas/tests/test_categorical.py", "status": "modified", "Loc": {"('TestCategorical', None, 28)": {"add": [1304]}}}, {"path": "pandas/tseries/base.py", "status": "modified", "Loc": {"('DatetimeIndexOpsMixin', None, 108)": {"mod": [746, 747, 748, 750, 751, 752, 753, 754, 755, 756, 757]}}}, {"path": "pandas/util/testing.py", "status": "modified", "Loc": {"(None, 
'makeUnicodeIndex', 1536)": {"mod": [1537]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["pandas/indexes/base.py", "pandas/core/base.py", "pandas/core/series.py", "pandas/tseries/base.py", "pandas/indexes/category.py"], "doc": ["doc/source/whatsnew/v0.19.0.txt"], "test": ["pandas/util/testing.py", "pandas/tests/test_base.py", "pandas/tests/indexes/test_category.py", "pandas/tests/indexes/test_multi.py", "pandas/tests/test_categorical.py"], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "c4a996adfc91f023b46ce3cb67e33fc8b2ca3627", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/9400", "iss_label": "Visualization\nError Reporting", "title": "Improve error message in plotting.py's _plot", "body": "This is a minor enhancement proposal. At the moment I cannot submit a pull request. I will probably have time to create one during the next week. \n\nThis is a snippet from `tools/plotting.py`: https://github.com/pydata/pandas/blob/master/pandas/tools/plotting.py#L2269-2283\n\n``` python\ndef _plot(data, x=None, y=None, subplots=False,\n          ax=None, kind='line', **kwds):\n    kind = _get_standard_kind(kind.lower().strip())\n    if kind in _all_kinds:\n        klass = _plot_klass[kind]\n    else:\n        raise ValueError('Invalid chart type given %s' % kind)\n\n    from pandas import DataFrame\n    if kind in _dataframe_kinds:\n        if isinstance(data, DataFrame):\n            plot_obj = klass(data, x=x, y=y, subplots=subplots, ax=ax,\n                             kind=kind, **kwds)\n        else:\n            raise ValueError('Invalid chart type given %s' % kind)\n```\n\nWhich results in the following error message:\n\n```\nC:\\Anaconda3\\lib\\site-packages\\pandas\\tools\\plotting.py in plot_series(series, label, kind, use_index, rot, xticks, yticks, xlim, ylim, ax, style, grid, legend, logx, logy, secondary_y, **kwds)\n 2231 klass = _plot_klass[kind]\n 2232 else:\n-> 2233 raise ValueError('Invalid chart type given %s' % kind)\n 2234 \n 2235 \"\"\"\n\nValueError: Invalid chart type given hist\n```\n\nI would suggest using the format string `\"Invalid chart type given: '%s'\"` instead.\n", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/9417", "file_loc": {"base_commit": "c4a996adfc91f023b46ce3cb67e33fc8b2ca3627", "files": [{"path": "pandas/tools/plotting.py", "status": "modified", "Loc": {"(None, '_plot', 2269)": {"mod": [2269, 2277]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["pandas/tools/plotting.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "53243e8ec73ecf5035a63f426a9c703d6835e9a7", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/54889", "iss_label": "Build", "title": "BUILD: Race condition between .pxi.in and .pyx compiles in parallel build of 2.1.0", "body": "### Installation check\n\n- [X] I have read the [installation guide](https://pandas.pydata.org/pandas-docs/stable/getting_started/install.html#installing-pandas).\n\n\n### Platform\n\nLinux-6.4.7-gentoo-dist-x86_64-AMD_Ryzen_5_3600_6-Core_Processor-with-glibc2.38\n\n### Installation Method\n\nBuilt from source\n\n### pandas Version\n\n2.1.0\n\n### Python Version\n\n3.11.5\n\n### Installation
Logs\n\n<details>\r\n<summary>Build log excerpt</summary>\r\n\r\n```\r\ngpep517 build-wheel --backend mesonpy --output-fd 3 --wheel-dir /tmp/portage/dev-python/pandas-2.1.0/work/pandas-2.1.0-python3_10/wheel --config-json {\"builddir\": \"/tmp/portage/dev-python/pandas-2.1.0/work/pandas-2.1.0-python3_10\", \"setup-args\": [], \"compile-args\": [\"-v\", \"-j12\", \"-l0\"]}\r\n2023-08-31 07:02:26,275 gpep517 INFO Building wheel via backend mesonpy\r\n+ meson setup /tmp/portage/dev-python/pandas-2.1.0/work/pandas-2.1.0 /tmp/portage/dev-python/pandas-2.1.0/work/pandas-2.1.0-python3_10 -Dbuildtype=release -Db_ndebug=if-release -Db_vscrt=md --vsenv --native-file=/tmp/portage/dev-python/pandas-2.1.0/work/pandas-2.1.0-python3_10/meson-python-native-file.ini\r\nThe Meson build system\r\nVersion: 1.2.1\r\nSource dir: /tmp/portage/dev-python/pandas-2.1.0/work/pandas-2.1.0\r\nBuild dir: /tmp/portage/dev-python/pandas-2.1.0/work/pandas-2.1.0-python3_10\r\nBuild type: native build\r\nProject name: pandas\r\nProject version: 2.1.0\r\nC compiler for the host machine: x86_64-pc-linux-gnu-gcc (gcc 13.2.1 \"x86_64-pc-linux-gnu-gcc (Gentoo 13.2.1_p20230826 p7) 13.2.1 20230826\")\r\nC linker for the host machine: x86_64-pc-linux-gnu-gcc ld.bfd 2.41\r\nC++ compiler for the host machine: x86_64-pc-linux-gnu-g++ (gcc 13.2.1 \"x86_64-pc-linux-gnu-g++ (Gentoo 13.2.1_p20230826 p7) 13.2.1 20230826\")\r\nC++ linker for the host machine: x86_64-pc-linux-gnu-g++ ld.bfd 2.41\r\nCython compiler for the host machine: cython (cython 0.29.36)\r\nHost machine cpu family: x86_64\r\nHost machine cpu: x86_64\r\nProgram python found: YES (/usr/bin/python3.10)\r\nFound pkg-config: /usr/bin/pkg-config (1.8.1)\r\nRun-time dependency python found: YES 3.10\r\nBuild targets in project: 53\r\n\r\npandas 2.1.0\r\n\r\n User defined options\r\n Native files: /tmp/portage/dev-python/pandas-2.1.0/work/pandas-2.1.0-python3_10/meson-python-native-file.ini\r\n buildtype : release\r\n vsenv : True\r\n b_ndebug : if-release\r\n b_vscrt : md\r\n\r\nFound samurai-1.9 at /usr/bin/samu\r\n\r\nVisual Studio environment is needed to run Ninja. 
It is recommended to use Meson wrapper:\r\n/usr/lib/python-exec/python3.10/meson compile -C .\r\n\r\nGenerating targets: 0%| | 0/53 eta ?\r\nGenerating targets: 98%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258a| 52/53 eta 00:00\r\n \r\n\r\nWriting build.ninja: 0%| | 0/225 eta ?\r\n \r\n+ /usr/bin/samu -v -j12 -l0\r\n[\u2026]\r\nsamu: job failed: cython -M --fast-fail -3 --include-dir /tmp/portage/dev-python/pandas-2.1.0/work/pandas-2.1.0-python3_10/pandas/_libs '-X always_allow_keywords=true' /tmp/portage/dev-python/pandas-2.1.0/work/pandas-2.1.0/pandas/_libs/interval.pyx -o pandas/_libs/interval.cpython-310-x86_64-linux-gnu.so.p/pandas/_libs/interval.pyx.c\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.binomial\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.bytes\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.chisquare\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.choice\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.dirichlet\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.exponential\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.f\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.gamma\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.geometric\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.pareto\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.gumbel\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.poisson\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.negative_binomial\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.normal\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.laplace\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.logistic\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.lognormal\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.logseries\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.power\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.ranf\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.randint\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec 
information for: numpy.random.random\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.random_integers\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.random_sample\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.rayleigh\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.sample\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.standard_exponential\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.standard_gamma\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.standard_normal\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.uniform\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.weibull\u001b[0m\r\n\r\nError compiling Cython file:\r\n------------------------------------------------------------\r\n...\r\n bint kh_exist_strbox(kh_strbox_t*, khiter_t) nogil\r\n\r\n khuint_t kh_needed_n_buckets(khuint_t element_n) nogil\r\n\r\n\r\ninclude \"khash_for_primitive_helper.pxi\"\r\n^\r\n------------------------------------------------------------\r\n\r\n/tmp/portage/dev-python/pandas-2.1.0/work/pandas-2.1.0/pandas/_libs/khash.pxd:129:0: 'khash_for_primitive_helper.pxi' not found\r\n```\r\n</details>\r\n\r\nFull build log: [dev-python:pandas-2.1.0:20230831-050223.log](https://github.com/pandas-dev/pandas/files/12482393/dev-python.pandas-2.1.0.20230831-050223.log)\r\n\r\n```\r\n$ find /tmp/portage/dev-python/pandas-2.1.0/work/pandas-2.1.0-python3_10/ -name '*.pxi'\r\n/tmp/portage/dev-python/pandas-2.1.0/work/pandas-2.1.0-python3_10/pandas/_libs/intervaltree.pxi\r\n/tmp/portage/dev-python/pandas-2.1.0/work/pandas-2.1.0-python3_10/pandas/_libs/sparse_op_helper.pxi\r\n```\r\n\r\nIt looks like the meson files do not declare the dependency between the `khash_for_primitive_helper.pxi` and `khash.pxd` files, so the former isn't necessarily created before the latter is compiled.", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/54958", "file_loc": {"base_commit": "53243e8ec73ecf5035a63f426a9c703d6835e9a7", "files": [{"path": "pandas/_libs/meson.build", "status": "modified", "Loc": {"(None, None, None)": {"mod": [72]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["pandas/_libs/meson.build"]}}, {"organization": "meta-llama", "repo_name": "llama", "base_commit": "7565eb6fee2175b2d4fe2cfb45067a61b35d7f5e", "iss_has_pr": 1, "iss_html_url": "https://github.com/meta-llama/llama/issues/658", "iss_label": "documentation", "title": "Confusion about the default max_seq_len = 2048", "body": "When reading the class Transformer, I found that the code uses max_seq_len * 2 to prepare the rotary positional encoding, which confused me for a while.
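\r\n\r\nFor context, precompute_freqs_cis(dim, end) builds one row of rotary frequencies per position, so the number of usable positions is whatever gets passed as end. Roughly (a paraphrased, self-contained sketch with theta = 10000, not the exact source):\r\n\r\n```python\r\nimport torch\r\n\r\ndim, end = 128, 2048 * 2  # head dim and number of positions (max_seq_len * 2)\r\nfreqs = 1.0 / (10000 ** (torch.arange(0, dim, 2).float() / dim))\r\nt = torch.arange(end)  # one entry per position\r\nfreqs_cis = torch.polar(torch.ones(end, dim // 2), torch.outer(t, freqs).float())\r\nprint(freqs_cis.shape)  # torch.Size([4096, 64]): 4096 positions available\r\n```\r\n\r\n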
Then I realized that the default max_seq_len was set to 2048, and the 'max_seq_len * 2' aims to generate 4096 positional embeddings, corresponding to the 4K context length in the paper. I understand it achieves the purpose, but why not set max_seq_len directly to 4096? That would be clearer and less likely to cause misconceptions.\r\n\r\n```python\r\nself.freqs_cis = precompute_freqs_cis(\r\n    self.params.dim // self.params.n_heads, self.params.max_seq_len * 2\r\n)\r\n```", "pr_html_url": "https://github.com/meta-llama/llama/pull/754", "file_loc": {"base_commit": "7565eb6fee2175b2d4fe2cfb45067a61b35d7f5e", "files": [{"path": "llama/model.py", "status": "modified", "Loc": {"('Transformer', '__init__', 414)": {"add": [450]}}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "3", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["llama/model.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "pallets", "repo_name": "flask", "base_commit": "f88765d504ce2fa9bc3926c76910b11510522892", "iss_html_url": "https://github.com/pallets/flask/issues/1224", "iss_label": "", "title": "Starting up a public server.", "body": "I ran into this problem today with one of my applications trying to make it public to my local network. \n\nC:\\Users\\Savion\\Documents\\GitHub\\Example-Flask-Website>flask\\Scripts\\python run.\npy\n- Running on http://127.0.0.1:5000/\n- Restarting with reloader\n 10.101.37.124 - - [26/Oct/2014 15:51:23] \"GET / HTTP/1.1\" 404 -\n- Running on http://0.0.0.0:5000/\n 10.101.37.124 - - [26/Oct/2014 15:51:38] \"GET / HTTP/1.1\" 404 -\n\nThe problem I run into is that this app continuously attempts to default to localhost. It is not until two Ctrl + C presses that it goes to 0.0.0.0, and even then I still receive a 404 error in my browser. I do have routes that are valid when running locally. I have tried creating a new virtualenv and still receive the same error, and I have reset the firewall rule for this application; none of these efforts were rewarded.\n\nAny ideas as to why my app attempts to start up on localhost first, then moves over, but then returns a 404?\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "f88765d504ce2fa9bc3926c76910b11510522892", "files": [{"path": "flask/views.py", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1\n404 error", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["flask/views.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "pallets", "repo_name": "flask", "base_commit": "2d8a21c7321a9ead8e27208b49a18f4b8b27e2c1", "iss_html_url": "https://github.com/pallets/flask/issues/834", "iss_label": "", "title": "How to get the serialized version of the session cookie in 0.10?", "body": "In version 0.9 I could simply get the value of the `session` like this: \n\n```\nflask.session.serialize()\n```\n\nBut after upgrading to 0.10 this is not working anymore... what's the alternative?
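 Is the intended route now to go through the app's session interface and serialize the session myself? Something like this sketch (untested; get_signing_serializer lives in flask/sessions.py):\n\n```\nfrom flask import current_app, session\n\ns = current_app.session_interface.get_signing_serializer(current_app)\ncookie_value = s.dumps(dict(session))\n```\n\n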
How can I get the session value?\n\n(`flask.request.cookies.get('session')` is not good for me, because I would like to get the session right after login, so it's not part of the request yet)\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "2d8a21c7321a9ead8e27208b49a18f4b8b27e2c1", "files": [{"path": "flask/sessions.py", "Loc": {"('SecureCookieSessionInterface', 'get_signing_serializer', 308)": {"mod": []}, "('TaggedJSONSerializer', 'dumps', 60)": {"mod": []}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3\nhow to do \u2026", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["flask/sessions.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "pallets", "repo_name": "flask", "base_commit": "22d82e70b3647ed16c7d959a939daf533377382b", "iss_html_url": "https://github.com/pallets/flask/issues/4015", "iss_label": "", "title": "2.0.0: build requires ContextVar module", "body": "Simple I cannot find it.\r\n```console\r\n+ /usr/bin/python3 setup.py build '--executable=/usr/bin/python3 -s'\r\nTraceback (most recent call last):\r\n File \"setup.py\", line 4, in <module>\r\n setup(\r\n File \"/usr/lib/python3.8/site-packages/setuptools/__init__.py\", line 144, in setup\r\n return distutils.core.setup(**attrs)\r\n File \"/usr/lib64/python3.8/distutils/core.py\", line 121, in setup\r\n dist.parse_config_files()\r\n File \"/usr/lib/python3.8/site-packages/setuptools/dist.py\", line 689, in parse_config_files\r\n parse_configuration(self, self.command_options,\r\n File \"/usr/lib/python3.8/site-packages/setuptools/config.py\", line 121, in parse_configuration\r\n meta.parse()\r\n File \"/usr/lib/python3.8/site-packages/setuptools/config.py\", line 426, in parse\r\n section_parser_method(section_options)\r\n File \"/usr/lib/python3.8/site-packages/setuptools/config.py\", line 399, in parse_section\r\n self[name] = value\r\n File \"/usr/lib/python3.8/site-packages/setuptools/config.py\", line 184, in __setitem__\r\n value = parser(value)\r\n File \"/usr/lib/python3.8/site-packages/setuptools/config.py\", line 515, in _parse_version\r\n version = self._parse_attr(value, self.package_dir)\r\n File \"/usr/lib/python3.8/site-packages/setuptools/config.py\", line 349, in _parse_attr\r\n module = import_module(module_name)\r\n File \"/usr/lib64/python3.8/importlib/__init__.py\", line 127, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"<frozen importlib._bootstrap>\", line 1014, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 991, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 975, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 671, in _load_unlocked\r\n File \"<frozen importlib._bootstrap_external>\", line 783, in exec_module\r\n File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\r\n File \"/home/tkloczko/rpmbuild/BUILD/Flask-2.0.0/src/flask/__init__.py\", line 7, in <module>\r\n from .app import Flask\r\n File \"/home/tkloczko/rpmbuild/BUILD/Flask-2.0.0/src/flask/app.py\", line 19, in <module>\r\n from werkzeug.local import ContextVar\r\nImportError: cannot import name 'ContextVar' from 'werkzeug.local' (/usr/lib/python3.8/site-packages/werkzeug/local.py)\r\n```", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": 
"22d82e70b3647ed16c7d959a939daf533377382b", "files": [{"path": "setup.py", "Loc": {"(None, None, None)": {"mod": [7]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["setup.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "pallets", "repo_name": "flask", "base_commit": "43e2d7518d2e89dc7ed0b4ac49b2d20211ad1bfa", "iss_html_url": "https://github.com/pallets/flask/issues/2977", "iss_label": "", "title": "Serial port access problem in DEBUG mode.", "body": "### Expected Behavior\r\n\r\nSending commands through the serial port.\r\n\r\n```python\r\napp = Flask(__name__)\r\nserialPort = serial.Serial(port = \"COM5\", baudrate=1000000,\r\n bytesize=8, timeout=2, stopbits=serial.STOPBITS_ONE)\r\n\r\nlamp = {\r\n 1 : {'name' : 'n1', 'state' : True},\r\n 2 : {'name' : 'n2', 'state' : True} \r\n}\r\n\r\n@app.route(\"/\")\r\ndef hello():\r\n templateData = {\r\n 'lamp': lamp\r\n }\r\n\r\n \r\n return render_template('main.html', **templateData)\r\n\r\n\r\n@app.route(\"/setPin/<action>\")\r\ndef action(action):\r\n\r\n if action == \"on\":\r\n\r\n serialPort.write(b\"n2c1111\\r\\n\")\r\n lamp[1][\"state\"] = True\r\n\r\n if action == \"off\":\r\n serialPort.write(b\"n2c0000\\r\\n\")\r\n lamp[1][\"state\"] = False\r\n\r\n\r\n templateData = {\r\n 'lamp': lamp\r\n }\r\n\r\n return render_template('main.html', **templateData)\r\n\r\nif __name__ == \"__main__\":\r\n app.run(host='0.0.0.0', port=5000, debug=True)\r\n```\r\n\r\n\r\n### Actual Behavior\r\n\r\nI can not access the serial port with FLASK_ENV = development and FLASK_DEBUG = 1. Everything works fine with DEBUG mode disabled.\r\n\r\n```pytb\r\nFLASK_APP = app.py\r\nFLASK_ENV = development\r\nFLASK_DEBUG = 1\r\nIn folder C:/Users/user/PycharmProjects/Ho_server\r\nC:\\Users\\user\\Anaconda3\\python.exe -m flask run\r\n * Serving Flask app \"app.py\" (lazy loading)\r\n * Environment: development\r\n * Debug mode: on\r\n * Restarting with stat\r\n * Debugger is active!\r\n * Debugger PIN: 138-068-963\r\n * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)\r\n127.0.0.1 - - [30/Oct/2018 10:49:27] \"GET /setPin/on HTTP/1.1\" 500 -\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\user\\Anaconda3\\lib\\site-packages\\flask\\_compat.py\", line 35, in reraise\r\n raise value\r\n File \"C:\\Users\\user\\PycharmProjects\\H_server\\app.py\", line 8, in <module>\r\n bytesize=8, timeout=2, stopbits=serial.STOPBITS_ONE)\r\n File \"C:\\Users\\user\\Anaconda3\\lib\\site-packages\\serial\\serialwin32.py\", line 31, in __init__\r\n super(Serial, self).__init__(*args, **kwargs)\r\n File \"C:\\Users\\user\\Anaconda3\\lib\\site-packages\\serial\\serialutil.py\", line 240, in __init__\r\n self.open()\r\n File \"C:\\Users\\user\\Anaconda3\\lib\\site-packages\\serial\\serialwin32.py\", line 62, in open\r\n raise SerialException(\"could not open port {!r}: {!r}\".format(self.portstr, ctypes.WinError()))\r\nserial.serialutil.SerialException: could not open port 'COM5': PermissionError(13, 'Access is denied.', None, 5)\r\n```\r\n\r\n### Environment\r\n\r\n* Python version: 3.6.5\r\n* Flask version: 1.0.2\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [{"Loc": [7], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", 
"info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "pallets", "repo_name": "flask", "base_commit": "1a7fd980f8579bd7d7d53c812a77c1dc64be52ba", "iss_html_url": "https://github.com/pallets/flask/issues/1749", "iss_label": "", "title": "JSONEncoder and aware datetimes", "body": "I was surprised to see that though flask.json.JSONEncoder accepts datetime objects, it ignores the timezone. I checked werkzeug.http.http_date and it can handle timezone aware dates just fine if they are passed in, but the JSONEncoder insists on transforming the datetime to a timetuple, like this\n\n `return http_date(o.timetuple())`\n\nThis means i have to convert all my dates to utc before encoding them, otherwise I should overwrite the dafault() method in the encoder. Can you help me understand why the encoder was made to function with naive dates only?\nThx\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "1a7fd980f8579bd7d7d53c812a77c1dc64be52ba", "files": [{"path": "flask/json.py", "Loc": {"('JSONEncoder', 'default', 60)": {"mod": [78]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["flask/json.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "sherlock-project", "repo_name": "sherlock", "base_commit": "144d43830f663808c5fbca75b797350060acf7dd", "iss_html_url": "https://github.com/sherlock-project/sherlock/issues/559", "iss_label": "", "title": "Results files saved to specific folder", "body": "Having just installed Sherlock I was surprised to see the results files are just jumbled in with everything else instead of being in their own Results folder.\r\n\r\nHaving a separate folder would keep things cleaner especially as you use it more and the number of files increases.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "144d43830f663808c5fbca75b797350060acf7dd", "files": [{"path": "README.md", "Loc": {"(None, None, 65)": {"mod": [65]}}, "status": "modified"}, {"path": "sherlock/sherlock.py", "Loc": {"(None, 'main', 462)": {"mod": [478]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code\n+\nDoc"}, "loctype": {"code": ["sherlock/sherlock.py"], "doc": ["README.md"], "test": [], "config": [], "asset": []}}, {"organization": "sherlock-project", "repo_name": "sherlock", "base_commit": "7ec56895a37ada47edd6573249c553379254d14a", "iss_html_url": "https://github.com/sherlock-project/sherlock/issues/1911", "iss_label": "question", "title": "How do you search for usernames? New to this. ", "body": "<!--\r\n\r\n######################################################################\r\n WARNING!\r\n IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE.\r\n######################################################################\r\n\r\n-->\r\n\r\n## Checklist\r\n<!--\r\nPut x into all boxes (like this [x]) once you have completed what they say.\r\nMake sure complete everything in the checklist.\r\n-->\r\n- [ ] I'm asking a question regarding Sherlock\r\n- [ ] My question is not a tech support question.\r\n\r\n**We are not your tech support**. 
\r\nIf you have questions related to `pip`, `git`, or something that is not related to Sherlock, please ask them on [Stack Overflow](https://stackoverflow.com/) or [r/learnpython](https://www.reddit.com/r/learnpython/)\r\n\r\n\r\n## Question\r\n\r\nASK YOUR QUESTION HERE\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "7ec56895a37ada47edd6573249c553379254d14a", "files": [{"path": "README.md", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []}}, {"organization": "sherlock-project", "repo_name": "sherlock", "base_commit": "65ce128b7fd8c8915c40495191d9c136f1d2322b", "iss_html_url": "https://github.com/sherlock-project/sherlock/issues/1297", "iss_label": "bug", "title": "name 'requests' is not defined", "body": "<!--\r\n\r\n######################################################################\r\n WARNING!\r\n IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE\r\n######################################################################\r\n\r\n-->\r\n\r\n\r\n## Checklist\r\n<!--\r\nPut x into all boxes (like this [x]) once you have completed what they say.\r\nMake sure complete everything in the checklist.\r\n-->\r\n\r\n- [x] I'm reporting a bug in Sherlock's functionality\r\n- [x] The bug I'm reporting is not a false positive or a false negative\r\n- [x] I've verified that I'm running the latest version of Sherlock\r\n- [x] I've checked for similar bug reports including closed ones\r\n- [x] I've checked for pull requests that attempt to fix this bug\r\n\r\n## Description\r\n<!--\r\nUnable to search for usernames.\r\nERROR: Problem while attempting to access data file URL 'https://raw.githubusercontent.com/sherlock-project/sherlock/master/sherlock/resources/data.json': name 'requests' is not defined\r\n\r\nlatest, pulled today\r\n-->\r\n\r\nERROR: Problem while attempting to access data file URL 'https://raw.githubusercontent.com/sherlock-project/sherlock/master/sherlock/resources/data.json': name 'requests' is not defined\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "65ce128b7fd8c8915c40495191d9c136f1d2322b", "files": [{"path": "sherlock/sites.py", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["sherlock/sites.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "sherlock-project", "repo_name": "sherlock", "base_commit": "f63e17066dc4881ee5a164aed60b6e8f1e9ab129", "iss_html_url": "https://github.com/sherlock-project/sherlock/issues/462", "iss_label": "environment", "title": "File \"sherlock.py\", line 24, in <module> from requests_futures.sessions import FuturesSession ModuleNotFoundError: No module named 'requests_futures'", "body": "File \"sherlock.py\", line 24, in <module>\r\n from requests_futures.sessions import FuturesSession\r\nModuleNotFoundError: No module named 'requests_futures'", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "f63e17066dc4881ee5a164aed60b6e8f1e9ab129", "files": [{"path": "requirements.txt", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", 
"loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}}, {"organization": "sherlock-project", "repo_name": "sherlock", "base_commit": "6c6faff416896a41701aa3e24e5b5a584bd5cb44", "iss_html_url": "https://github.com/sherlock-project/sherlock/issues/236", "iss_label": "question", "title": "No module named 'torrequest'", "body": "Hi,\r\nsimilar problem to module \"requests_futures\"\r\n\r\nTraceback (most recent call last):\r\n File \"sherlock.py\", line 25, in <module>\r\n from torrequest import TorRequest\r\nModuleNotFoundError: No module named 'torrequest'\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "6c6faff416896a41701aa3e24e5b5a584bd5cb44", "files": [{"path": "requirements.txt", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc\n\u4f9d\u8d56\u58f0\u660e"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}}, {"organization": "keras-team", "repo_name": "keras", "base_commit": "980a6be629610ee58c1eae5a65a4724ce650597b", "iss_html_url": "https://github.com/keras-team/keras/issues/16234", "iss_label": "type:support", "title": "Compiling model in callback causes TypeError", "body": "**System information**.\r\n- Have I written custom code (as opposed to using a stock example script provided in Keras): yes\r\n- TensorFlow version (use command below): 2.8.0 (2.4 too)\r\n- Python version: 3.7\r\n\r\n**Describe the problem**.\r\n\r\nIn a fine-tuning case I would like to do transfer-learning phase first (with fine-tuned layers frozen) and after that, all layers should be unfrozen. I wrote a callback that unfreeze the layers after few epochs. Unfortunately, after changing the layers' `trainable` attribute, the model should be recompiled - and the recompilation causes the `TypeError` (see colab). 
\r\n\r\nI am aware that I can workaround this by compiling and fitting model twice - for both phases separately - but the usage of callback seems more elegant to me.\r\n\r\n**Standalone code to reproduce the issue**.\r\n\r\nhttps://colab.research.google.com/drive/1u6VlH6EIQGXSp7vEIngTasp3v2EE42Wi?usp=sharing\r\n\r\n**Source code / logs**.\r\n\r\n```\r\nTypeError: 'NoneType' object is not callable\r\n```\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "980a6be629610ee58c1eae5a65a4724ce650597b", "files": [{"path": "keras/engine/training.py", "Loc": {"('Model', 'make_train_function', 998)": {"mod": []}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": ["keras/engine/training.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "keras-team", "repo_name": "keras", "base_commit": "90f441a6a0ed4334cac53760289061818a68b7c1", "iss_html_url": "https://github.com/keras-team/keras/issues/2893", "iss_label": "", "title": "Is the cifar10_cnn.py example actually performing data augmentation?", "body": "When `datagen.fit(X_train)` is called in the [`cifar10_cnn.py` example](https://github.com/fchollet/keras/blob/master/examples/cifar10_cnn.py#L103), shouldn't it be (when `data_augmentation=True`):\n\n``` python\ndatagen.fit(X_train, augment=True)\n```\n\nas the [default value for `augment` is `False`](https://github.com/fchollet/keras/blob/master/keras/preprocessing/image.py#L410)?\n\nAlso, I am right in thinking when using `augment=True` the original (i.e. non-augmented - ignoring any normalisation/standardisation) data is not necessarily trained on? If so, I thought data augmentation is a method of artificially increasing the size of your dataset, so shouldn't we additionally be training on the non-augmented data? Thanks\n- [x] Check that you are up-to-date with the master branch of Keras. You can update with:\n pip install git+git://github.com/fchollet/keras.git --upgrade --no-deps\n- [x] If running on Theano, check that you are up-to-date with the master branch of Theano. 
You can update with:\n pip install git+git://github.com/Theano/Theano.git --upgrade --no-deps\n- [x] Provide a link to a GitHub Gist of a Python script that can reproduce your issue (or just copy the script here if it is short).\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "90f441a6a0ed4334cac53760289061818a68b7c1", "files": [{"path": "keras/preprocessing/image.py", "Loc": {"('ImageDataGenerator', 'fit', 404)": {"mod": [419, 420, 421, 422, 423]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["keras/preprocessing/image.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "keras-team", "repo_name": "keras", "base_commit": "654404c2ed8db47a5361a3bff9126a16507c9c4c", "iss_html_url": "https://github.com/keras-team/keras/issues/1787", "iss_label": "", "title": "What happened to WordContextProduct?", "body": "``` python\nIn [1]: import keras\n\nIn [2]: keras.__version__\nOut[2]: '0.3.2'\n\nIn [3]: from keras.layers.embeddings import WordContextProduct\nUsing Theano backend.\n/usr/local/lib/python3.5/site-packages/theano/tensor/signal/downsample.py:5: UserWarning: downsample module has been moved to the pool module.\n warnings.warn(\"downsample module has been moved to the pool module.\")\n---------------------------------------------------------------------------\nImportError Traceback (most recent call last)\n<ipython-input-3-65e83b407b3e> in <module>()\n----> 1 from keras.layers.embeddings import WordContextProduct\n\nImportError: cannot import name 'WordContextProduct'\n```\n\nThis page now returns a 404: https://github.com/fchollet/keras/blob/master/examples/skipgram_word_embeddings.py\n\nWas this code taken out of keras, or just moved somewhere else?\n\nThanks,\n\nZach\n\n---\n\nPlease make sure that the boxes below are checked before you submit your issue. Thank you!\n- [x] Check that you are up-to-date with the master branch of Keras. You can update with:\n pip install git+git://github.com/fchollet/keras.git --upgrade --no-deps\n- [x] If running on Theano, check that you are up-to-date with the master branch of Theano. You can update with:\n pip install git+git://github.com/Theano/Theano.git --upgrade --no-deps\n- [x] Provide a link to a GitHub Gist of a Python script that can reproduce your issue (or just copy the script here if it is short).\nI'm trying something like this:\n\n``` python\nmodels = []\n\n# Word vectors\nmodel_word = Sequential()\nmodel_word.add(Embedding(1e4, 300, input_length=1))\nmodel_word.add(Reshape(dims=(300,)))\nmodels.append(model_word)\n\n# Context vectors\nmodel_context = Sequential()\nmodel_context.add(Embedding(1e4, 300, input_length=1))\nmodel_context.add(Reshape(dims=(300,)))\nmodels.append(model_context)\n\n# Combined model\nmodel = Sequential()\nmodel.add(Merge(models, mode='dot'))\nmodel.add(Dense(1))\nmodel.add(Activation('sigmoid'))\nmodel.compile(loss='mean_squared_error', optimizer=Adam(lr=0.001))\n```\n\nDoes that look reasonable? 
And then as input, I need to provide 2 lists of indexes?\n\"", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [{"Loc": [54], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "keras-team", "repo_name": "keras", "base_commit": "8778add0d66aed64a8970c34576bf5800bc19170", "iss_html_url": "https://github.com/keras-team/keras/issues/3335", "iss_label": "", "title": "Masking the output of a conv layer", "body": "Hi,\nI am trying to apply a given mask in the output of a conv layer. The simplest form of my problem can be seen in the img\n\n![image](https://cloud.githubusercontent.com/assets/810340/17194147/e8728ad4-542c-11e6-8c60-b2949c288cec.png)\n\nThe mask should be considered as an input when training/predicting. I have already tried to use the Merge layer (mode='mul') to apply the input mask as follows:\n\n``` python\nmain_input= Input(shape=(3, 64, 64))\nmask1_input = Input(shape=(1, 64, 64))\nmask2_input = Input(shape=(1, 64, 64))\n\nconv1 = Convolution2D(1,7,7, border_mode='same')(main_input)\nmerged_model1 = Sequential()\nmerged_model1.add(Merge([conv1, mask1_input], mode='mul'))\n\nconv2 = Convolution2D(1, 7,7, border_mode='same')(main_input)\nmerged_model2 = Sequential()\nmerged_model2.add(Merge([conv2, mask2_input], mode='mul'))\n\nmodel = Sequential()\nmodel.add(Merge([merged_model1,merged_model2],mode='sum'))\n```\n\nBut it is not working, maybe because I'm trying to merge a layer with a Tensor. But even if I could do that, I don't feel this is the right way to do that. Can someone help?\n\nPlease make sure that the boxes below are checked before you submit your issue. Thank you!\n- [X] Check that you are up-to-date with the master branch of Keras. You can update with:\n pip install git+git://github.com/fchollet/keras.git --upgrade --no-deps\n- [X] If running on Theano, check that you are up-to-date with the master branch of Theano. 
You can update with:\n pip install git+git://github.com/Theano/Theano.git --upgrade --no-deps\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "8778add0d66aed64a8970c34576bf5800bc19170", "files": [{"path": "keras/src/models/model.py", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["keras/src/models/model.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "keras-team", "repo_name": "keras", "base_commit": "ed07472bc5fc985982db355135d37059a1f887a9", "iss_html_url": "https://github.com/keras-team/keras/issues/13101", "iss_label": "type:support", "title": "model.fit : AttributeError: 'Model' object has no attribute '_compile_metrics'", "body": "**System information** \r\n- Have I written custom code (as opposed to using example directory): Yes\r\n- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Mint 19.3\r\n- TensorFlow backend (yes / no): yes\r\n- TensorFlow version: 2.0.0b1\r\n- Keras version: 2.2.4-tf\r\n- Python version: 3.6\r\n- CUDA/cuDNN version: /\r\n- GPU model and memory: GTX 940MX, 430.26\r\n\r\n**Describe the current behavior** \r\nThe model.fit() function throws a `AttributeError: 'Model' object has no attribute '_compile_metrics'` exception.\r\n\r\n**Describe the expected behavior** \r\nIt should work ?\r\n\r\n**Code to reproduce the issue** \r\n```python\r\nimport numpy as np\r\nimport tensorflow as tf\r\n\r\ninput_3D = tf.keras.Input(shape=(None, None, None, 1)) # unknown width, length and depth, 1 gray channel\r\nnetwork_3D = tf.keras.layers.Conv3D(\r\n filters = 128, # dimensionality of output space\r\n kernel_size = 5, # shape of 2D convolution window (5x5)\r\n strides = 1, # stride of convolution along all spatial dimensions\r\n padding = \"same\", data_format = \"channels_last\", # input with shape (batch, height, width, channels)\r\n activation = tf.keras.layers.LeakyReLU(alpha = 0.2), # activation function to use\r\n use_bias = True,\r\n kernel_initializer = tf.keras.initializers.TruncatedNormal(stddev = 1e-2),\r\n # initializer for the kernel weights matrix\r\n bias_initializer = 'zeros', # initializer for the bias vector\r\n input_shape = (None, None, None, 1)\r\n)(input_3D)\r\nnetwork_3D = tf.keras.layers.BatchNormalization(\r\n momentum = 0.1, # momentum + decay = 1.0\r\n epsilon = 1e-5,\r\n scale = True\r\n)(network_3D)\r\n\r\nmodel = tf.keras.Model(inputs = input_3D, outputs = network_3D)\r\nmodel.loss = tf.losses.mean_squared_error\r\nmodel.optimizer = tf.keras.optimizers.Adam(learning_rate = 0.002)\r\nv = np.zeros((100,100,100,100))\r\nl = np.zeros((100,100,100))\r\nmodel.fit(v, l, epochs = 20, batch_size = 1)\r\n``` \r\n\r\n**Other info / logs** \r\n```python\r\nTraceback (most recent call last):\r\n File \".../venv/lib/python3.6/site-packages/IPython/core/interactiveshell.py\", line 3296, in run_code\r\n exec(code_obj, self.user_global_ns, self.user_ns)\r\n File \"<ipython-input-14-a0cacfaacdab>\", line 1, in <module>\r\n history = model.fit(v, l, epochs = 20, batch_size = 1)\r\n File \".../venv/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py\", line 643, in fit\r\n use_multiprocessing=use_multiprocessing)\r\n File \".../venv/lib/python3.6/site-packages/tensorflow/python/keras/engine/training_arrays.py\", line 632, in fit\r\n shuffle=shuffle)\r\n File 
\".../venv/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py\", line 2385, in _standardize_user_data\r\n metrics=self._compile_metrics,\r\nAttributeError: 'Model' object has no attribute '_compile_metrics'\r\n```\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "ed07472bc5fc985982db355135d37059a1f887a9", "files": [{"path": "keras/engine/training.py", "Loc": {"('Model', 'compile', 40)": {"mod": []}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": ["keras/engine/training.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "keras-team", "repo_name": "keras", "base_commit": "a3d160b9467c99cbb27f9aa0382c759f45c8ee66", "iss_html_url": "https://github.com/keras-team/keras/issues/9741", "iss_label": "", "title": "Improve Keras Documentation User Experience for Long Code Snippets By Removing The Need For Horizontal Slide Bars", "body": "**Category**: documentation user-experience\r\n**Comment**: modify highlight.js <code></code> to wrap long documentation code snippets\r\n**Why**: eliminates the need for a user to manually click and slide a horizontal slider just to get a quick sense of what available parameters and their default values are\r\n\r\n**Context**\r\nWhile reading the documentation, and coming from a scikit-learn background, I really like how their documentation shows all the class and method parameters ([example page](http://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPClassifier.html)). It's very helpful to quickly be able to see the default parameters.\r\n\r\nTake [Dense](https://keras.io/layers/core/#dense) for example. If the documentation looked like this (imagine this a code block, not individually highlighted lines):\r\n\r\n`keras.layers.Dense(units, activation=None, use_bias=True, kernel_initializer='glorot_uniform', bias_initializer='zeros', kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None)`\r\n\r\nBenefits:\r\n- easy to read\r\n- no scrolling a horizontal slider\r\n- immediately tells me the available parameters and their default values\r\n\r\nCompare that experience to the current Keras experience:\r\n\r\n```\r\nkeras.layers.Dense(units, activation=None, use_bias=True, kernel_initializer='glorot_uniform', bias_initializer='zeros', kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None)\r\n```\r\n\r\nDisadvantages:\r\n- requires scrolling horizontally to see the rest\r\n- easy to lose track of where you are while scrolling\r\n- requires physical action to see everything\r\n\r\nThe Keras team no-doubt is busy with much bigger concerns than documentation formatting. One could say that the \"Arguments\" are all listed below or by clicking the \"Source\". True, however the key point I'm trying to make is usability, and quick readability. Reading through an \"Argument\"'s verbose description, or having to scroll horizontally is not quick nor an optimal experience.\r\n\r\nI'm not going to make a case for why making documentation easy-to-read is important. 
I think the Keras documentation **content** itself is outstanding.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "a3d160b9467c99cbb27f9aa0382c759f45c8ee66", "files": [{"path": "docs/autogen.py", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["docs/autogen.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "keras-team", "repo_name": "keras", "base_commit": "7a12fd0f8597760cf8e1238a9b021e247693517b", "iss_html_url": "https://github.com/keras-team/keras/issues/2372", "iss_label": "", "title": "problem of save/load model", "body": "HI, \n\nThanks for making such a wonderful tool!\n\nI'm using Keras 1.0. I want to save and load the model both the arch and the parameters. So I use the method in FAQ. Here is the code.\n\n```\ndef save_model(self, model, options):\n json_string = model.to_json()\n open(options['file_arch'], 'w').write(json_string)\n model.save_weights(options['file_weight'])\n\ndef load_model(self, options):\n self.model = model_from_json(open(options['file_arch']).read())\n self.model.load_weights(options['file_weight'])\n return self.model\n```\n\nWhen I load model and use model.predict(), there is a error:\nAttributeError: 'NoneType' object has no attribute 'predict'\n\nDon't know why. If I don't load the model from file, just train a model and use it, everything seems ok.\n\nI checked the issues, most people just need to load the parameters. Is it possible when I load the architecture, I overwrite the old model and loose the model.predict()?\n\nThanks again for making Keras!\n\nBen\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "7a12fd0f8597760cf8e1238a9b021e247693517b", "files": [{"path": "keras/src/trainers/trainer.py", "Loc": {"('Trainer', 'compile', 40)": {"mod": []}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["keras/src/trainers/trainer.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "keras-team", "repo_name": "keras", "base_commit": "284ef7b495a61238dccc6149996c4cb88fef1c5a", "iss_html_url": "https://github.com/keras-team/keras/issues/933", "iss_label": "", "title": "Same model but graph gives bad performance", "body": "Hello, \n\nI am learning to use Graph as it seems more powerful so I implemented one of my previous model which uses Sequential. 
Here is the model using sequential (number of dimension set in random):\n\n```\ndef build_generation_embedding_model(self, dim):\n print \"Build model ...\"\n input_model = Sequential()\n input_model.add(TimeDistributedDense(dim, input_shape=(10,10)))\n input_model.add(LSTM(dim, return_sequences=False))\n input_model.add(Dense(dim))\n canonical_model = Sequential()\n canonical_model.add(TimeDistributedDense(dim, input_shape=(15,15)))\n canonical_model.add(LSTM(dim, return_sequences=False))\n canonical_model.add(Dense(dim))\n self.model = Sequential()\n self.model.add(Merge([input_model, canonical_model], mode='concat'))\n self.model.add(Dense(15))\n self.model.add(Activation('softmax'))\n self.model.compile(loss='categorical_crossentropy', optimizer='rmsprop')\n```\n\nThe model works fine and below is my reimplementation using Graph:\n\n```\ndef build_generation_embedding_model_graph(self, dim):\n self.model = Graph()\n self.model.add_input(name='input1', input_shape=(10,10))\n self.model.add_input(name='canonical', input_shape=(15,15))\n self.model.add_node(TimeDistributedDense(dim), name='Embed_input1', input='input1')\n self.model.add_node(TimeDistributedDense(dim), name='Embed_canonical', input='canonical')\n self.model.add_node(LSTM(dim, return_sequences=False), name='Hidden_input1', input='Embed_input1')\n self.model.add_node(LSTM(dim, return_sequences=False), name='Hidden_canonical', input='Embed_canonical')\n self.model.add_node(Dense(15), name='merge', inputs=['Hidden_input1','Hidden_canonical'], merge_mode='concat')\n self.model.add_node(Activation('softmax'), name='activation', input='merge')\n self.model.add_output(name='output', input='merge')\n self.model.compile('rmsprop', {'output':'categorical_crossentropy'})\n```\n\nMy impression is that they are exactly the same model (grateful if somebody spotted something wrong there). But the model based on Graph gives a loss of 3.6 while the loss for the other one is around 0.002. \n\nIs there a reason for this please ?\n\nThank you for your help\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [{"Loc": [36], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "keras-team", "repo_name": "keras", "base_commit": "c2b844ba2fe8d0d597da9ef6a9af3b20d18d0bec", "iss_html_url": "https://github.com/keras-team/keras/issues/7603", "iss_label": "", "title": "Loss Increases after some epochs ", "body": "I have tried different convolutional neural network codes and I am running into a similar issue. The network starts out training well and decreases the loss but after sometime the loss just starts to increase. 
I have shown an example below: \r\nEpoch 15/800\r\n1562/1562 [==============================] - 49s - loss: 0.9050 - acc: 0.6827 - val_loss: 0.7667 - val_acc: 0.7323\r\nEpoch 16/800\r\n1562/1562 [==============================] - 49s - loss: 0.8906 - acc: 0.6864 - val_loss: 0.7404 - val_acc: 0.7434\r\nEpoch 380/800\r\n1562/1562 [==============================] - 49s - loss: 1.5519 - acc: 0.4880 - val_loss: 1.4250 - val_acc: 0.5233\r\nEpoch 381/800\r\n1562/1562 [==============================] - 48s - loss: 1.5416 - acc: 0.4897 - val_loss: 1.5032 - val_acc: 0.4868\r\nEpoch 800/800\r\n1562/1562 [==============================] - 49s - loss: 1.8483 - acc: 0.3402 - val_loss: 1.9454 - val_acc: 0.2398\r\n\r\nI have tried this on different cifar10 architectures I have found on githubs. I am training this on a GPU Titan-X Pascal. This only happens when I train the network in batches and with data augmentation. I have changed the optimizer, the initial learning rate etc. I have also attached a link to the code. I just want a cifar10 model with good enough accuracy for my tests, so any help will be appreciated. The code is from this:\r\nhttps://github.com/fchollet/keras/blob/master/examples/cifar10_cnn.py\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "c2b844ba2fe8d0d597da9ef6a9af3b20d18d0bec", "files": [{"path": "examples/cifar10_cnn.py", "Loc": {"(None, None, None)": {"mod": [65]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["examples/cifar10_cnn.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "keras-team", "repo_name": "keras", "base_commit": "530eff62e5463e00d73e72c51cc830b9ac3a14ab", "iss_html_url": "https://github.com/keras-team/keras/issues/3997", "iss_label": "", "title": "Using keras for Distributed training raise RuntimeError(\"Graph is finalized and cannot be modified.\")", "body": "I'm using keras for distributed training with following code:\n\n``` python\n#!/usr/bin/env python\n# -*- coding:utf-8 -*-\n# Created by Enigma on 2016/9/26\n\nimport numpy as np\nimport tensorflow as tf\n\n# Define Hyperparameters\nFLAGS = tf.app.flags.FLAGS\n\n# For missions\ntf.app.flags.DEFINE_string(\"ps_hosts\", \"\",\n \"Comma-separated list of hostname:port pairs\")\ntf.app.flags.DEFINE_string(\"worker_hosts\", \"\",\n \"Comma-separated list of hostname:port pairs\")\ntf.app.flags.DEFINE_string(\"job_name\", \"\", \"One of 'ps', 'worker'\")\ntf.app.flags.DEFINE_integer(\"task_index\", 0, \"Index of task within the job\")\n\n# Hyperparameters\n\nfrom keras import backend as K\nfrom keras.layers import Input, Dense\nfrom keras.models import Model\n\n\ndef main(_):\n ps_hosts = FLAGS.ps_hosts.split(\",\")\n worker_hosts = FLAGS.worker_hosts.split(\",\")\n cluster = tf.train.ClusterSpec({\"ps\": ps_hosts, \"worker\": worker_hosts})\n\n server_config = tf.ConfigProto(\n gpu_options=tf.GPUOptions(allow_growth=True),\n log_device_placement=True)\n server = tf.train.Server(cluster, config=server_config,\n job_name=FLAGS.job_name, task_index=FLAGS.task_index)\n\n if FLAGS.job_name == \"ps\":\n server.join()\n elif FLAGS.job_name == \"worker\":\n with tf.device(tf.train.replica_device_setter(\n worker_device=\"/job:worker/task:%d/cpu:0\" % FLAGS.task_index,\n cluster=cluster)):\n global_step = tf.Variable(0, name='global_step', trainable=False)\n inputs = 
Input(shape=[1, ])\n hidden = Dense(10, activation='relu')(inputs)\n output = Dense(1, activation='sigmoid')(hidden)\n model = Model(input=inputs, output=output)\n\n saver = tf.train.Saver()\n summary_op = tf.merge_all_summaries()\n\n sv = tf.train.Supervisor(is_chief=(FLAGS.task_index == 0),\n logdir=\"./checkpoint/\",\n # init_op=init_op,\n summary_op=summary_op,\n saver=saver,\n global_step=global_step,\n save_model_secs=60)\n with sv.managed_session(server.target) as sess:\n step = 0\n K.set_session(sess)\n model.compile(optimizer='sgd', loss='mse')\n while step < 1000000:\n train_x = np.random.randn(1)\n train_y = 2 * train_x + np.random.randn(1) * 0.33 + 10\n model.fit(train_x, train_y)\n sv.stop()\n\nif __name__ == \"__main__\":\n tf.app.run()\n```\n\nthen I run it with:\n\n```\n/opt/anaconda3/bin/python /cache/allenwoods/keras_dis_test.py --ps_hosts=0.0.0.0:48636 --worker_hosts=0.0.0.0:46261 --job_name=ps --task_index=0\n/opt/anaconda3/bin/python /cache/allenwoods/keras_dis_test.py --ps_hosts=0.0.0.0:48636 --worker_hosts=0.0.0.0:46261 --job_name=worker --task_index=0\n```\n\nit doesn't work and return\n\n```\nTraceback (most recent call last):\n File \"/cache/allenwoods/keras_dis_test.py\", line 73, in <module>\n tf.app.run()\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/platform/app.py\", line 30, in run\n sys.exit(main(sys.argv[:1] + flags_passthrough))\n File \"/cache/allenwoods/keras_dis_test.py\", line 69, in main\n model.fit(train_x, train_y)\n File \"/opt/anaconda3/lib/python3.5/contextlib.py\", line 77, in __exit__\n self.gen.throw(type, value, traceback)\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/training/supervisor.py\", line 969, in managed_session\n self.stop(close_summary_writer=close_summary_writer)\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/training/supervisor.py\", line 797, in stop\n stop_grace_period_secs=self._stop_grace_secs)\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/training/coordinator.py\", line 386, in join\n six.reraise(*self._exc_info_to_raise)\n File \"/opt/anaconda3/lib/python3.5/site-packages/six.py\", line 686, in reraise\n raise value\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/training/supervisor.py\", line 959, in managed_session\n yield sess\n File \"/cache/allenwoods/VRLforTraffic/src/missions/keras_dis_test.py\", line 65, in main\n model.compile(optimizer='sgd', loss='mse')\n File \"/opt/anaconda3/lib/python3.5/site-packages/keras/engine/training.py\", line 484, in compile\n self.optimizer = optimizers.get(optimizer)\n File \"/opt/anaconda3/lib/python3.5/site-packages/keras/optimizers.py\", line 580, in get\n instantiate=True, kwargs=kwargs)\n File \"/opt/anaconda3/lib/python3.5/site-packages/keras/utils/generic_utils.py\", line 18, in get_from_module\n return res()\n File \"/opt/anaconda3/lib/python3.5/site-packages/keras/optimizers.py\", line 134, in __init__\n self.iterations = K.variable(0.)\n File \"/opt/anaconda3/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py\", line 149, in variable\n v = tf.Variable(value, dtype=_convert_string_dtype(dtype), name=name)\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/ops/variables.py\", line 215, in __init__\n dtype=dtype)\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/ops/variables.py\", line 327, in _init_from_args\n self._snapshot = array_ops.identity(self._variable, name=\"read\")\n File 
\"/opt/anaconda3/lib/python3.5/contextlib.py\", line 77, in __exit__\n self.gen.throw(type, value, traceback)\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py\", line 4150, in name_scope\n yield scope\n File \"/opt/anaconda3/lib/python3.5/contextlib.py\", line 77, in __exit__\n self.gen.throw(type, value, traceback)\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py\", line 3645, in get_controller\n yield default\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py\", line 4150, in name_scope\n yield scope\n File \"/opt/anaconda3/lib/python3.5/contextlib.py\", line 77, in __exit__\n self.gen.throw(type, value, traceback)\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py\", line 2891, in name_scope\n yield \"\" if new_stack is None else new_stack + \"/\"\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py\", line 4150, in name_scope\n yield scope\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/ops/variables.py\", line 293, in _init_from_args\n initial_value, name=\"initial_value\", dtype=dtype)\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py\", line 657, in convert_to_tensor\n ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/constant_op.py\", line 180, in _constant_tensor_conversion_function\n return constant(v, dtype=dtype, name=name)\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/constant_op.py\", line 167, in constant\n attrs={\"value\": tensor_value, \"dtype\": dtype_value}, name=name).outputs[0]\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py\", line 2339, in create_op\n self._check_not_finalized()\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py\", line 2080, in _check_not_finalized\n raise RuntimeError(\"Graph is finalized and cannot be modified.\")\nRuntimeError: Graph is finalized and cannot be modified.\n```\n\nI wondering if it happens because keras' model wasn't created as part of the graph used in tf.train.Supervisor, but I have not a clue on how to prove it or fix it. 
Any idea\uff1f\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "530eff62e5463e00d73e72c51cc830b9ac3a14ab", "files": [{"path": "keras/engine/training.py", "Loc": {"('Model', '_make_train_function', 685)": {"mod": []}, "('Model', '_make_test_function', 705)": {"mod": []}, "('Model', '_make_predict_function', 720)": {"mod": []}}, "status": "modified"}, {"path": "keras/backend/tensorflow_backend.py", "Loc": {"(None, 'manual_variable_initialization', 31)": {"mod": []}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["keras/engine/training.py", "keras/backend/tensorflow_backend.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "keras-team", "repo_name": "keras", "base_commit": "c2e36f369b411ad1d0a40ac096fe35f73b9dffd3", "iss_html_url": "https://github.com/keras-team/keras/issues/4810", "iss_label": "", "title": "Parent module '' not loaded, cannot perform relative import with vgg16.py", "body": "just set up my ubuntu and have the python 3.5 installed, together with Keras...the following occurs:\r\n\r\nRESTART: /usr/local/lib/python3.5/dist-packages/keras/applications/vgg16.py \r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.5/dist-packages/keras/applications/vgg16.py\", line 14, in <module>\r\n from ..models import Model\r\nSystemError: Parent module '' not loaded, cannot perform relative import\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "c2e36f369b411ad1d0a40ac096fe35f73b9dffd3", "files": [{"path": "keras/applications/vgg16.py", "Loc": {"(None, None, None)": {"mod": [14, 15, 16, 17, 18, 19, 20, 21]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["keras/applications/vgg16.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "keras-team", "repo_name": "keras", "base_commit": "da86250e5a95a7adccabd8821b0d51508c82bddc", "iss_html_url": "https://github.com/keras-team/keras/issues/18439", "iss_label": "stat:awaiting response from contributor\nstale\ntype:Bug", "title": "Problem with framework agnostic KerasVariable slicing with another KerasVariable", "body": "I defined a KerasVariable with shape (n,d) in a `keras.Layer()` using `self.add_weight()`. I've also defined another KerasVariable with shape (1) , dtype=\"int32\", and value 0. 
\r\n\r\n```\r\nself.first_variable = self.add_weight(\r\n initializer=\"zeros\", shape=(self.N,input_shape[-1]), trainable=False\r\n)\r\nself.second_variable = self.add_weight(initializer=\"zeros\",shape=(1), trainable=False, dtype=\"int32\")\r\n```\r\n\r\nDuring a call to this custom layer, I'm trying to retrieve a specific index of the first variable using the 2nd variable with:\r\n\r\n`self.first_variable[self.second_variable.value]`\r\n\r\nThis works as expected in pytorch backend, but throws an error in tensorflow backend.\r\n\r\n```\r\nOnly integers, slices (`:`), ellipsis (`...`), tf.newaxis (`None`) and scalar tf.int32/tf.int64 tensors are valid indices, got <tf.Variable 'custom_layer/variable_1:0' shape=(1,) dtype=int32>\r\n\r\nArguments received by CustomLayer.call():\r\n \u2022 x=tf.Tensor(shape=(None, 1600), dtype=float32)\r\n \u2022 training=True\r\n```", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "da86250e5a95a7adccabd8821b0d51508c82bddc", "files": [{"path": "keras/src/ops/core.py", "Loc": {"(None, 'slice', 388)": {"mod": []}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["keras/src/ops/core.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "nvbn", "repo_name": "thefuck", "base_commit": "09d9f63c98f9c4fc0953dd3fd6fb4589e9e1f6f3", "iss_html_url": "https://github.com/nvbn/thefuck/issues/376", "iss_label": "", "title": "Shell history polution", "body": "I haven't used this, but I just thought maybe this is not such a good idea because it's going to make traversing shell history really irritating. Does this do anything to get around that, or are there any workarounds?\n\nIf not, I know in zsh you can just populate the command line with whatever you want using LBUFFER and RBUFFER. What if you made it an option to type \"fuck\" then hit ctrl-F (for \"fuck\"), and it would just replace your command line with the correction, and if there's multiple candidates cycle through them by hitting ctrl-F again. That also lets you edit the correction however you want as well.\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "ohmyzsh", "pro": "ohmyzsh", "path": ["plugins/thefuck"]}], "analysis": {"iss_type": "4", "iss_reason": "5", "loc_way": "comment", "loc_scope": "2", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["plugins/thefuck"]}}, {"organization": "nvbn", "repo_name": "thefuck", "base_commit": "6975d30818792f1b37de702fc93c66023c4c50d5", "iss_html_url": "https://github.com/nvbn/thefuck/issues/1087", "iss_label": "", "title": "Thinks 'sl' is install python softlayer ", "body": "\r\n![image](https://user-images.githubusercontent.com/13007697/81414970-66971080-910d-11ea-8a44-da5ab9ca77f9.png)\r\nAh, yes. 
This wasn't a mis-spelling of ls at all, but me installing Python-Softlayer.\r\n\r\n\r\nThe output of `thefuck --version` (something like `The Fuck 3.1 using Python\r\n3.5.0 and Bash 4.4.12(1)-release`):\r\n\r\n The Fuck 3.30 using Python 3.8.2 and Bash 5.0.16(1)-release\r\n\r\n\r\nYour system (Debian 7, ArchLinux, Windows, etc.):\r\n\r\nManjaro\r\n\r\nHow to reproduce the bug:\r\n\r\nType sl in the terminal, then fuck\r\n\r\n\r\n\r\n<!-- It's only with enough information that we can do something to fix the problem. -->\r\n", "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "6975d30818792f1b37de702fc93c66023c4c50d5", "files": [{"path": "thefuck/rules/sl_ls.py", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["thefuck/rules/sl_ls.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "hiyouga", "repo_name": "LLaMA-Factory", "base_commit": "921778a7cfa442409d17ab946c5f579e308c4f2b", "iss_html_url": "https://github.com/hiyouga/LLaMA-Factory/issues/404", "iss_label": "invalid", "title": "When calling via the API, inexplicable automatic Q&A appears in the response", "body": "Using the baichuan-13b model\r\nUsing scr/api_demo.py\r\nThe prompt was: Hello\r\nThe response is as shown in the image\r\n![image](https://github.com/hiyouga/LLaMA-Efficient-Tuning/assets/26214176/0d2beb92-e3b4-4126-a84f-d30bde97a194)\r\n\r\nI don't understand why automatic multi-turn self-Q&A appears", "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "921778a7cfa442409d17ab946c5f579e308c4f2b", "files": [{"path": "README.md", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0\nmentioned in README", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []}}, {"organization": "hiyouga", "repo_name": "LLaMA-Factory", "base_commit": "984b202f835d6f3f4869cbb1f0460bb2d9163fc1", "iss_html_url": "https://github.com/hiyouga/LLaMA-Factory/issues/6562", "iss_label": "solved", "title": "Batch Inference Error for qwen2vl Model After Full Fine-Tuning", "body": "### Reminder\n\n- [X] I have read the README and searched the existing issues.\n\n### System Info\n\n- `llamafactory` version: 0.9.2.dev0\r\n- Python version: 3.8.20\r\n- PyTorch version: 2.4.1+cu121 (GPU)\r\n- Transformers version: 4.46.1\r\n- Datasets version: 3.1.0\r\n- Accelerate version: 1.0.1\r\n- PEFT version: 0.12.0\r\n- TRL version: 0.9.6\n\n### Reproduction\n\n\r\nI have fine-tuned the qwen2vl model using the command:\r\n\r\n```python\r\nllamafactory-cli train examples/train_full/qwen2vl_full_sft.yaml\r\n```\r\nAfter saving the model in the \"saves\" directory, I attempted to perform batch inference using the provided script:\r\n\r\n```python\r\npython scripts/vllm_infer.py --model_name_or_path path_to_merged_model --dataset alpaca_en_demo\r\n```\r\nHowever, I encountered the following error:\r\n\r\n```python\r\nValueError: This model does not support image input.\r\n```\r\n\r\n1.The model_path I used points to the model saved after running the full fine-tuning script.\r\n2.I have successfully used the LoRA 
fine-tuned model (trained with the lora_sft script and merged with merge_lora script), which allows for inference using the method provided in the qwen2vl documentation.\r\n3.However, the model saved after full fine-tuning does not seem to support direct inference in the same way.\n\n### Others\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "984b202f835d6f3f4869cbb1f0460bb2d9163fc1", "files": [{"path": "scripts/vllm_infer.py", "Loc": {"(None, 'vllm_infer', 38)": {"mod": [43]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["scripts/vllm_infer.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "hiyouga", "repo_name": "LLaMA-Factory", "base_commit": "4ed2b629a51ef58d229c795e85238d40346ecb58", "iss_html_url": "https://github.com/hiyouga/LLaMA-Factory/issues/5478", "iss_label": "solved", "title": "Can we set default_system in yaml file when training?", "body": "### Reminder\n\n- [X] I have read the README and searched the existing issues.\n\n### System Info\n\n- `llamafactory` version: 0.8.4.dev0\r\n- Platform: Linux-5.4.0-125-generic-x86_64-with-glibc2.29\r\n- Python version: 3.8.10\r\n- PyTorch version: 2.4.0+cu121 (GPU)\r\n- Transformers version: 4.44.2\r\n- Datasets version: 2.21.0\r\n- Accelerate version: 0.33.0\r\n- PEFT version: 0.12.0\r\n- TRL version: 0.9.6\r\n- GPU type: NVIDIA A800-SXM4-80GB\r\n- DeepSpeed version: 0.15.0\n\n### Reproduction\n\n llamafactory-cli train\n\n### Expected behavior\n\nWe do not need the `default_system` in `template.py`.\r\nSet `default_system` in training yaml file to overwrite so we do not need to modify the source code in `template.py`.\n\n### Others\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "4ed2b629a51ef58d229c795e85238d40346ecb58", "files": [{"path": "data/", "Loc": {}}, {"path": "data/", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["data/"]}}, {"organization": "hiyouga", "repo_name": "LLaMA-Factory", "base_commit": "18c6e6fea9dcc77c03b36301efe2025a87e177d5", "iss_html_url": "https://github.com/hiyouga/LLaMA-Factory/issues/1971", "iss_label": "solved", "title": "llama'response repeat input then the answer", "body": "### Reminder\n\n- [ ] I have read the README and searched the existing issues.\n\n### Reproduction\n\n input_ids = tokenizer([\"[INST] \" +{text}\" + \" [/INST]\"], return_tensors=\"pt\", add_special_tokens=False).input_ids.to('cuda')\r\n\r\n generate_input = {\r\n \"input_ids\": input_ids,\r\n \"max_new_tokens\": 512,\r\n \"do_sample\": True,\r\n \"top_k\": 10,\r\n \"top_p\": 0.95,\r\n \"temperature\": 0.01,\r\n \"repetition_penalty\": 1.3,\r\n \"eos_token_id\": tokenizer.eos_token_id,\r\n \"bos_token_id\": tokenizer.bos_token_id,\r\n \"pad_token_id\": tokenizer.pad_token_id\r\n }\r\n\r\n generate_ids = model.generate(**generate_input)\r\n response = tokenizer.decode(generate_ids[0], skip_special_tokens=True)\r\n print(response)\n\n### Expected behavior\n\nI expect that llama just response the answer. 
for example, the input is \"[INST] how are you [/INST]\" and the expected output is \"**I am fine**\",\r\nbut it repeats the input, so the output is \"**[INST] how are you [/INST] I am fine**\"\r\n\n\n### System Info\n\n_No response_\n\n### Others\n\nDo you have any suggestions? This behaviour limits the speed of the output, and I wonder why it happens?
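This echo is expected behaviour for decoder-only models: `generate` returns the prompt tokens followed by the newly generated ones. A minimal sketch of the usual workaround, reusing `model`, `tokenizer`, `input_ids` and `generate_input` from the reproduction snippet above, is to decode only the tokens that come after the prompt:

```python
# Decode only the continuation, skipping the echoed prompt (sketch building on the
# reproduction snippet above, so model/tokenizer/input_ids are assumed defined).
prompt_len = input_ids.shape[-1]
generate_ids = model.generate(**generate_input)
response = tokenizer.decode(generate_ids[0][prompt_len:], skip_special_tokens=True)
print(response)  # "I am fine" without the "[INST] how are you [/INST]" prefix
```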
\u5206\u6570\u8fd8\u80fd\u4f7f\u7528\u5176\u4ed6\u7684\u8bc4\u4f30\u6307\u6807\u5417\uff1f", "body": "\u6211\u60f3\u628a\u6a21\u578b\u7528\u4e8e\u610f\u56fe\u8bcd\u69fd\u7684\u63d0\u53d6\uff0c\u4e00\u822c\u8fd9\u4e2a\u4efb\u52a1\u7684\u8bc4\u4ef7\u6307\u6807\u662f\u51c6\u786e\u7387\u548cF1 score\u7b49\uff0c\u8bf7\u95ee\u5728\u8fd9\u4e2a\u9879\u76ee\u91cc\u80fd\u4f7f\u7528\u51c6\u786e\u7387\u548cF1 score\u4f5c\u4e3a\u8bc4\u4ef7\u6307\u6807\u5417\uff1f\u5e94\u8be5\u600e\u4e48\u505a\u5462\uff1f\u8c22\u8c22\u5927\u4f6c\u89e3\u7b54~", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "5377d0bf95f2fc79b75b253e956a7945f3030ad3", "files": [{"path": "src/llmtuner/tuner/sft/metric.py", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/llmtuner/tuner/sft/metric.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "hiyouga", "repo_name": "LLaMA-Factory", "base_commit": "93809d1c3b73898a89cbdd99061eeeed5fd4f6a7", "iss_html_url": "https://github.com/hiyouga/LLaMA-Factory/issues/1120", "iss_label": "solved", "title": "\u7cfb\u7edf\u63d0\u793a\u8bcd", "body": "\u60f3\u8bf7\u6559\u4e0b\u5927\u4f6c\uff0c\u201c\u7cfb\u7edf\u63d0\u793a\u8bcd\uff08\u975e\u5fc5\u586b\uff09\u201c\u6846\u4f20\u5165\u7684\u5185\u5bb9\u600e\u4e48\u8f93\u5165\u7ed9\u6a21\u578b\u7684\uff0c\u600e\u4e48\u548c\u201d\u8f93\u5165\u3002\u3002\u201c\u6846\u4f20\u5165\u7684\u5185\u5bb9\u62fc\u63a5\u7684\uff1f\u5bf9\u5e94\u7684\u4ee3\u7801\u5728\u54ea\u91cc\uff1f\r\n\r\n\u611f\u8c22\u611f\u8c22", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "93809d1c3b73898a89cbdd99061eeeed5fd4f6a7", "files": [{"path": "src/llmtuner/extras/template.py", "Loc": {"('Template', '_encode', 93)": {"mod": [109]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/llmtuner/extras/template.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "hiyouga", "repo_name": "LLaMA-Factory", "base_commit": "757564caa1a0e83d184100604e43efe3c5030c0e", "iss_html_url": "https://github.com/hiyouga/LLaMA-Factory/issues/2584", "iss_label": "solved", "title": "\u8bf7\u6559llama pro\u5e94\u8be5\u600e\u4e48\u7528\uff1f\u662f\u53ef\u4ee5\u7528\u6765\u5fae\u8c03\u5417\uff1f", "body": "### Reminder\n\n- [X] I have read the README and searched the existing issues.\n\n### Reproduction\n\n\u8bf7\u6559llama pro\u5e94\u8be5\u600e\u4e48\u7528\uff1f\u662f\u53ef\u4ee5\u7528\u6765\u505apt\u548cSFT\u5417\uff1f\n\n### Expected behavior\n\n_No response_\n\n### System Info\n\n_No response_\n\n### Others\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "757564caa1a0e83d184100604e43efe3c5030c0e", "files": [{"path": "tests/llama_pro.py", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["tests/llama_pro.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "hiyouga", "repo_name": "LLaMA-Factory", "base_commit": "e678c1ccb2583e7b3e9e5bf68b58affc1a71411c", "iss_html_url": 
"https://github.com/hiyouga/LLaMA-Factory/issues/5011", "iss_label": "solved", "title": "Compute_Accuracy", "body": "### Reminder\n\n- [X] I have read the README and searched the existing issues.\n\n### System Info\n\n![image](https://github.com/user-attachments/assets/4847743e-e25b-4136-a3f4-43a3e7335f80)\r\n\r\nI'm curious about this metrics for and how could i use this? and when? ( ComputeAccuracy )\r\n\r\n![image](https://github.com/user-attachments/assets/672f14bb-c812-45fe-ad77-d3c66f660ce5)\r\nand I saw llama-factory paper's metrics ( multi-choice ) and I wonder if this metrics are match with ComputeAccuracy\r\n\r\nanyone can answer me ?\r\n\r\nplease tell me how can i use this metrics, give me some example commands \r\n\r\nthank you! \n\n### Reproduction\n\n. \n\n### Expected behavior\n\n_No response_\n\n### Others\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "e678c1ccb2583e7b3e9e5bf68b58affc1a71411c", "files": [{"path": "examples/train_lora/llama3_lora_eval.yaml", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "\u914d\u7f6e"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["examples/train_lora/llama3_lora_eval.yaml"], "asset": []}}, {"organization": "hiyouga", "repo_name": "LLaMA-Factory", "base_commit": "955e01c038ccc708def77f392b0e342f2f51dc9b", "iss_html_url": "https://github.com/hiyouga/LLaMA-Factory/issues/4787", "iss_label": "solved", "title": "\u5168\u91cf\u5fae\u8c03BaiChuan2-7B-Chat\u7684yaml\u6587\u4ef6\u4e2d\u5982\u4f55\u4fee\u6539\u8d85\u53c2\u6570\u80fd\u5728\u4e09\u5f20A6000\u4e0a\u8fd0\u884c", "body": "### Reminder\r\n\r\n- [X] I have read the README and searched the existing issues.\r\n\r\n### System Info\r\n\r\n- `llamafactory` version: 0.8.2.dev0\r\n- Platform: Linux-3.10.0-1160.88.1.el7.x86_64-x86_64-with-glibc2.17\r\n- Python version: 3.8.19\r\n- PyTorch version: 2.3.0+cu121 (GPU)\r\n- Transformers version: 4.41.2\r\n- Datasets version: 2.20.0\r\n- Accelerate version: 0.31.0\r\n- PEFT version: 0.11.1\r\n- TRL version: 0.9.4\r\n- GPU type: NVIDIA RTX A6000\r\n- DeepSpeed version: 0.14.0\r\n- vLLM version: 0.4.3\r\n\r\n### Reproduction\r\n```yaml\r\n### model\r\nmodel_name_or_path: /data/Baichuan2-7B-Chat\r\n\r\n### method\r\nstage: sft\r\ndo_train: true\r\nfinetuning_type: full\r\n\r\n### ddp\r\nddp_timeout: 180000000\r\ndeepspeed: examples/deepspeed/ds_z3_config.json\r\n\r\n### dataset\r\ndataset: entity\r\ntemplate: baichuan2\r\ncutoff_len: 1024\r\nmax_samples: 1000\r\noverwrite_cache: true\r\npreprocessing_num_workers: 16\r\n\r\n### output\r\noutput_dir: saves/baichuan2-7b/full/sft\r\nlogging_steps: 10\r\nsave_steps: 500\r\nplot_loss: true\r\noverwrite_output_dir: true\r\n\r\n### train\r\nper_device_train_batch_size: 1\r\ngradient_accumulation_steps: 2\r\nlearning_rate: 1.0e-4\r\nnum_train_epochs: 3.0\r\nlr_scheduler_type: cosine\r\nwarmup_ratio: 0.1\r\npure_bf16: true\r\n\r\n### eval\r\nval_size: 0.1\r\nper_device_eval_batch_size: 1\r\neval_strategy: steps\r\neval_steps: 500\r\n```\r\n### Expected 
behavior\r\n\r\n\u60a8\u7684\u9879\u76ee\u4e2d\u7ed9\u51fa7B\u6a21\u578b\u80fd\u5728120G\u7684\u663e\u5b58\u4e0a\u8fd0\u884c\uff0c\u73b0\u5728\u6211\u57283\u5f20A6000\u4e0a\u8fd0\u884c\u4f1a\u51fa\u73b0OOM\uff0c\u5e0c\u671b\u60a8\u80fd\u544a\u8bc9\u6211\u600e\u4e48\u4fee\u6539\u8d85\u53c2\u6570\u80fd\u8ba9\u5b83\u8dd1\u8d77\u6765\u3002\u6211\u4e5f\u53c2\u8003\u4e86\u4e4b\u524d\u7684issue\uff0c\u8bbe\u7f6e\u4e86pure_bf16\uff0c\u4ecd\u7136\u4e0d\u80fd\u8fd0\u884c\u3002\r\n\r\n### Others\r\n\r\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "955e01c038ccc708def77f392b0e342f2f51dc9b", "files": [{"path": "examples/deepspeed/ds_z3_offload_config.json", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["examples/deepspeed/ds_z3_offload_config.json"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "hiyouga", "repo_name": "LLaMA-Factory", "base_commit": "955e01c038ccc708def77f392b0e342f2f51dc9b", "iss_html_url": "https://github.com/hiyouga/LLaMA-Factory/issues/4803", "iss_label": "solved", "title": "predict_oom", "body": "### Reminder\n\n- [X] I have read the README and searched the existing issues.\n\n### System Info\n\nmodel_name_or_path: llm/Qwen2-72B-Instruct\r\n# adapter_name_or_path: saves/qwen2_7b_errata_0705/lora_ace04_instruction_v1_savesteps_10/sft\r\n\r\n### method\r\nstage: sft\r\ndo_predict: true\r\nfinetuning_type: lora\r\n\r\n### dataset\r\ndataset: prompt_to_get_cot_normal\r\ntemplate: qwen\r\ncutoff_len: 2048\r\nmax_samples: 1000\r\noverwrite_cache: true\r\npreprocessing_num_workers: 16\r\n\r\n### output\r\noutput_dir: saves/qwen2_72b_errata_0712/lora/predict\r\noverwrite_output_dir: true\r\n\r\n### eval\r\nper_device_eval_batch_size: 1\r\npredict_with_generate: true\r\nddp_timeout: 180000000\n\n### Reproduction\n\n8\u5361A100 80G \u5728 72b \u7684\u57fa\u5ea7 predict 1k\u7684\u6570\u636e\u663e\u793aoom, \u6240\u6709\u7684\u663e\u5361\u540c\u65f6\u52a0\u8f7d\u6574\u4e2a\u6a21\u578b\u53c2\u6570, \u5bfc\u81f4oom\r\n\u636e\u5b98\u65b9 160G \u5373\u53ef, \u6211\u8fd980*8 \u90fd\u4e0d\u591f, \u8bf7\u95ee\u662fbug\u8fd8\u662f\u9700\u8981\u8bbe\u7f6e\u4ec0\u4e48\u53c2\u6570;\n\n### Expected behavior\n\n_No response_\n\n### Others\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "955e01c038ccc708def77f392b0e342f2f51dc9b", "files": [{"path": "Examples/train_lora/", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3\n\u7528\u6237\u914d\u7f6e\u9519\u8bef", "loc_way": "comment", "loc_scope": "3", "info_type": "config"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["Examples/train_lora/"]}}, {"organization": "hiyouga", "repo_name": "LLaMA-Factory", "base_commit": "3f11ab800f7dcf4b61a7c72ead4e051db11a8091", "iss_html_url": "https://github.com/hiyouga/LLaMA-Factory/issues/4178", "iss_label": "solved", "title": "glm-4-9b-chat-1m do_predict\u5f97\u5230\u7684generated_predictions.jsonl\u4e2d\u7684label\u51fa\u73b0\u4e86\\n\u548c\u4e00\u4e9b\u975e\u6570\u636e\u96c6\u4e2d\u7684\u7ed3\u679c\u3002", "body": "### Reminder\n\n- [X] I have read the README and searched the existing issues.\n\n### System Info\n\nllamafactory 0.7.2.dev0\r\nPython 3.10.14\r\nubuntu 20.04\n\n### Reproduction\n\n$llamafactory-cli train 
{"organization": "hiyouga", "repo_name": "LLaMA-Factory", "base_commit": "3f11ab800f7dcf4b61a7c72ead4e051db11a8091", "iss_html_url": "https://github.com/hiyouga/LLaMA-Factory/issues/4178", "iss_label": "solved", "title": "The labels in generated_predictions.jsonl produced by glm-4-9b-chat-1m do_predict contain \n and some results that are not in the dataset.", "body": "### Reminder\n\n- [X] I have read the README and searched the existing issues.\n\n### System Info\n\nllamafactory 0.7.2.dev0\r\nPython 3.10.14\r\nubuntu 20.04\n\n### Reproduction\n\n$llamafactory-cli train glm_predict.yaml\r\n\r\ngenerated_predictions.jsonl output:\r\n{\"label\": \"\\n[S,137.0]\", \"predict\": \"\\n[S,137.0]\"}\r\n{\"label\": \"\\n\", \"predict\": \"\\n\"}\r\n{\"label\": \"\\n\", \"predict\": \"\\n\"}\r\n{\"label\": \"\\n[S,593\", \"predict\": \"\\n[S,593\"}\r\n{\"label\": \"\\n[H,593\", \"predict\": \"\\n[S,593\"}\r\n\r\nglm_predict.yaml contents:\r\n### model\r\nmodel_name_or_path: ./THUDM_glm-4-9b-chat-1m\r\nadapter_name_or_path: saves/glm/lora/sft\r\n\r\n### method\r\nstage: sft\r\ndo_predict: true\r\nfinetuning_type: lora\r\n\r\n### dataset\r\ndataset: data_v0.1\r\ntemplate: glm4\r\noverwrite_cache: true\r\npreprocessing_num_workers: 16\r\n\r\n### output\r\noutput_dir: saves/glm/lora/predict\r\n\r\n### eval\r\nper_device_eval_batch_size: 4\r\npredict_with_generate: true\r\n\r\n\r\n\r\n\n\n### Expected behavior\n\nExpected output:\r\ngenerated_predictions.jsonl output:\r\n{\"label\": \"[S,137.0]\", \"predict\": \"[S,137.0]\"}\r\n{\"label\": \"[S,593]\", \"predict\": \"[S,593]\"}\r\n{\"label\": \"[H,593]\", \"predict\": \"[S,593]\"}\r\n\r\n\r\nNone of the results containing only \"\\n\" appear in the dataset.\n\n### Others\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "3f11ab800f7dcf4b61a7c72ead4e051db11a8091", "files": [{"path": "src/llamafactory/data/template.py", "Loc": {"(None, None, None)": {"mod": [663, 664]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/llamafactory/data/template.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "hiyouga", "repo_name": "LLaMA-Factory", "base_commit": "d46c136c0e104c50999df18a88c42658b819f71f", "iss_html_url": "https://github.com/hiyouga/LLaMA-Factory/issues/230", "iss_label": "solved", "title": "After training baichuan-13b with this project, how do I load the trained model in baichuan-13b?", "body": "After training is complete, how should the baichuan-13b project be modified to load the trained model?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "d46c136c0e104c50999df18a88c42658b819f71f", "files": [{"path": "src/export_model.py", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/export_model.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "hiyouga", "repo_name": "LLaMA-Factory", "base_commit": "024b0b1ab28d3c3816f319370ed79a4f26d40edf", "iss_html_url": "https://github.com/hiyouga/LLaMA-Factory/issues/1995", "iss_label": "solved", "title": "Running RM LoRA on Phi-1.5 raises 'NoneType' object is not subscriptable", "body": "### Reminder\n\n- [X] I have read the README and searched the existing issues.\n\n### Reproduction\n\n\r\n\r\nShell script:\r\n```\r\ndeepspeed --num_gpus 8 --master_port=9901 src/train_bash.py \\\r\n --stage rm \\\r\n --model_name_or_path Phi-1.5 \\\r\n --deepspeed ds_config.json \\ \r\n --adapter_name_or_path sft_lora \\ \r\n --create_new_adapter \\\r\n --do_train \\ \r\n --dataset comparision_gpt4_en \\\r\n --template default \\\r\n --finetuning_type lora \\\r\n --lora_target Wqkv \\ \r\n --overwrite_ouput_dir \\ \r\n --output_dir rm_lora \\ \r\n --per_device_train_batch_size 2 \\\r\n --gradient_accumulation_steps 4 \\\r\n --lr_scheduler_type cosine \\ \r\n --logging_steps 1 \\\r\n --save_steps 200 \\ \r\n --learning_rate 1e-6 \\ \r\n --num_train_epochs 1.0 \\ \r\n --max_steps 200 \\\r\n --fp16 > rm.log 2>&1 &\r\nwait\r\n \r\n```\n\n### Expected behavior\n\nExpected result: the weights load successfully and training starts\n\n### System Info\n\nDevice: NPU\r\nPackage versions:\r\n```\r\ntransformers==4.36.1\r\ndeepspeed==0.12.4\r\npeft==0.7.1\r\ntrl==0.7.4\r\ntorch==2.1.0\r\naccelerate==0.25.0\r\n```\n\n### Others\n\nError message:\r\nTraceback\r\n  File \"src/train_bash.py\", line 14\r\n    main()\r\n  File \"src/train_bash.py\", line 5\r\n    run_exp()\r\n  File \"LLaMA-Factory/src/llmtuner/train/tuner.py\", line 28, in run_exp\r\n    run_rm(model_args, data_args, training_args, finetuning_args, callbacks)\r\n  File \"LLaMA-Factory/src/llmtuner/train/rm/workflow.py\", line 50, in run_rm\r\n    train_result = trainer.train(resume_from_checkpoint=training_args.resume_from_checkpoint)\r\n  File \"transformers/trainer.py\", line 2728\r\n    loss = self.compute_loss(model, inputs)\r\n  File \"LLaMA-Factory/src/llmtuner/train/rm/trainer.py\", line 41, in compute_loss\r\n    _, _, values = model(**inputs, output_hidden_states=True, return_dict=True)\r\n  File \".../trl/models/modeling_value_head.py\", line 175, in forward\r\n    last_hidden_state = base_model_output.hidden_state[-1]\r\nTypeError: 'NoneType' object is not subscriptable\r\n\r\nAt first I suspected a problem with the weights; I re-downloaded them and still got this error, and swapping Phi-1.5 for Phi-2 raises the same error.\r\n\r\n \r\n \r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "docs", "pro": "transformers", "path": ["model_doc/phi"]}], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "2", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["model_doc/phi"], "test": [], "config": [], "asset": []}}, {"organization": "hiyouga", "repo_name": "LLaMA-Factory", "base_commit": "d46c136c0e104c50999df18a88c42658b819f71f", "iss_html_url": "https://github.com/hiyouga/LLaMA-Factory/issues/226", "iss_label": "solved", "title": "How does the project process multi-turn dialogue corpora?", "body": "Is the input formed by concatenating the history turns to predict the answer of the final turn? Or is the history split into training samples per turn, e.g. a 5-turn dialogue split into samples for turns 1, 2, 3, 4 and 5? Could the author point me to the code for the concrete processing? I'd like to study it. Thanks.
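One common scheme, shown here as an illustration and not necessarily this project's exact implementation, expands an N-turn dialogue into N supervised pairs, where each pair's input is the concatenated history up to that turn:

```python
# Illustrative sketch of turn-wise expansion (not project code): a 3-turn dialog
# becomes 3 (context, response) training pairs.
dialog = [("hi", "hello!"), ("how are you", "fine"), ("bye", "see you")]

samples = []
for i, (question, response) in enumerate(dialog):
    context = []
    for q, a in dialog[:i]:                      # all earlier turns
        context += [f"User: {q}", f"Assistant: {a}"]
    context.append(f"User: {question}")          # current user turn
    samples.append(("\n".join(context), response))

for ctx, resp in samples:
    print(repr(ctx), "->", repr(resp))
```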
", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "d46c136c0e104c50999df18a88c42658b819f71f", "files": [{"path": "src/llmtuner/dsets/preprocess.py", "Loc": {"(None, 'preprocess_supervised_dataset', 50)": {"mod": []}}, "status": "modified"}]}, "own_code_loc": [], 
"ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/llmtuner/dsets/preprocess.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "zylon-ai", "repo_name": "private-gpt", "base_commit": "2f3aab9cfdc139f399387dbb90300d5a8bf8d2f1", "iss_html_url": "https://github.com/zylon-ai/private-gpt/issues/375", "iss_label": "bug", "title": "ValueError: Requested tokens exceed context window of 1000", "body": "After I ingest a file, run privateGPT and try to ask anything, I get following error:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"C:\\Stable_Diffusion\\privateGPT\\privateGPT.py\", line 75, in <module>\r\n main()\r\n File \"C:\\Stable_Diffusion\\privateGPT\\privateGPT.py\", line 47, in main\r\n res = qa(query)\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\chains\\base.py\", line 140, in __call__\r\n raise e\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\chains\\base.py\", line 134, in __call__\r\n self._call(inputs, run_manager=run_manager)\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\chains\\retrieval_qa\\base.py\", line 120, in _call\r\n answer = self.combine_documents_chain.run(\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\chains\\base.py\", line 239, in run\r\n return self(kwargs, callbacks=callbacks)[self.output_keys[0]]\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\chains\\base.py\", line 140, in __call__\r\n raise e\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\chains\\base.py\", line 134, in __call__\r\n self._call(inputs, run_manager=run_manager)\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\chains\\combine_documents\\base.py\", line 84, in _call\r\n output, extra_return_dict = self.combine_docs(\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\chains\\combine_documents\\stuff.py\", line 87, in combine_docs\r\n return self.llm_chain.predict(callbacks=callbacks, **inputs), {}\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\chains\\llm.py\", line 213, in predict\r\n return self(kwargs, callbacks=callbacks)[self.output_key]\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\chains\\base.py\", line 140, in __call__\r\n raise e\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\chains\\base.py\", line 134, in __call__\r\n self._call(inputs, run_manager=run_manager)\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\chains\\llm.py\", line 69, in _call\r\n response = self.generate([inputs], run_manager=run_manager)\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\chains\\llm.py\", line 79, in generate\r\n return self.llm.generate_prompt(\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\llms\\base.py\", line 134, in generate_prompt\r\n return 
self.generate(prompt_strings, stop=stop, callbacks=callbacks)\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\llms\\base.py\", line 191, in generate\r\n raise e\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\llms\\base.py\", line 185, in generate\r\n self._generate(prompts, stop=stop, run_manager=run_manager)\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\llms\\base.py\", line 405, in _generate\r\n self._call(prompt, stop=stop, run_manager=run_manager)\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\llms\\llamacpp.py\", line 225, in _call\r\n for token in self.stream(prompt=prompt, stop=stop, run_manager=run_manager):\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\llms\\llamacpp.py\", line 274, in stream\r\n for chunk in result:\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\llama_cpp\\llama.py\", line 618, in _create_completion\r\n raise ValueError(\r\nValueError: Requested tokens exceed context window of 1000\r\n```\r\n\r\nI tried it with docx and pdf; the models used were ggml-vic13b-q5_1.bin and stable-vicuna-13B.ggml.q4_0.bin.\r\nDuring ingestion or while loading privateGPT I get no error.\r\n\r\nOS: Windows 10\r\nCPU: Ryzen 7 3700\r\nRAM: 32gb\r\n\r\n\r\n
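In the primordial privateGPT this limit came from the model context size setting rather than from the document itself. A sketch of the relevant knob, assuming the `.env`-driven setup of that era where a `MODEL_N_CTX` entry feeds llama.cpp's `n_ctx`:

```python
# Hedged sketch (assumption: a .env entry like MODEL_N_CTX=1000 feeds n_ctx).
import os
from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()
model_n_ctx = int(os.environ.get("MODEL_N_CTX", "1000"))
print(f"context window: {model_n_ctx} tokens")
# Raising MODEL_N_CTX (e.g. to 2048) enlarges the window this error complains about.
```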
"\r\ntoken\u7684\u8ba1\u7b97\u65b9\u5f0f\u5f88\u5947\u602a\u4e94\u4e2a\u5b57\u6307\u4ee4\u7684token\u6bd4\u4e03\u4e2a\u5b57\u591a\r\n![\u5fae\u4fe1\u56fe\u7247_20230713094810](https://github.com/imartinez/privateGPT/assets/139415035/6346ae1f-9c65-4721-b7dd-a176fc9be4e1)\r\n![\u5fae\u4fe1\u56fe\u7247_20230713094822](https://github.com/imartinez/privateGPT/assets/139415035/60f2d272-8a80-48d7-9032-4d915a83aa7d)\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [], "ass_file_loc": [".env"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "4", "loc_way": "comment", "loc_scope": "1", "info_type": "config"}, "loctype": {"code": [], "doc": [], "test": [], "config": [".env"], "asset": []}}, {"organization": "zylon-ai", "repo_name": "private-gpt", "base_commit": "d17c34e81a84518086b93605b15032e2482377f7", "iss_html_url": "https://github.com/zylon-ai/private-gpt/issues/1724", "iss_label": "", "title": "Error in Model Download and Tokenizer Fetching During Setup Script Execution", "body": "### Environment\r\nOperating System: Macbook Pro M1\r\nPython Version: 3.11\r\n\r\nDescription\r\nI'm encountering an issue when running the setup script for my project. The script is supposed to download an embedding model and an LLM model from Hugging Face, followed by their respective tokenizers. While the script successfully downloads the embedding and LLM models, it fails when attempting to download the tokenizer with a 404 Client Error.\r\n\r\n### Steps to Reproduce\r\nRun `poetry run python scripts/setup`\r\nEmbedding model (BAAI/bge-small-en-v1.5) and the LLM model (mistral-7b-instruct-v0.2.Q4_K_M.gguf) are downloaded successfully.\r\nThe script then attempts to download a tokenizer and fails.\r\n\r\n### Actual Behavior\r\nThe script throws an error when trying to download the tokenizer. The error message indicates a 404 Client Error: Not Found for url: https://huggingface.co/None/resolve/main/tokenizer_config.json. 
This suggests that either the tokenizer's name is not being correctly passed (as indicated by the 'None' in the URL) or there's an issue with the tokenizer's availability on Hugging Face.\r\n\r\n### Logs\r\n```bash\r\n22:02:47.207 [INFO ] private_gpt.settings.settings_loader - Starting application with profiles=['default']\r\nDownloading embedding BAAI/bge-small-en-v1.5\r\nFetching 14 files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 14/14 [00:00<00:00, 14.81it/s]\r\nEmbedding model downloaded!\r\nDownloading LLM mistral-7b-instruct-v0.2.Q4_K_M.gguf\r\nLLM model downloaded!\r\nDownloading tokenizer None\r\nTraceback (most recent call last):\r\n File \"/Users/User/Projects/Tests/privateGPT/.venv/lib/python3.11/site-packages/huggingface_hub/utils/_errors.py\", line 270, in hf_raise_for_status\r\n response.raise_for_status()\r\n File \"/Users/User/Projects/Tests/privateGPT/.venv/lib/python3.11/site-packages/requests/models.py\", line 1021, in raise_for_status\r\n raise HTTPError(http_error_msg, response=self)\r\nrequests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/None/resolve/main/tokenizer_config.json\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/Users/User/Projects/Tests/privateGPT/.venv/lib/python3.11/site-packages/transformers/utils/hub.py\", line 398, in cached_file\r\n resolved_file = hf_hub_download(\r\n ^^^^^^^^^^^^^^^^\r\n File \"/Users/User/Projects/Tests/privateGPT/.venv/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py\", line 118, in _inner_fn\r\n return fn(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/User/Projects/Tests/privateGPT/.venv/lib/python3.11/site-packages/huggingface_hub/file_download.py\", line 1374, in hf_hub_download\r\n raise head_call_error\r\n File \"/Users/User/Projects/Tests/privateGPT/.venv/lib/python3.11/site-packages/huggingface_hub/file_download.py\", line 1247, in hf_hub_download\r\n metadata = get_hf_file_metadata(\r\n ^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/User/Projects/Tests/privateGPT/.venv/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py\", line 118, in _inner_fn\r\n return fn(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/User/Projects/Tests/privateGPT/.venv/lib/python3.11/site-packages/huggingface_hub/file_download.py\", line 1624, in get_hf_file_metadata\r\n r = _request_wrapper(\r\n ^^^^^^^^^^^^^^^^^\r\n File \"/Users/User/Projects/Tests/privateGPT/.venv/lib/python3.11/site-packages/huggingface_hub/file_download.py\", line 402, in _request_wrapper\r\n response = _request_wrapper(\r\n ^^^^^^^^^^^^^^^^^\r\n File \"/Users/User/Projects/Tests/privateGPT/.venv/lib/python3.11/site-packages/huggingface_hub/file_download.py\", line 426, in _request_wrapper\r\n hf_raise_for_status(response)\r\n File \"/Users/User/Projects/Tests/privateGPT/.venv/lib/python3.11/site-packages/huggingface_hub/utils/_errors.py\", line 320, in 
hf_raise_for_status\r\n raise RepositoryNotFoundError(message, response) from e\r\nhuggingface_hub.utils._errors.RepositoryNotFoundError: 404 Client Error. (Request ID: Root=1-65f21479-09e39977255bdb72502d4b8c;66371627-2d02-44c6-8f25-d115820c1986)\r\n\r\nRepository Not Found for url: https://huggingface.co/None/resolve/main/tokenizer_config.json.\r\nPlease make sure you specified the correct `repo_id` and `repo_type`.\r\nIf you are trying to access a private or gated repo, make sure you are authenticated.\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/Users/User/Projects/Tests/privateGPT/scripts/setup\", line 43, in <module>\r\n AutoTokenizer.from_pretrained(\r\n File \"/Users/User/Projects/Tests/privateGPT/.venv/lib/python3.11/site-packages/transformers/models/auto/tokenization_auto.py\", line 767, in from_pretrained\r\n tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/User/Projects/Tests/privateGPT/.venv/lib/python3.11/site-packages/transformers/models/auto/tokenization_auto.py\", line 600, in get_tokenizer_config\r\n resolved_config_file = cached_file(\r\n ^^^^^^^^^^^^\r\n File \"/Users/User/Projects/Tests/privateGPT/.venv/lib/python3.11/site-packages/transformers/utils/hub.py\", line 421, in cached_file\r\n raise EnvironmentError(\r\nOSError: None is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'\r\nIf this is a private repository, make sure to pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=<your_token>`\r\n```\r\n### Additional Information\r\nIt seems like the script is not correctly fetching the name or identifier for the tokenizer.\r\nThe issue might be related to how the tokenizer's name is being resolved or passed in the script (None).\r\nI also tried with docker compose, yielding same results. 
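The `None` in the failing URL means no tokenizer name ever reached `AutoTokenizer.from_pretrained`. A guard along these lines, a sketch rather than the project's actual code, makes that failure mode explicit:

```python
# Sketch: skip the tokenizer download when no name is configured. The variable
# mirrors whatever the setup script reads from its settings file (assumption).
from transformers import AutoTokenizer

tokenizer_name = None  # hypothetical: unset in settings
if tokenizer_name:
    AutoTokenizer.from_pretrained(tokenizer_name)
else:
    print("No tokenizer configured; skipping tokenizer download")
```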
Maybe it is just some setting that I am missing from the docs?\r\n\r\nThank you\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "d17c34e81a84518086b93605b15032e2482377f7", "files": [{"path": "settings.yaml", "Loc": {"(None, None, 42)": {"mod": [42]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Config\nCode"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["settings.yaml"], "asset": []}}, {"organization": "zylon-ai", "repo_name": "private-gpt", "base_commit": "026b9f895cfb727da523a20c59773146801236ba", "iss_html_url": "https://github.com/zylon-ai/private-gpt/issues/13", "iss_label": "", "title": "gpt_tokenize: unknown token '?'", "body": "gpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\n[1] 32658 killed python3 privateGPT.py", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [], "ass_file_loc": [".env"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3\n\uff1f\uff1f\uff1f", "loc_way": "comment", "loc_scope": "1", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [".env"], "asset": []}}, {"organization": "zylon-ai", "repo_name": "private-gpt", "base_commit": "d3acd85fe34030f8cfd7daf50b30c534087bdf2b", "iss_html_url": "https://github.com/zylon-ai/private-gpt/issues/1514", "iss_label": "", "title": "LLM Chat only returns \"#\" characters", "body": "No matter the prompt, privateGPT only returns hashes as the response. This doesn't occur when not using CUBLAS. 
\r\n\r\n<img width=\"745\" alt=\"image\" src=\"https://github.com/imartinez/privateGPT/assets/6668593/b4ef137f-0122-44fe-864a-eef246066ec3\">\r\n\r\nSet up info:\r\n\r\nNVIDIA GeForce RTX 4080\r\nWindows 11\r\n\r\n<img width=\"924\" alt=\"image\" src=\"https://github.com/imartinez/privateGPT/assets/6668593/35c2233d-ae28-40c9-a018-d9590f85908d\">\r\n\r\n<img width=\"1141\" alt=\"image\" src=\"https://github.com/imartinez/privateGPT/assets/6668593/750e4006-cd97-4b4f-848d-5598f09697f3\">\r\n\r\n\r\n\r\naccelerate==0.25.0\r\naiofiles==23.2.1\r\naiohttp==3.9.1\r\naiosignal==1.3.1\r\naiostream==0.5.2\r\naltair==5.2.0\r\nannotated-types==0.6.0\r\nanyio==3.7.1\r\nattrs==23.1.0\r\nbeautifulsoup4==4.12.2\r\nblack==22.12.0\r\nboto3==1.34.2\r\nbotocore==1.34.2\r\nbuild==1.0.3\r\nCacheControl==0.13.1\r\ncertifi==2023.11.17\r\ncfgv==3.4.0\r\ncharset-normalizer==3.3.2\r\ncleo==2.1.0\r\nclick==8.1.7\r\ncolorama==0.4.6\r\ncoloredlogs==15.0.1\r\ncontourpy==1.2.0\r\ncoverage==7.3.3\r\ncrashtest==0.4.1\r\ncycler==0.12.1\r\ndataclasses-json==0.5.14\r\ndatasets==2.14.4\r\nDeprecated==1.2.14\r\ndill==0.3.7\r\ndiskcache==5.6.3\r\ndistlib==0.3.8\r\ndistro==1.8.0\r\ndnspython==2.4.2\r\ndulwich==0.21.7\r\nemail-validator==2.1.0.post1\r\nevaluate==0.4.1\r\nfastapi==0.103.2\r\nfastjsonschema==2.19.1\r\nffmpy==0.3.1\r\nfilelock==3.13.1\r\nflatbuffers==23.5.26\r\nfonttools==4.46.0\r\nfrozenlist==1.4.1\r\nfsspec==2023.12.2\r\ngradio==4.10.0\r\ngradio_client==0.7.3\r\ngreenlet==3.0.2\r\ngrpcio==1.60.0\r\ngrpcio-tools==1.60.0\r\nh11==0.14.0\r\nh2==4.1.0\r\nhpack==4.0.0\r\nhttpcore==1.0.2\r\nhttptools==0.6.1\r\nhttpx==0.25.2\r\nhuggingface-hub==0.19.4\r\nhumanfriendly==10.0\r\nhyperframe==6.0.1\r\nidentify==2.5.33\r\nidna==3.6\r\nimportlib-resources==6.1.1\r\niniconfig==2.0.0\r\ninjector==0.21.0\r\ninstaller==0.7.0\r\nitsdangerous==2.1.2\r\njaraco.classes==3.3.0\r\nJinja2==3.1.2\r\njmespath==1.0.1\r\njoblib==1.3.2\r\njsonschema==4.20.0\r\njsonschema-specifications==2023.11.2\r\nkeyring==24.3.0\r\nkiwisolver==1.4.5\r\nllama-index==0.9.3\r\nllama_cpp_python==0.2.29\r\nmarkdown-it-py==3.0.0\r\nMarkupSafe==2.1.3\r\nmarshmallow==3.20.1\r\nmatplotlib==3.8.2\r\nmdurl==0.1.2\r\nmore-itertools==10.2.0\r\nmpmath==1.3.0\r\nmsgpack==1.0.7\r\nmultidict==6.0.4\r\nmultiprocess==0.70.15\r\nmypy==1.7.1\r\nmypy-extensions==1.0.0\r\nnest-asyncio==1.5.8\r\nnetworkx==3.2.1\r\nnltk==3.8.1\r\nnodeenv==1.8.0\r\nnumpy==1.26.3\r\nonnx==1.15.0\r\nonnxruntime==1.16.3\r\nopenai==1.5.0\r\noptimum==1.16.1\r\norjson==3.9.10\r\npackaging==23.2\r\npandas==2.1.4\r\npathspec==0.12.1\r\npexpect==4.9.0\r\nPillow==10.1.0\r\npkginfo==1.9.6\r\nplatformdirs==4.1.0\r\npluggy==1.3.0\r\npoetry==1.7.1\r\npoetry-core==1.8.1\r\npoetry-plugin-export==1.6.0\r\nportalocker==2.8.2\r\npre-commit==2.21.0\r\n-e 
git+https://github.com/imartinez/privateGPT@d3acd85fe34030f8cfd7daf50b30c534087bdf2b#egg=private_gpt\r\nprotobuf==4.25.1\r\npsutil==5.9.6\r\nptyprocess==0.7.0\r\npyarrow==14.0.1\r\npydantic==2.5.2\r\npydantic-extra-types==2.2.0\r\npydantic-settings==2.1.0\r\npydantic_core==2.14.5\r\npydub==0.25.1\r\nPygments==2.17.2\r\npyparsing==3.1.1\r\npypdf==3.17.2\r\npyproject_hooks==1.0.0\r\npyreadline3==3.4.1\r\npytest==7.4.3\r\npytest-asyncio==0.21.1\r\npytest-cov==3.0.0\r\npython-dateutil==2.8.2\r\npython-dotenv==1.0.0\r\npython-multipart==0.0.6\r\npytz==2023.3.post1\r\npywin32==306\r\npywin32-ctypes==0.2.2\r\nPyYAML==6.0.1\r\nqdrant-client==1.7.0\r\nrapidfuzz==3.6.1\r\nreferencing==0.32.0\r\nregex==2023.10.3\r\nrequests==2.31.0\r\nrequests-toolbelt==1.0.0\r\nresponses==0.18.0\r\nrich==13.7.0\r\nrpds-py==0.14.1\r\nruff==0.1.8\r\ns3transfer==0.9.0\r\nsafetensors==0.4.1\r\nscikit-learn==1.3.2\r\nscipy==1.11.4\r\nsemantic-version==2.10.0\r\nsentence-transformers==2.2.2\r\nsentencepiece==0.1.99\r\nshellingham==1.5.4\r\nsix==1.16.0\r\nsniffio==1.3.0\r\nsoupsieve==2.5\r\nSQLAlchemy==2.0.23\r\nstarlette==0.27.0\r\nsympy==1.12\r\ntenacity==8.2.3\r\nthreadpoolctl==3.2.0\r\ntiktoken==0.5.2\r\ntokenizers==0.15.0\r\ntomlkit==0.12.0\r\ntoolz==0.12.0\r\ntorch==2.1.2+cu121\r\ntorchaudio==2.1.2+cu121\r\ntorchvision==0.16.2+cu121\r\ntqdm==4.66.1\r\ntransformers==4.36.1\r\ntrove-classifiers==2024.1.8\r\ntyper==0.9.0\r\ntypes-PyYAML==6.0.12.12\r\ntyping-inspect==0.9.0\r\ntyping_extensions==4.9.0\r\ntzdata==2023.3\r\nujson==5.9.0\r\nurllib3==1.26.18\r\nuvicorn==0.24.0.post1\r\nvirtualenv==20.25.0\r\nwatchdog==3.0.0\r\nwatchfiles==0.21.0\r\nwebsockets==11.0.3\r\nwrapt==1.16.0\r\nxxhash==3.4.1\r\nyarl==1.9.4", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "d3acd85fe34030f8cfd7daf50b30c534087bdf2b", "files": [{"path": "private_gpt/components/llm/llm_component.py", "Loc": {"('LLMComponent', '__init__', 21)": {"mod": [45]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["private_gpt/components/llm/llm_component.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "zylon-ai", "repo_name": "private-gpt", "base_commit": "c4b247d696c727c1da6d993ce4f6c3a557e91b42", "iss_html_url": "https://github.com/zylon-ai/private-gpt/issues/685", "iss_label": "enhancement\nprimordial", "title": "CPU utilization", "body": "CPU utilization appears to be capped at 20%\r\nIs there a way to increase CPU utilization and thereby enhance performance?\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "c4b247d696c727c1da6d993ce4f6c3a557e91b42", "files": [{"path": "privateGPT.py", "Loc": {"(None, 'main', 23)": {"mod": [36]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["privateGPT.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "zylon-ai", "repo_name": "private-gpt", "base_commit": "7ae80e662936bd946a231d1327bde476556c5d61", "iss_html_url": "https://github.com/zylon-ai/private-gpt/issues/181", "iss_label": "primordial", "title": "Segfault : not enough space in the context's memory pool", "body": "ggml_new_tensor_impl: not enough space in the context's memory pool (needed 
3779301744, available 3745676000)\r\nzsh: segmentation fault python3.11 privateGPT.py\r\n\r\nWhat's the context memory pool? Can I configure it? I actually have a lot of excess memory", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "7ae80e662936bd946a231d1327bde476556c5d61", "files": [{"path": "ingest.py", "Loc": {"(None, 'main', 37)": {"mod": [47]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["ingest.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "zylon-ai", "repo_name": "private-gpt", "base_commit": "9d47d03d183685c675070d47ad3beb67446d6580", "iss_html_url": "https://github.com/zylon-ai/private-gpt/issues/630", "iss_label": "bug\nprimordial", "title": "Use falcon model in privategpt", "body": "Hi, how can I use the Falcon model in privateGPT?\r\n\r\nhttps://huggingface.co/tiiuae/falcon-40b-instruct\r\n\r\nThanks", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "9d47d03d183685c675070d47ad3beb67446d6580", "files": [{"path": "privateGPT.py", "Loc": {"(None, 'main', 23)": {"mod": [32]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["privateGPT.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "zylon-ai", "repo_name": "private-gpt", "base_commit": "380b119581d2afcd24948f1108507b138490aec6", "iss_html_url": "https://github.com/zylon-ai/private-gpt/issues/235", "iss_label": "bug\nprimordial", "title": "Need help with some errors", "body": " File \"F:\\privateGPT\\Lib\\site-packages\\langchain\\embeddings\\llamacpp.py\", line 79, in validate_environment\r\n values[\"client\"] = Llama(\r\n ^^^^^^\r\n File \"F:\\privateGPT\\Lib\\site-packages\\llama_cpp\\llama.py\", line 155, in __init__ \r\n self.ctx = llama_cpp.llama_init_from_file(\r\n ^^^^^^^^^^^^^^^^^^^^^\r\n \r\n File \"F:\\privateGPT\\Lib\\site-packages\\llama_cpp\\llama_cpp.py\", line 182, in llama_init_from_file\r\n return _lib.llama_init_from_file(path_model, params)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nOSError: [WinError -1073741795] Windows Error 0xc000001d\r\nDuring handling of the above exception, another exception occurred:\r\n\r\n File \"F:\\privateGPT\\ingest.py\", line 62, in <module>\r\n main()\r\n File \"F:\\privateGPT\\ingest.py\", line 53, in main\r\n llama = LlamaCppEmbeddings(model_path=llama_embeddings_model, n_ctx=model_n_ctx) \r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ \r\n File \"pydantic\\main.py\", line 339, in pydantic.main.BaseModel.__init__\r\n File \"pydantic\\main.py\", line 1102, in pydantic.main.validate_model\r\n File \"F:\\privateGPT\\Lib\\site-packages\\langchain\\embeddings\\llamacpp.py\", line 99, in validate_environment\r\n raise NameError(f\"Could not load Llama model from path: {model_path}\")\r\nNameError: Could not load Llama model from path: F:/privateGPT/models/ggml-model-q4_0.bin \r\nException ignored in: <function Llama.__del__ at 0x000002307F085E40>\r\nTraceback (most recent call last):\r\n File \"F:\\privateGPT\\Lib\\site-packages\\llama_cpp\\llama.py\", line 978, in __del__\r\n if self.ctx is not None:\r\n ^^^^\r\nAttributeError: 'Llama' object has no attribute 'ctx'\r\n", "code": null, "pr_html_url": null, 
"commit_html_url": null, "file_loc": {"base_commit": "380b119581d2afcd24948f1108507b138490aec6", "files": [{"path": "README.md", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []}}, {"organization": "zylon-ai", "repo_name": "private-gpt", "base_commit": "b1057afdf8f65fdb10e4160adbd8462be0c08271", "iss_html_url": "https://github.com/zylon-ai/private-gpt/issues/796", "iss_label": "primordial", "title": "Unable to instantiate model (type=value_error)", "body": "Installed on Ubuntu 20.04 with Python3.11-venv\r\n\r\nError on line 38:\r\nhttps://github.com/imartinez/privateGPT/blob/b1057afdf8f65fdb10e4160adbd8462be0c08271/privateGPT.py#L38C7-L38C7\r\n\r\nError:\r\n\r\nUsing embedded DuckDB with persistence: data will be stored in: db\r\nFound model file at models/ggml-gpt4all-j-v1.3-groovy.bin\r\nInvalid model file\r\nTraceback (most recent call last):\r\n File \"/home/kk/Documents/privateGPT/privateGPT.py\", line 83, in <module>\r\n main()\r\n File \"/home/kk/Documents/privateGPT/privateGPT.py\", line 38, in main\r\n llm = GPT4All(model=model_path, n_ctx=model_n_ctx, backend='gptj', n_batch=model_n_batch, callbacks=callbacks, verbose=False)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"pydantic/main.py\", line 341, in pydantic.main.BaseModel.__init__\r\npydantic.error_wrappers.ValidationError: 1 validation error for GPT4All\r\n__root__\r\n Unable to instantiate model (type=value_error)", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [], "ass_file_loc": ["ggml-gpt4all-j-v1.3-groovy.bin"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["ggml-gpt4all-j-v1.3-groovy.bin"]}}, {"organization": "zylon-ai", "repo_name": "private-gpt", "base_commit": "dd1100202881a01b6b013b7bc1faad8b5c63fec9", "iss_html_url": "https://github.com/zylon-ai/private-gpt/issues/839", "iss_label": "bug\nprimordial", "title": "ERROR: The prompt size exceeds the context window size and cannot be processed.", "body": "Enter a query\uff0c\r\nIt show:\r\n\r\nERROR: The prompt size exceeds the context window size and cannot be processed.GPT-J ERROR: The prompt is2614tokens and the context window is2048!\r\n\r\nERROR: The prompt size exceeds the context window size and cannot be processed.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [], "ass_file_loc": [".env"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "1", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [".env"], "asset": []}}, {"organization": "zylon-ai", "repo_name": "private-gpt", "base_commit": "6bbec79583b7f28d9bea4b39c099ebef149db843", "iss_html_url": "https://github.com/zylon-ai/private-gpt/issues/1598", "iss_label": "", "title": "Performance bottleneck using GPU ", "body": "Hi Guys, \r\n\r\nI am running the default Mistral model, and when running queries I am seeing 100% CPU usage (so single core), and up to 29% GPU usage which drops to have 15% mid answer. \r\n\r\nI am using a MacBook Pro with M3 Max. 
I have set: model_kwargs={\"n_gpu_layers\": -1, \"offload_kqv\": True},\r\n\r\nI am curious, as LM Studio runs the same model with low CPU usage and 80%+ GPU usage", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "6bbec79583b7f28d9bea4b39c099ebef149db843", "files": [{"path": "private_gpt/ui/ui.py", "Loc": {"('PrivateGptUi', 'yield_deltas', 81)": {"mod": []}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["private_gpt/ui/ui.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "87ebab0615b1bf9b14b478b055e7059d630b4833", "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/6007", "iss_label": "question", "title": "How to limit YouTube Music search to tracks only?", "body": "### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE\n\n- [X] I understand that I will be **blocked** if I remove or skip any mandatory\\* field\n\n### Checklist\n\n- [X] I'm asking a question and **not** reporting a bug or requesting a feature\n- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)\n- [X] I've verified that I'm running yt-dlp version **2023.01.06** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)\n- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n\n### Please make sure the question is worded well enough to be understood\n\nIs there a way to return only tracks in a YTmusic search? Sometimes music videos have sound effects, while I'm only interested in the original song.\r\n\r\nI'm using this command:\r\n`yt-dlp -f bestaudio --playlist-items 1 --default-search \"https://music.youtube.com/search?q=\" -a list-of-tracks.txt`
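One possible approach via yt-dlp's Python API is sketched below. The `#songs` fragment on the music-search URL is an assumption here (this record's metadata points at `YoutubeMusicSearchURLIE` in yt_dlp/extractor/youtube.py, which appears to scope a search to one result section); verify against your yt-dlp version before relying on it:

```python
# Hedged sketch, not a confirmed recipe: restrict a YouTube Music search to the
# "Songs" section via a URL fragment (assumed behaviour; verify with your version).
from yt_dlp import YoutubeDL

opts = {"format": "bestaudio", "playlist_items": "1"}
with YoutubeDL(opts) as ydl:
    ydl.download(["https://music.youtube.com/search?q=some+track+name#songs"])
```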
\n\n### Provide verbose output that clearly demonstrates the problem\n\n- [ ] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)\n- [ ] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below\n\n### Complete Verbose Output\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "87ebab0615b1bf9b14b478b055e7059d630b4833", "files": [{"path": "yt_dlp/extractor/youtube.py", "Loc": {"('YoutubeMusicSearchURLIE', None, 6647)": {"mod": [6676]}}, "status": "modified"}, {"path": "yt_dlp/extractor/youtube.py", "Loc": {"('YoutubeMusicSearchURLIE', None, 6647)": {"mod": [6659]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["yt_dlp/extractor/youtube.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "91302ed349f34dc26cc1d661bb45a4b71f4417f7", "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/7436", "iss_label": "question", "title": "Is yt-dlp capable of downloading/displaying automatic captions?", "body": "### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE\r\n\r\n- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\\* field\r\n\r\n### Checklist\r\n\r\n- [X] I'm asking a question and **not** reporting a bug or requesting a feature\r\n- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)\r\n- [X] I've verified that I'm running yt-dlp version **2023.06.22** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)\r\n- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates\r\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\r\n\r\n### Please make sure the question is worded well enough to be understood\r\n\r\n**Similar Issue:**\r\n- #5733 \r\n\r\n--- \r\nI may have missed one or two that could answer my question. Those discussions and answers are not clear to me, so here I am.\r\n\r\n**Details**\r\nThere are many videos that have \"auto-generated subtitles | automatic captions\" and no non-generated subtitles. I ran `yt-dlp --list-subs URL` and discovered that it said `URL has no subtitles`. \r\n\r\n**QUESTION:**\r\n1. Is it possible for yt-dlp to display the automatic captions while I am streaming the video to MPV? \r\n2. Does yt-dlp prefer \"non auto-generated captions\"? \r\n\r\nI'm not sure whether this is intentional, because in one discussion in the issues someone mentioned that yt-dlp prefers non-auto-generated subtitles. 
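With yt-dlp's Python API the two kinds of tracks are exposed under separate keys, which makes the distinction visible; a small sketch using the video ID from this report:

```python
# Sketch: subtitles and automatic captions live in different info-dict keys, so
# "URL has no subtitles" can coexist with a long list of automatic captions.
from yt_dlp import YoutubeDL

with YoutubeDL({"skip_download": True}) as ydl:
    info = ydl.extract_info("https://youtu.be/i6kccBc-FBQ", download=False)
    print("subtitles:", sorted(info.get("subtitles", {})))
    print("automatic captions:", sorted(info.get("automatic_captions", {}))[:5])
```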
\r\n\r\n**Command for using MPV with yt-dlp**\r\nthe command was `mpv \"https://youtu.be/i6kccBc-FBQ\" --ytdl-raw-options=write-auto-subs=,write-subs=,sub-lang=en`\r\n\r\nEDIT: added the double quote to the URL in the command line\r\n\r\n### Provide verbose output that clearly demonstrates the problem\r\n\r\n- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)\r\n- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead\r\n- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below\r\n\r\n### Complete Verbose Output\r\n\r\n```shell\r\n[debug] Command-line config: ['-vU', '--list-subs', 'https://youtu.be/i6kccBc-FBQ']\r\n[debug] Portable config \"C:\\Program Scoop\\apps\\yt-dlp\\current\\yt-dlp.conf\": []\r\n[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8\r\n[debug] yt-dlp version stable@2023.06.22 [812cdfa06] (win_exe)\r\n[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1k 25 Mar 2021)\r\n[debug] exe versions: none\r\n[debug] Optional libraries: Cryptodome-3.18.0, brotli-1.0.9, certifi-2023.05.07, mutagen-1.46.0, sqlite3-2.6.0, websockets-11.0.3\r\n[debug] Proxy map: {}\r\n[debug] Loaded 1851 extractors\r\n[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest\r\nAvailable version: stable@2023.06.22, Current version: stable@2023.06.22\r\nCurrent Build Hash: 37e7ffe204309357cfd1388b0e2c782a30e293ddd0f2761a9a8f6afa185b3566\r\nyt-dlp is up to date (stable@2023.06.22)\r\n[youtube] Extracting URL: https://youtu.be/i6kccBc-FBQ\r\n[youtube] i6kccBc-FBQ: Downloading webpage\r\n[youtube] i6kccBc-FBQ: Downloading ios player API JSON\r\n[youtube] i6kccBc-FBQ: Downloading android player API JSON\r\n[debug] Loading youtube-nsig.b7910ca8 from cache\r\n[debug] [youtube] Decrypted nsig ftRL4j1AuTut8ZV => WMPfJf_eWd71gQ\r\n[youtube] i6kccBc-FBQ: Downloading m3u8 information\r\n[debug] Sort order given by extractor: quality, res, fps, hdr:12, source, vcodec:vp9.2, channels, acodec, lang, proto\r\n[debug] Formats sorted by: hasvid, ie_pref, quality, res, fps, hdr:12(7), source, vcodec:vp9.2(10), channels, acodec, lang, proto, size, br, asr, vext, aext, hasaud, id\r\n[info] Available automatic captions for i6kccBc-FBQ:\r\nLanguage Name Formats\r\naf Afrikaans vtt, ttml, srv3, srv2, srv1, json3\r\nak Akan vtt, ttml, srv3, srv2, srv1, json3\r\nsq Albanian vtt, ttml, srv3, srv2, srv1, json3\r\nam Amharic vtt, ttml, srv3, srv2, srv1, json3\r\nar Arabic vtt, ttml, srv3, srv2, srv1, json3\r\nhy Armenian vtt, ttml, srv3, srv2, srv1, json3\r\nas Assamese vtt, ttml, srv3, srv2, srv1, json3\r\nay Aymara vtt, ttml, srv3, srv2, srv1, json3\r\naz Azerbaijani vtt, ttml, srv3, srv2, srv1, json3\r\nbn Bangla vtt, ttml, srv3, srv2, srv1, json3\r\neu Basque vtt, ttml, srv3, srv2, srv1, json3\r\nbe Belarusian vtt, ttml, srv3, srv2, srv1, json3\r\nbho Bhojpuri vtt, ttml, srv3, srv2, srv1, json3\r\nbs Bosnian vtt, ttml, srv3, srv2, srv1, json3\r\nbg Bulgarian vtt, ttml, srv3, srv2, srv1, json3\r\nmy Burmese vtt, ttml, srv3, srv2, srv1, json3\r\nca Catalan vtt, ttml, srv3, srv2, srv1, json3\r\nceb Cebuano vtt, ttml, srv3, srv2, srv1, json3\r\nzh-Hans Chinese (Simplified) vtt, ttml, srv3, srv2, srv1, json3\r\nzh-Hant Chinese (Traditional) vtt, ttml, srv3, srv2, srv1, json3\r\nco Corsican vtt, ttml, srv3, srv2, srv1, json3\r\nhr Croatian vtt, ttml, srv3, srv2, srv1, json3\r\ncs Czech vtt, ttml, srv3, srv2, srv1, json3\r\nda Danish 
vtt, ttml, srv3, srv2, srv1, json3\r\ndv Divehi vtt, ttml, srv3, srv2, srv1, json3\r\nnl Dutch vtt, ttml, srv3, srv2, srv1, json3\r\nen-orig English (Original) vtt, ttml, srv3, srv2, srv1, json3\r\nen English vtt, ttml, srv3, srv2, srv1, json3\r\neo Esperanto vtt, ttml, srv3, srv2, srv1, json3\r\net Estonian vtt, ttml, srv3, srv2, srv1, json3\r\nee Ewe vtt, ttml, srv3, srv2, srv1, json3\r\nfil Filipino vtt, ttml, srv3, srv2, srv1, json3\r\nfi Finnish vtt, ttml, srv3, srv2, srv1, json3\r\nfr French vtt, ttml, srv3, srv2, srv1, json3\r\ngl Galician vtt, ttml, srv3, srv2, srv1, json3\r\nlg Ganda vtt, ttml, srv3, srv2, srv1, json3\r\nka Georgian vtt, ttml, srv3, srv2, srv1, json3\r\nde German vtt, ttml, srv3, srv2, srv1, json3\r\nel Greek vtt, ttml, srv3, srv2, srv1, json3\r\ngn Guarani vtt, ttml, srv3, srv2, srv1, json3\r\ngu Gujarati vtt, ttml, srv3, srv2, srv1, json3\r\nht Haitian Creole vtt, ttml, srv3, srv2, srv1, json3\r\nha Hausa vtt, ttml, srv3, srv2, srv1, json3\r\nhaw Hawaiian vtt, ttml, srv3, srv2, srv1, json3\r\niw Hebrew vtt, ttml, srv3, srv2, srv1, json3\r\nhi Hindi vtt, ttml, srv3, srv2, srv1, json3\r\nhmn Hmong vtt, ttml, srv3, srv2, srv1, json3\r\nhu Hungarian vtt, ttml, srv3, srv2, srv1, json3\r\nis Icelandic vtt, ttml, srv3, srv2, srv1, json3\r\nig Igbo vtt, ttml, srv3, srv2, srv1, json3\r\nid Indonesian vtt, ttml, srv3, srv2, srv1, json3\r\nga Irish vtt, ttml, srv3, srv2, srv1, json3\r\nit Italian vtt, ttml, srv3, srv2, srv1, json3\r\nja Japanese vtt, ttml, srv3, srv2, srv1, json3\r\njv Javanese vtt, ttml, srv3, srv2, srv1, json3\r\nkn Kannada vtt, ttml, srv3, srv2, srv1, json3\r\nkk Kazakh vtt, ttml, srv3, srv2, srv1, json3\r\nkm Khmer vtt, ttml, srv3, srv2, srv1, json3\r\nrw Kinyarwanda vtt, ttml, srv3, srv2, srv1, json3\r\nko Korean vtt, ttml, srv3, srv2, srv1, json3\r\nkri Krio vtt, ttml, srv3, srv2, srv1, json3\r\nku Kurdish vtt, ttml, srv3, srv2, srv1, json3\r\nky Kyrgyz vtt, ttml, srv3, srv2, srv1, json3\r\nlo Lao vtt, ttml, srv3, srv2, srv1, json3\r\nla Latin vtt, ttml, srv3, srv2, srv1, json3\r\nlv Latvian vtt, ttml, srv3, srv2, srv1, json3\r\nln Lingala vtt, ttml, srv3, srv2, srv1, json3\r\nlt Lithuanian vtt, ttml, srv3, srv2, srv1, json3\r\nlb Luxembourgish vtt, ttml, srv3, srv2, srv1, json3\r\nmk Macedonian vtt, ttml, srv3, srv2, srv1, json3\r\nmg Malagasy vtt, ttml, srv3, srv2, srv1, json3\r\nms Malay vtt, ttml, srv3, srv2, srv1, json3\r\nml Malayalam vtt, ttml, srv3, srv2, srv1, json3\r\nmt Maltese vtt, ttml, srv3, srv2, srv1, json3\r\nmi M\u0101ori vtt, ttml, srv3, srv2, srv1, json3\r\nmr Marathi vtt, ttml, srv3, srv2, srv1, json3\r\nmn Mongolian vtt, ttml, srv3, srv2, srv1, json3\r\nne Nepali vtt, ttml, srv3, srv2, srv1, json3\r\nnso Northern Sotho vtt, ttml, srv3, srv2, srv1, json3\r\nno Norwegian vtt, ttml, srv3, srv2, srv1, json3\r\nny Nyanja vtt, ttml, srv3, srv2, srv1, json3\r\nor Odia vtt, ttml, srv3, srv2, srv1, json3\r\nom Oromo vtt, ttml, srv3, srv2, srv1, json3\r\nps Pashto vtt, ttml, srv3, srv2, srv1, json3\r\nfa Persian vtt, ttml, srv3, srv2, srv1, json3\r\npl Polish vtt, ttml, srv3, srv2, srv1, json3\r\npt Portuguese vtt, ttml, srv3, srv2, srv1, json3\r\npa Punjabi vtt, ttml, srv3, srv2, srv1, json3\r\nqu Quechua vtt, ttml, srv3, srv2, srv1, json3\r\nro Romanian vtt, ttml, srv3, srv2, srv1, json3\r\nru Russian vtt, ttml, srv3, srv2, srv1, json3\r\nsm Samoan vtt, ttml, srv3, srv2, srv1, json3\r\nsa Sanskrit vtt, ttml, srv3, srv2, srv1, json3\r\ngd Scottish Gaelic vtt, ttml, srv3, srv2, srv1, json3\r\nsr Serbian vtt, ttml, srv3, srv2, srv1, 
json3\r\nsn Shona vtt, ttml, srv3, srv2, srv1, json3\r\nsd Sindhi vtt, ttml, srv3, srv2, srv1, json3\r\nsi Sinhala vtt, ttml, srv3, srv2, srv1, json3\r\nsk Slovak vtt, ttml, srv3, srv2, srv1, json3\r\nsl Slovenian vtt, ttml, srv3, srv2, srv1, json3\r\nso Somali vtt, ttml, srv3, srv2, srv1, json3\r\nst Southern Sotho vtt, ttml, srv3, srv2, srv1, json3\r\nes Spanish vtt, ttml, srv3, srv2, srv1, json3\r\nsu Sundanese vtt, ttml, srv3, srv2, srv1, json3\r\nsw Swahili vtt, ttml, srv3, srv2, srv1, json3\r\nsv Swedish vtt, ttml, srv3, srv2, srv1, json3\r\ntg Tajik vtt, ttml, srv3, srv2, srv1, json3\r\nta Tamil vtt, ttml, srv3, srv2, srv1, json3\r\ntt Tatar vtt, ttml, srv3, srv2, srv1, json3\r\nte Telugu vtt, ttml, srv3, srv2, srv1, json3\r\nth Thai vtt, ttml, srv3, srv2, srv1, json3\r\nti Tigrinya vtt, ttml, srv3, srv2, srv1, json3\r\nts Tsonga vtt, ttml, srv3, srv2, srv1, json3\r\ntr Turkish vtt, ttml, srv3, srv2, srv1, json3\r\ntk Turkmen vtt, ttml, srv3, srv2, srv1, json3\r\nuk Ukrainian vtt, ttml, srv3, srv2, srv1, json3\r\nur Urdu vtt, ttml, srv3, srv2, srv1, json3\r\nug Uyghur vtt, ttml, srv3, srv2, srv1, json3\r\nuz Uzbek vtt, ttml, srv3, srv2, srv1, json3\r\nvi Vietnamese vtt, ttml, srv3, srv2, srv1, json3\r\ncy Welsh vtt, ttml, srv3, srv2, srv1, json3\r\nfy Western Frisian vtt, ttml, srv3, srv2, srv1, json3\r\nxh Xhosa vtt, ttml, srv3, srv2, srv1, json3\r\nyi Yiddish vtt, ttml, srv3, srv2, srv1, json3\r\nyo Yoruba vtt, ttml, srv3, srv2, srv1, json3\r\nzu Zulu vtt, ttml, srv3, srv2, srv1, json3\r\ni6kccBc-FBQ has no subtitles\r\n```\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "91302ed349f34dc26cc1d661bb45a4b71f4417f7", "files": [{"path": "yt_dlp/options.py", "Loc": {"(None, 'create_parser', 216)": {"mod": [853, 857, 861]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0\n\u8fd9\u4e2a\u53ef\u4e0d\u7b97\uff0c\u56e0\u4e3auser\u77e5\u9053\u547d\u4ee4\u53ea\u662f\u5f15\u53f7\u95ee\u9898", "info_type": "Code"}, "loctype": {"code": ["yt_dlp/options.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "6075a029dba70a89675ae1250e7cdfd91f0eba41", "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/10356", "iss_label": "question", "title": "Unable to install curl_cffi", "body": "### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE\n\n- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\\* field\n\n### Checklist\n\n- [X] I'm asking a question and **not** reporting a bug or requesting a feature\n- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)\n- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))\n- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n\n### Please make sure the question is worded well enough to be understood\n\nI am trying to install `curl_cffi` in order to get around Vimeo's new TLS fingerprinting anti-bot protection. 
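A quick way to sanity-check this kind of setup is sketched below, under the assumption that the shell may be resolving a different `yt-dlp` than the pipx venv that received the `curl_cffi` extra; the checks are illustrative and not part of the original report:

```python
# Diagnostic sketch: which yt-dlp does the shell resolve, and is curl_cffi
# importable from the Python environment this script runs in? A Homebrew
# yt-dlp earlier on PATH can shadow the pipx-installed one that actually
# has the curl_cffi extra.
import importlib.util
import shutil

print("yt-dlp resolved to:", shutil.which("yt-dlp"))
print("curl_cffi importable here:", importlib.util.find_spec("curl_cffi") is not None)
```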
I have run the command `pipx install 'yt-dlp[default,curl_cffi]' --force'`, which gives the output:\r\n\r\n```\r\nInstalling to existing venv 'yt-dlp'\r\n\u26a0\ufe0f Note: yt-dlp was already on your PATH at /opt/homebrew/bin/yt-dlp\r\n installed package yt-dlp 2024.7.2, installed using Python 3.12.4\r\n These apps are now globally available\r\n - yt-dlp\r\n These manual pages are now globally available\r\n - man1/yt-dlp.1\r\n\u26a0\ufe0f Note: '/Users/username-hidden/.local/bin' is not on your PATH environment variable. These apps will not be globally accessible until your PATH is updated. Run `pipx ensurepath` to automatically add it,\r\n or manually modify your PATH in your shell's config file (e.g. ~/.bashrc).\r\ndone! \u2728 \ud83c\udf1f \u2728\r\n```\r\n\r\nFrom this output, I understand that `curl_cffi` would have been installed. However, running `yt-dlp --list-impersonate-targets -vU` does not show it.\r\n\r\nI intend to use `--impersonate chrome` but I am stuck at `curl_cffi` installation. Any help would be **greatly** appreciated. Thank you.\n\n### Provide verbose output that clearly demonstrates the problem\n\n- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)\n- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead\n- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below\n\n### Complete Verbose Output\n\n```shell\n[debug] Command-line config: ['--list-impersonate-targets', '-vU']\r\n[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8\r\n[debug] yt-dlp version stable@2024.07.02 from yt-dlp/yt-dlp [93d33cb29] (pip)\r\n[debug] Python 3.12.4 (CPython arm64 64bit) - macOS-14.5-arm64-arm-64bit (OpenSSL 3.3.1 4 Jun 2024)\r\n[debug] exe versions: ffmpeg 7.0.1 (setts), ffprobe 7.0.1\r\n[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.06.02, mutagen-1.47.0, requests-2.32.3, sqlite3-3.46.0, urllib3-2.2.2, websockets-12.0\r\n[debug] Proxy map: {}\r\n[debug] Request Handlers: urllib, requests, websockets\r\n[debug] Loaded 1831 extractors\r\n[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest\r\nLatest version: stable@2024.07.02 from yt-dlp/yt-dlp\r\nyt-dlp is up to date (stable@2024.07.02 from yt-dlp/yt-dlp)\r\n[info] Available impersonate targets\r\nClient OS Source\r\n---------------------------------------\r\nChrome - curl_cffi (not available)\r\nEdge - curl_cffi (not available)\r\nSafari - curl_cffi (not available)\n```\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [], "ass_file_loc": [".zshrc", ".bash_profile"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "1", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [".bash_profile", ".zshrc"], "asset": []}}, {"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "4a601c9eff9fb42e24a4c8da3fa03628e035b35b", "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/8479", "iss_label": "question\nNSFW", "title": "OUTPUT TEMPLATE --output %(title)s.%(ext)s", "body": "### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE\r\n\r\n- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\\* field\r\n\r\n### Checklist\r\n\r\n- [X] I'm asking a question and **not** reporting a bug or requesting a feature\r\n- [X] I've looked through the 
[README](https://github.com/yt-dlp/yt-dlp#readme)\r\n- [X] I've verified that I'm running yt-dlp version **2023.10.13** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)\r\n- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates\r\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\r\n\r\n### Please make sure the question is worded well enough to be understood\r\n\r\nI'm using the latest yt-dlp which states does not support website https://sfmcompile.club/. Understood.\r\nIssue: The pages appear to just be playlist of others' posts. A series of pages may take the format below:\r\n\r\n**_### LINKS ARE NSFW_**\r\nhttps://sfmcompile.club/category/overwatch/dva/page/2/\r\nhttps://sfmcompile.club/category/overwatch/dva/page/3/\r\nhttps://sfmcompile.club/category/overwatch/dva/page/4/\r\n**_### LINKS ARE NSFW_**\r\n\r\nI'm copying/pasting to a text file, the link's base, adding the page number, then the trailing slash. After having a series of these weblinks, I run yt-dlp against this text file. Each weblink contains about 8 posts per page. yt-dlp downloads the 8 posts for that page.\r\nDVA (1)\r\nDVA (2)\r\nDVA (3)\r\nDVA (4)\r\nDVA (5)\r\nDVA (6)\r\nDVA (7)\r\nDVA (8)\r\n\r\nyt-dlp then goes to the next weblink in the text file and \"reports\" the file has already been downloaded:\r\nDVA (1)\r\nDVA (2)\r\netc.\r\n\r\nand again,\r\nDVA (1)\r\nDVA (2)\r\netc.\r\n\r\nand again,\r\nDVA (1)\r\nDVA (2)\r\netc.\r\n\r\nIt repeats with whatever number of weblinks in the text file until exhausted. I might be trying to download 8 weblinks multiplied by 8 posts which should be 64, but is instead only the original 8 from the first page.\r\n\r\nI understand I can add something like %(autonumber)s to the output but each of these posts in the playlists do have an actual title to them.\r\nDVA eating lunch\r\nDVA at the park\r\nDVA at work\r\n(lol)\r\n\r\nI'd prefer to use the original title of the post rather than repeating title with a follow-on count.\r\nDVA (1) 00001\r\nDVA (2) 00002\r\nDVA (3) 00003\r\nDVA (4) 00004\r\nDVA (5) 00005\r\nDVA (6) 00006\r\nDVA (7) 00007\r\nDVA (8) 00008\r\n\r\nDVA (1) 00009\r\nDVA (2) 00010\r\netc.\r\n\r\nI've experimented with using most of the OUTPUT TEMPLATE options on the yt-dlp page but can't for the life of me seem to figure out which output string is going to give me the output I desire. Most of them give me **NA**.\r\n\r\nid (string): Video identifier\r\ntitle (string): Video title\r\nfulltitle (string): Video title ignoring live timestamp and generic title\r\next (string): Video filename extension\r\nalt_title (string): A secondary title of the video\r\ndescription (string): The description of the video\r\ndisplay_id (string): An alternative identifier for the video\r\n\r\nEven tried %(original_url)s w/ no luck, thinking I could at least get the https://www.blahblahblah.com, and then afterward use a mass filename editor to edit out the unwanted https:// and .com. 
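For reference, yt-dlp's output template supports comma-separated fallback fields and an autonumber, which can be combined when per-item titles come back as NA; a minimal sketch, with illustrative field choices rather than a recommendation for this exact site:

```python
# Sketch: prefer the item title, fall back to its id, and append an
# autonumber so items from different pages cannot collide on the same
# filename and get skipped as "already downloaded".
from yt_dlp import YoutubeDL

ydl_opts = {
    "outtmpl": "%(title,id)s %(autonumber)05d.%(ext)s",
}
with YoutubeDL(ydl_opts) as ydl:
    ydl.download(["https://sfmcompile.club/tag/lazyprocrastinator/page/1/"])
```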
Nope, get an NA.\r\n\r\n**If there is a way to \"poll\" a weblink to see \"keywords\" that would be great!**\r\n\r\nIn advance, any help is appreciated.\r\n\r\nMy yt-dlp.conf\r\n\r\n```\r\n--no-download-archive\r\n--no-clean-info-json\r\n--windows-filenames\r\n--trim-filenames 140\r\n--ffmpeg-location \"..\\..\\..\\..\\ffmpeg\\bin\\ffmpeg.exe\"\r\n--audio-format \"mp3\"\r\n--format \"bestvideo[ext=mp4]+bestaudio[ext=m4a]/mp4\"\r\n--output \"D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\%(title)s.%(ext)s\"\r\n```\r\n\r\n```\r\nG:\\00OSz\\12win10b zEnt-LTSC 1809 x64\\05Apps\\Multimedia\\Video\\Installed\\yt-dlp Singles\\Support\\Folder Prep\\aX Drive Source>\"..\\..\\..\\yt-dlp.exe\" --config-location \"..\\..\\..\\yt-dlp.conf\" --batch-file \".\\aBatch URLs.txt\" --verbose\r\n[debug] Command-line config: ['--config-location', '..\\\\..\\\\..\\\\yt-dlp.conf', '--batch-file', '.\\\\aBatch URLs.txt', '--verbose']\r\n[debug] | Config \"..\\..\\..\\yt-dlp.conf\": ['--no-download-archive', '--no-clean-info-json', '--windows-filenames', '--trim-filenames', '140', '--ffmpeg-location', '..\\\\..\\\\..\\\\..\\\\ffmpeg\\\\bin\\\\ffmpeg.exe', '--audio-format', 'mp3', '--format', 'bestvideo[ext=mp4]+bestaudio[ext=m4a]/mp4', '--output', 'D:\\\\11Downloadz\\\\bTorrents Complete\\\\Podcasts\\\\tmp in\\\\%(title)s.%(ext)s']\r\n[debug] Batch file urls: ['https://sfmcompile.club/tag/lazyprocrastinator/page/1/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/2/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/3/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/4/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/5/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/6/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/7/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/8/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/9/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/10/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/11/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/12/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/13/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/14/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/15/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/16/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/17/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/18/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/19/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/20/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/21/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/22/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/23/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/24/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/25/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/26/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/27/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/28/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/29/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/30/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/31/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/32/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/33/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/34/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/35/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/36/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/37/', 
'https://sfmcompile.club/tag/lazyprocrastinator/page/38/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/39/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/40/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/41/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/42/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/43/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/44/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/45/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/46/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/47/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/48/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/49/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/50/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/51/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/52/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/53/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/54/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/55/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/56/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/57/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/58/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/59/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/60/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/61/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/62/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/63/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/64/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/65/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/66/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/67/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/68/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/69/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/70/']\r\n[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8\r\n[debug] yt-dlp version nightly@2023.09.24.003044 [de015e930] (win_exe)\r\n[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.19044-SP0 (OpenSSL 1.1.1k 25 Mar 2021)\r\n[debug] exe versions: ffmpeg N-110072-g073ec3b9da-20230325 (setts), ffprobe N-110072-g073ec3b9da-20230325\r\n[debug] Optional libraries: Cryptodome-3.19.0, brotli-1.1.0, certifi-2023.07.22, mutagen-1.47.0, sqlite3-3.35.5, websockets-11.0.3\r\n[debug] Proxy map: {}\r\n[debug] Loaded 1886 extractors\r\n[generic] Extracting URL: https://sfmcompile.club/tag/lazyprocrastinator/page/1/\r\n[generic] 1: Downloading webpage\r\n[redirect] Following redirect to https://sfmcompile.club/tag/lazyprocrastinator/\r\n[generic] Extracting URL: https://sfmcompile.club/tag/lazyprocrastinator/\r\n[generic] lazyprocrastinator: Downloading webpage\r\nWARNING: [generic] Falling back on generic information extractor\r\n[generic] lazyprocrastinator: Extracting information\r\n[debug] Looking for embeds\r\n[debug] Identified 8 html5 embeds\r\n[download] Downloading playlist: LazyProcrastinator Archives\r\n[generic] Playlist LazyProcrastinator Archives: Downloading 8 items of 8\r\n[download] Downloading item 1 of 8\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[info] lazyprocrastinator-1: Downloading 1 format(s): 0\r\n[debug] Invoking http downloader on 
\"https://sfmcompile.club/wp-content/uploads/2023/10/Infected-2B-blowjob-pov-Sound-update.mp4\"\r\n[debug] File locking is not supported. Proceeding without locking\r\n[download] Destination: D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\LazyProcrastinator Archives (1).mp4\r\n[download] 100% of 2.66MiB in 00:00:00 at 5.05MiB/s\r\n[download] Downloading item 2 of 8\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[info] lazyprocrastinator-2: Downloading 1 format(s): 0\r\n[debug] Invoking http downloader on \"https://sfmcompile.club/wp-content/uploads/2023/10/Red-Riding-Hood-Lunafreya-spooning-fuck-Sound-update.mp4\"\r\n[download] Destination: D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\LazyProcrastinator Archives (2).mp4\r\n[download] 100% of 3.53MiB in 00:00:00 at 6.47MiB/s\r\n[download] Downloading item 3 of 8\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[info] lazyprocrastinator-3: Downloading 1 format(s): 0\r\n[debug] Invoking http downloader on \"https://sfmcompile.club/wp-content/uploads/2023/10/Bunny-Serah-Farron-sideway-proneboned.mp4\"\r\n[download] Destination: D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\LazyProcrastinator Archives (3).mp4\r\n[download] 100% of 3.09MiB in 00:00:00 at 6.05MiB/s\r\n[download] Downloading item 4 of 8\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[info] lazyprocrastinator-4: Downloading 1 format(s): 0\r\n[debug] Invoking http downloader on \"https://sfmcompile.club/wp-content/uploads/2023/10/Bunny-Serah-Farron-sideway-fucked.mp4\"\r\n[download] Destination: D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\LazyProcrastinator Archives (4).mp4\r\n[download] 100% of 2.97MiB in 00:00:00 at 5.50MiB/s\r\n[download] Downloading item 5 of 8\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[info] lazyprocrastinator-5: Downloading 1 format(s): 0\r\n[debug] Invoking http downloader on \"https://sfmcompile.club/wp-content/uploads/2023/10/Sadako-caught-on-tape.mp4\"\r\n[download] Destination: D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\LazyProcrastinator Archives (5).mp4\r\n[download] 100% of 1.77MiB in 00:00:00 at 4.34MiB/s\r\n[download] Downloading item 6 of 8\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[info] lazyprocrastinator-6: Downloading 1 format(s): 0\r\n[debug] Invoking http downloader on \"https://sfmcompile.club/wp-content/uploads/2023/10/Red-Riding-Hood-Lunafreya-mating-press-Sound-update.mp4\"\r\n[download] Destination: D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\LazyProcrastinator Archives (6).mp4\r\n[download] 100% of 2.65MiB in 00:00:00 at 4.40MiB/s\r\n[download] Downloading item 7 of 8\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[info] lazyprocrastinator-7: Downloading 1 format(s): 0\r\n[debug] Invoking http downloader on 
\"https://sfmcompile.club/wp-content/uploads/2023/10/Infected-2B-hand-holding-cowgirl.mp4\"\r\n[download] Destination: D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\LazyProcrastinator Archives (7).mp4\r\n[download] 100% of 1.67MiB in 00:00:00 at 4.73MiB/s\r\n[download] Downloading item 8 of 8\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[info] lazyprocrastinator-8: Downloading 1 format(s): 0\r\n[debug] Invoking http downloader on \"https://sfmcompile.club/wp-content/uploads/2023/10/Infected-2B-dangerous-handjob-pov.mp4\"\r\n[download] Destination: D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\LazyProcrastinator Archives (8).mp4\r\n[download] 100% of 4.85MiB in 00:00:00 at 4.86MiB/s\r\n[download] Finished downloading playlist: LazyProcrastinator Archives\r\n[generic] Extracting URL: https://sfmcompile.club/tag/lazyprocrastinator/page/2/\r\n[generic] 2: Downloading webpage\r\nWARNING: [generic] Falling back on generic information extractor\r\n[generic] 2: Extracting information\r\n[debug] Looking for embeds\r\n[debug] Identified 8 html5 embeds\r\n[download] Downloading playlist: LazyProcrastinator Archives\r\n[generic] Playlist LazyProcrastinator Archives: Downloading 8 items of 8\r\n[download] Downloading item 1 of 8\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[info] 2-1: Downloading 1 format(s): 0\r\n[debug] Invoking http downloader on \"https://sfmcompile.club/wp-content/uploads/2023/10/Infected-2B-dangerous-thighjob-pov.mp4\"\r\n[download] D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\LazyProcrastinator Archives (1).mp4 has already been downloaded\r\n[download] 100% of 2.66MiB\r\n[download] Downloading item 2 of 8\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[info] 2-2: Downloading 1 format(s): 0\r\n[debug] Invoking http downloader on \"https://sfmcompile.club/wp-content/uploads/2023/10/Infected-2B-face-sitting-and-feetjob.mp4\"\r\n[download] D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\LazyProcrastinator Archives (2).mp4 has already been downloaded\r\n[download] 100% of 3.53MiB\r\n[download] Downloading item 3 of 8\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[info] 2-3: Downloading 1 format(s): 0\r\n[debug] Invoking http downloader on \"https://sfmcompile.club/wp-content/uploads/2023/10/Infected-2B-heel-torture-pov.mp4\"\r\n[download] D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\LazyProcrastinator Archives (3).mp4 has already been downloaded\r\n[download] 100% of 3.09MiB\r\n[download] Downloading item 4 of 8\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[info] 2-4: Downloading 1 format(s): 0\r\n[debug] Invoking http downloader on \"https://sfmcompile.club/wp-content/uploads/2023/10/Ashley-Graham-cowgirl-riding-pov.mp4\"\r\n[download] D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\LazyProcrastinator Archives (4).mp4 has already been downloaded\r\n[download] 100% of 2.97MiB\r\n[download] 
Downloading item 5 of 8\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[info] 2-5: Downloading 1 format(s): 0\r\n[debug] Invoking http downloader on \"https://sfmcompile.club/wp-content/uploads/2023/10/Red-Riding-Lunafreya-lifted-anal-Sound-update.mp4\"\r\n[download] D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\LazyProcrastinator Archives (5).mp4 has already been downloaded\r\n[download] 100% of 1.77MiB\r\n[download] Downloading item 6 of 8\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[info] 2-6: Downloading 1 format(s): 0\r\n[debug] Invoking http downloader on \"https://sfmcompile.club/wp-content/uploads/2023/10/Infected-2B-sucking-nip-and-handjob-pov.mp4\"\r\n[download] D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\LazyProcrastinator Archives (6).mp4 has already been downloaded\r\n[download] 100% of 2.65MiB\r\n[download] Downloading item 7 of 8\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[info] 2-7: Downloading 1 format(s): 0\r\n[debug] Invoking http downloader on \"https://sfmcompile.club/wp-content/uploads/2023/10/Infected-2B-reverse-cowgirl-ride-pov.mp4\"\r\n[download] D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\LazyProcrastinator Archives (7).mp4 has already been downloaded\r\n[download] 100% of 1.67MiB\r\n[download] Downloading item 8 of 8\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[info] 2-8: Downloading 1 format(s): 0\r\n[debug] Invoking http downloader on \"https://sfmcompile.club/wp-content/uploads/2023/10/2B-thighs-crushing-and-handjob.mp4\"\r\n[download] D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\LazyProcrastinator Archives (8).mp4 has already been downloaded\r\n```\r\n\r\n### Provide verbose output that clearly demonstrates the problem\r\n\r\n- [x] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)\r\n- [x] If using API, add `'verbose': True` to `YoutubeDL` params instead\r\n- [x] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below\r\n\r\n### Complete Verbose Output\r\n\r\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [], "ass_file_loc": ["yt-dlp.conf"], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "3", "loc_way": "comment", "loc_scope": "1", "info_type": "Code"}, "loctype": {"code": ["yt-dlp.conf"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "a903d8285c96b2c7ac7915f228a17e84cbfe3ba4", "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/1238", "iss_label": "question", "title": "[Question] How to use Sponsorblock as part of Python script", "body": "<!--\r\n\r\n######################################################################\r\n WARNING!\r\n IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE\r\n######################################################################\r\n\r\n-->\r\n\r\n\r\n## Checklist\r\n\r\n<!--\r\nCarefully read and work through this check list in order 
to prevent the most common mistakes and misuse of yt-dlp:\r\n- Look through the README (https://github.com/yt-dlp/yt-dlp)\r\n- Read \"opening an issue\" section in CONTRIBUTING.md: https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue\r\n- Search the bugtracker for similar questions: https://github.com/yt-dlp/yt-dlp/issues\r\n- Finally, put x into all relevant boxes like this [x] (Dont forget to delete the empty space)\r\n-->\r\n\r\n- [x] I'm asking a question\r\n- [x] I've looked through the README\r\n- [x] I've read the opening an issue section in CONTRIBUTING.md\r\n- [x] I've searched the bugtracker for similar questions including closed ones\r\n- [x] I have given an appropriate title to the issue\r\n\r\n\r\n## Question\r\n\r\n<!--\r\nAsk your question in an arbitrary form. Please make sure it's worded well enough to be understood, see https://github.com/yt-dlp/yt-dlp.\r\n-->\r\n\r\nWhat are the relevant `ydl_opts` to use Sponsorblock with yt-dlp as part of a Python script?\r\n\r\n[README.md](https://github.com/yt-dlp/yt-dlp/blob/master/README.md#sponsorblock-options) documents usage on the command line and [yt_dlp/YoutubeDL.py](https://github.com/yt-dlp/yt-dlp/blob/master/yt_dlp/YoutubeDL.py) doesn't mention Sponsorblock at all.\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "a903d8285c96b2c7ac7915f228a17e84cbfe3ba4", "files": [{"path": "yt_dlp/__init__.py", "Loc": {"(None, '_real_main', 62)": {"mod": [427, 501]}}, "status": "modified"}, {"path": "README.md", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["yt_dlp/__init__.py"], "doc": ["README.md"], "test": [], "config": [], "asset": []}}, {"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "8531d2b03bac9cc746f2ee8098aaf8f115505f5b", "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/10462", "iss_label": "question", "title": "Cookie not loading when downloading instagram videos", "body": "### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE\r\n\r\n- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\\* field\r\n\r\n### Checklist\r\n\r\n- [X] I'm asking a question and **not** reporting a bug or requesting a feature\r\n- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)\r\n- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))\r\n- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates\r\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\r\n\r\n### Please make sure the question is worded well enough to be understood\r\n\r\nI tried to download instagram videos with this code but the cookie does not load.\r\n\r\nBut with ```yt-dlp --cookies instagram_cookie.txt \"https://www.instagram.com/p/C9SEsmYCx_M/?hl=ja\"``` it does.\r\nIs there something wrong with my code? 
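One likely culprit, sketched below: the `YoutubeDL` parameter corresponding to `--cookies` is `cookiefile`, so a `cookies` key in `ydl_opts` is simply not picked up and the cookie jar never loads.

```python
# Suspected fix (sketch): pass the Netscape cookie file via `cookiefile`,
# the API equivalent of the --cookies command-line option.
from yt_dlp import YoutubeDL

ydl_opts = {
    "format": "best[ext=mp4]",
    "outtmpl": "%(title)s.%(ext)s",
    "cookiefile": "instagram_cookie.txt",  # note: not "cookies"
    "verbose": True,
}
with YoutubeDL(ydl_opts) as ydl:
    ydl.download(["https://www.instagram.com/p/C9SEsmYCx_M/?hl=ja"])
```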
If so, please let me know the solution.\r\nSorry if I have missed something.\r\n\r\n```\r\nfrom yt_dlp import YoutubeDL\r\nimport subprocess\r\n\r\ndef download_video(url):\r\n if url in \".m3u8\":\r\n subprocess.run(f'ffmpeg -i {url} -c copy \"%name%.mp4\"', shell=True)\r\n print(\"m3u8\u30d5\u30a1\u30a4\u30eb\u3092\u30c0\u30a6\u30f3\u30ed\u30fc\u30c9\u307e\u3057\u305f\")\r\n else:\r\n ydl_opts = {\r\n 'format': 'best[ext=mp4]',\r\n 'outtmpl': '%(title)s.%(ext)s',\r\n 'verbose': True,\r\n }\r\n\r\n if \"instagram.com\" in url:\r\n ydl_opts[\"cookies\"] = \"instagram_cookie.txt\"\r\n print(ydl_opts)\r\n \r\n with YoutubeDL(ydl_opts) as ydl:\r\n result = ydl.extract_info(url, download=True)\r\n file_path = ydl.prepare_filename(result)\r\n print(f\"{file_path}\u3092\u30c0\u30a6\u30f3\u30ed\u30fc\u30c9\u307e\u3057\u305f\")\r\n \r\n return file_path\r\n\r\nif __name__ == \"__main__\":\r\n download_video(input(\"URL:\"))\r\n```\r\n\r\n### Provide verbose output that clearly demonstrates the problem\r\n\r\n- [ ] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)\r\n- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead\r\n- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below\r\n\r\n### Complete Verbose Output\r\n\r\n```shell\r\nURL:https://www.instagram.com/p/C9SEsmYCx_M/?hl=ja\r\n{'format': 'best[ext=mp4]', 'outtmpl': '%(title)s.%(ext)s', 'verbose': True, 'cookies': 'instagram_cookie.txt'}\r\n[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8\r\n[debug] yt-dlp version nightly@2024.07.13.232701 from yt-dlp/yt-dlp-nightly-builds [150ecc45d] (pip) API\r\n[debug] params: {'format': 'best[ext=mp4]', 'outtmpl': '%(title)s.%(ext)s', 'verbose': True, 'cookies': 'instagram_cookie.txt', 'compat_opts': set(), 'http_headers': {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.74 Safari/537.36', 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8', 'Accept-Language': 'en-us,en;q=0.5', 'Sec-Fetch-Mode': 'navigate'}}\r\n[debug] Python 3.10.14 (CPython x86_64 64bit) - Linux-6.5.0-1023-gcp-x86_64-with-glibc2.39 (OpenSSL 3.0.13 30 Jan 2024, glibc 2.39)\r\n[debug] exe versions: none\r\n[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.07.04, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.2, websockets-12.0\r\n[debug] Proxy map: {}\r\n[debug] Request Handlers: urllib, requests, websockets\r\n[debug] Loaded 1834 extractors\r\n[Instagram] Extracting URL: https://www.instagram.com/p/C9SEsmYCx_M/?hl=ja\r\n[Instagram] C9SEsmYCx_M: Setting up session\r\n[Instagram] C9SEsmYCx_M: Downloading JSON metadata\r\nWARNING: [Instagram] C9SEsmYCx_M: General metadata extraction failed (some metadata might be missing).\r\n[Instagram] C9SEsmYCx_M: Downloading webpage\r\nWARNING: [Instagram] unable to extract shared data; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U\r\nWARNING: [Instagram] Main webpage is locked behind the login page. Retrying with embed webpage (some metadata might be missing).\r\n[Instagram] C9SEsmYCx_M: Downloading embed webpage\r\nWARNING: [Instagram] unable to extract additional data; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. 
Confirm you are on the latest version using yt-dlp -U\r\nERROR: [Instagram] C9SEsmYCx_M: Requested content is not available, rate-limit reached or login required. Use --cookies, --cookies-from-browser, --username and --password, --netrc-cmd, or --netrc (instagram) to provide account credentials\r\n File \"/home/runner/moive-download-exe/.pythonlibs/lib/python3.10/site-packages/yt_dlp/extractor/common.py\", line 740, in extract\r\n ie_result = self._real_extract(url)\r\n File \"/home/runner/moive-download-exe/.pythonlibs/lib/python3.10/site-packages/yt_dlp/extractor/instagram.py\", line 460, in _real_extract\r\n self.raise_login_required('Requested content is not available, rate-limit reached or login required')\r\n File \"/home/runner/moive-download-exe/.pythonlibs/lib/python3.10/site-packages/yt_dlp/extractor/common.py\", line 1245, in raise_login_required\r\n raise ExtractorError(msg, expected=True)\r\n\r\nTraceback (most recent call last):\r\n File \"/home/runner/moive-download-exe/.pythonlibs/lib/python3.10/site-packages/yt_dlp/YoutubeDL.py\", line 1622, in wrapper\r\n return func(self, *args, **kwargs)\r\n File \"/home/runner/moive-download-exe/.pythonlibs/lib/python3.10/site-packages/yt_dlp/YoutubeDL.py\", line 1757, in __extract_info\r\n ie_result = ie.extract(url)\r\n File \"/home/runner/moive-download-exe/.pythonlibs/lib/python3.10/site-packages/yt_dlp/extractor/common.py\", line 740, in extract\r\n ie_result = self._real_extract(url)\r\n File \"/home/runner/moive-download-exe/.pythonlibs/lib/python3.10/site-packages/yt_dlp/extractor/instagram.py\", line 460, in _real_extract\r\n self.raise_login_required('Requested content is not available, rate-limit reached or login required')\r\n File \"/home/runner/moive-download-exe/.pythonlibs/lib/python3.10/site-packages/yt_dlp/extractor/common.py\", line 1245, in raise_login_required\r\n raise ExtractorError(msg, expected=True)\r\nyt_dlp.utils.ExtractorError: [Instagram] C9SEsmYCx_M: Requested content is not available, rate-limit reached or login required. Use --cookies, --cookies-from-browser, --username and --password, --netrc-cmd, or --netrc (instagram) to provide account credentials\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/runner/moive-download-exe/main.py\", line 27, in <module>\r\n download_video(input(\"URL:\"))\r\n File \"/home/runner/moive-download-exe/main.py\", line 20, in download_video\r\n result = ydl.extract_info(url, download=True)\r\n File \"/home/runner/moive-download-exe/.pythonlibs/lib/python3.10/site-packages/yt_dlp/YoutubeDL.py\", line 1611, in extract_info\r\n return self.__extract_info(url, self.get_info_extractor(key), download, extra_info, process)\r\n File \"/home/runner/moive-download-exe/.pythonlibs/lib/python3.10/site-packages/yt_dlp/YoutubeDL.py\", line 1640, in wrapper\r\n self.report_error(str(e), e.format_traceback())\r\n File \"/home/runner/moive-download-exe/.pythonlibs/lib/python3.10/site-packages/yt_dlp/YoutubeDL.py\", line 1088, in report_error\r\n self.trouble(f'{self._format_err(\"ERROR:\", self.Styles.ERROR)} {message}', *args, **kwargs)\r\n File \"/home/runner/moive-download-exe/.pythonlibs/lib/python3.10/site-packages/yt_dlp/YoutubeDL.py\", line 1027, in trouble\r\n raise DownloadError(message, exc_info)\r\nyt_dlp.utils.DownloadError: ERROR: [Instagram] C9SEsmYCx_M: Requested content is not available, rate-limit reached or login required. 
Use --cookies, --cookies-from-browser, --username and --password, --netrc-cmd, or --netrc (instagram) to provide account credentials\r\n```\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "8531d2b03bac9cc746f2ee8098aaf8f115505f5b", "files": [{"path": "yt_dlp/YoutubeDL.py", "Loc": {"('YoutubeDL', None, 189)": {"mod": [335]}}, "status": "modified"}, {"path": "yt_dlp/__init__.py", "Loc": {"(None, 'parse_options', 737)": {"mod": [901]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["yt_dlp/__init__.py", "yt_dlp/YoutubeDL.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "e59c82a74cda5139eb3928c75b0bd45484dbe7f0", "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/11152", "iss_label": "question", "title": "How to use --merge-output-format?", "body": "### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE\n\n- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\\* field\n\n### Checklist\n\n- [X] I'm asking a question and **not** reporting a bug or requesting a feature\n- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)\n- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))\n- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n\n### Please make sure the question is worded well enough to be understood\n\nHello,\r\n\r\nThis is the first time I'm trying to use the \"--merge-output-format\" option to download and merge a video stream with an audio stream\u2026 and it failed:\r\n\r\n```\r\nyoutube-dlp.exe -qF\r\nyoutube-dlp.exe -f '160+140' --merge-output-format mp4 https://www.youtube.com/watch?v=123ABC\r\nRequested format is not available. 
Use --list-formats for a list of available formats\r\n```\r\n\r\nWhat is the right way to use that switch?\r\n\r\nThank you.\n\n### Provide verbose output that clearly demonstrates the problem\n\n- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)\n- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead\n- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below\n\n### Complete Verbose Output\n\n```shell\n[debug] Command-line config: ['-vU']\r\n[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8\r\n[debug] yt-dlp version stable@2024.08.06 from yt-dlp/yt-dlp [4d9231208] (win_exe)\r\n[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1k 25 Mar 2021)\r\n[debug] exe versions: ffmpeg 2024-06-13-git-0060a368b1-essentials_build-www.gyan.dev (setts), ffprobe 2024-06-13-git-0060a368b1-essentials_build-www.gyan.dev\r\n[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.07.04, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.35.5, urllib3-2.2.2, websockets-12.0\r\n[debug] Proxy map: {}\r\n[debug] Request Handlers: urllib, requests, websockets, curl_cffi\r\n[debug] Loaded 1830 extractors\r\n[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest\r\n[debug] Downloading _update_spec from https://github.com/yt-dlp/yt-dlp/releases/latest/download/_update_spec\r\n[debug] Downloading SHA2-256SUMS from https://github.com/yt-dlp/yt-dlp/releases/download/2024.09.27/SHA2-256SUMS\r\nCurrent version: stable@2024.08.06 from yt-dlp/yt-dlp\r\nLatest version: stable@2024.09.27 from yt-dlp/yt-dlp\r\nCurrent Build Hash: 468a6f8bf1d156ad173e000a40f696d4fbd69c5aa7360229329b9063a388e7d0\r\nUpdating to stable@2024.09.27 from yt-dlp/yt-dlp ...\r\n[debug] Downloading yt-dlp.exe from https://github.com/yt-dlp/yt-dlp/releases/download/2024.09.27/yt-dlp.exe\r\nUpdated yt-dlp to stable@2024.09.27 from yt-dlp/yt-dlp\n```\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "e59c82a74cda5139eb3928c75b0bd45484dbe7f0", "files": [{"path": "README.md", "Loc": {"(None, None, 1430)": {"mod": [1430]}}, "status": "modified"}, {"path": "yt_dlp/options.py", "Loc": {"(None, 'create_parser', 219)": {"mod": [786, 790]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "3", "loc_way": "comment", "loc_scope": "", "info_type": "Code"}, "loctype": {"code": ["yt_dlp/options.py"], "doc": ["README.md"], "test": [], "config": [], "asset": []}}, {"organization": "comfyanonymous", "repo_name": "ComfyUI", "base_commit": "f18ebbd31645437afaa9738fcf2b5ed8b48cb021", "iss_html_url": "https://github.com/comfyanonymous/ComfyUI/issues/6177", "iss_label": "Feature", "title": "Workflow that can follow different paths and skip some of them.", "body": "### Feature Idea\n\nHi.\r\nI am very interested in the ability to create a workflow that can follow different paths and skip some if they are not needed.\r\n\r\nFor example, I want to create an image and save it under a fixed name (unique). 
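In plain-Python terms, the behavior being asked for is roughly the caching pattern below; this is an illustration only, with hypothetical names, not a ComfyUI API:

```python
# Illustration: reuse a result saved under a fixed name instead of
# re-running the expensive generation branch.
from pathlib import Path

CACHED = Path("fixed_name.png")

def expensive_generation() -> bytes:
    # stand-in for the generate/upscale part of the workflow
    return b"...rendered image bytes..."

if CACHED.exists():
    data = CACHED.read_bytes()      # cheap branch: load yesterday's result
else:
    data = expensive_generation()   # expensive branch runs only when needed
    CACHED.write_bytes(data)
```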
But tomorrow (or after restart) I want to run this workflow again and work with the already created image, which I created and saved earlier, and not waste time on its creation (upscale, modification, etc.), but just check if this image is in my folder, and if it is, then just load it and work with the loaded image, and the branch that creates the image will not run at all (skip this branch).\r\nBut it's important that the script does this by itself (without MUTE or BYPASS).\r\n\r\nExample \r\n![Screenshot_1](https://github.com/user-attachments/assets/840c86a0-7944-49ca-95fd-15825a632c7f)\r\n\r\nThis will help save a lot of time on complex workflows that need to be improved or modernized. And it can also save resources in case of a break or lack of memory - it will be possible to skip large parts of the scheme if they have already been made and saved (without keeping in memory models that have already worked).\r\n\n\n### Existing Solutions\n\nI've been trying for a long time to find out if such a possibility exists, but I couldn't find it. If such a feature is already implemented, where can I find it? Thanks.\n\n### Other\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "ltdrdata", "pro": "ComfyUI-extension-tutorials", "path": ["ComfyUI-Impact-Pack/tutorial/switch.md"]}], "analysis": {"iss_type": "4", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["ComfyUI-Impact-Pack/tutorial/switch.md"], "test": [], "config": [], "asset": []}}, {"organization": "comfyanonymous", "repo_name": "ComfyUI", "base_commit": "834ab278d2761c452f8e76c83fb62d8f8ce39301", "iss_html_url": "https://github.com/comfyanonymous/ComfyUI/issues/1064", "iss_label": "", "title": "Error occurred when executing CLIPVisionEncode", "body": "Hi there, \r\nsomehow i cant get unCLIP to work \r\n\r\nThe .png has the unclip example workflow i tried out, but it gets stuck in the CLIPVisionEncode Module.\r\nWhat can i do to solve this? 
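The traceback quoted below boils down to `clip_vision` being `None` by the time `encode_image` is called, which typically means the loaded checkpoint ships no CLIP-vision weights (a regular SD checkpoint rather than an unCLIP one). A minimal reproduction of the failure mode, for illustration:

```python
# What the node effectively hits when the checkpoint lacks CLIP-vision
# weights: the loader hands back None, and the call raises AttributeError.
clip_vision = None  # stand-in for a loader output with no CLIP vision model

try:
    clip_vision.encode_image("image")
except AttributeError as err:
    print(err)  # 'NoneType' object has no attribute 'encode_image'
```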
\r\n\r\nError occurred when executing CLIPVisionEncode:\r\n\r\n'NoneType' object has no attribute 'encode_image'\r\n\r\nFile \"D:\\ComfyUI_windows_portable_nvidia_cu118_or_cpu\\ComfyUI_windows_portable\\ComfyUI\\execution.py\", line 144, in recursive_execute\r\noutput_data, output_ui = get_output_data(obj, input_data_all)\r\nFile \"D:\\ComfyUI_windows_portable_nvidia_cu118_or_cpu\\ComfyUI_windows_portable\\ComfyUI\\execution.py\", line 74, in get_output_data\r\nreturn_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)\r\nFile \"D:\\ComfyUI_windows_portable_nvidia_cu118_or_cpu\\ComfyUI_windows_portable\\ComfyUI\\execution.py\", line 67, in map_node_over_list\r\nresults.append(getattr(obj, func)(**slice_dict(input_data_all, i)))\r\nFile \"D:\\ComfyUI_windows_portable_nvidia_cu118_or_cpu\\ComfyUI_windows_portable\\ComfyUI\\nodes.py\", line 742, in encode\r\noutput = clip_vision.encode_image(image)\r\n\r\n\r\n\r\n![unclip_2pass](https://github.com/comfyanonymous/ComfyUI/assets/141161676/51b5ed7c-d5d9-4b88-a973-a54882039653)\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "834ab278d2761c452f8e76c83fb62d8f8ce39301", "files": [{"path": "README.md", "Loc": {"(None, None, 30)": {"mod": [30]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "2", "info_type": "model\n+\nDoc"}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []}}, {"organization": "comfyanonymous", "repo_name": "ComfyUI", "base_commit": "3c60ecd7a83da43d694e26a77ca6b93106891251", "iss_html_url": "https://github.com/comfyanonymous/ComfyUI/issues/5229", "iss_label": "User Support", "title": "Problem with ComfyUI workflow \"ControlNetApplySD3 'NoneType' object has no attribute 'copy'\"", "body": "### Your question\n\nI get the following error when running the workflow\r\n\r\nI leave a video of what I am working on as a reference.\r\n\r\nhttps://www.youtube.com/watch?v=MbQv8zoNEfY\r\n\r\nvideo of reference\n\n### Logs\n\n```powershell\n# ComfyUI Error Report\r\n## Error Details\r\n- **Node Type:** ControlNetApplySD3\r\n- **Exception Type:** AttributeError\r\n- **Exception Message:** 'NoneType' object has no attribute 'copy'\r\n## Stack Trace\r\n\r\n File \"D:\\ComfyUI_windows_portable\\ComfyUI\\execution.py\", line 323, in execute\r\n output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n\r\n File \"D:\\ComfyUI_windows_portable\\ComfyUI\\execution.py\", line 198, in get_output_data\r\n return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n\r\n File \"D:\\ComfyUI_windows_portable\\ComfyUI\\execution.py\", line 169, in _map_node_over_list\r\n process_inputs(input_dict, i)\r\n\r\n File \"D:\\ComfyUI_windows_portable\\ComfyUI\\execution.py\", line 158, in process_inputs\r\n results.append(getattr(obj, func)(**inputs))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n\r\n File \"D:\\ComfyUI_windows_portable\\ComfyUI\\nodes.py\", line 848, in apply_controlnet\r\n 
c_net = control_net.copy().set_cond_hint(control_hint, strength, (start_percent, end_percent), vae=vae, extra_concat=extra_concat)\r\n ^^^^^^^^^^^^^^^^\r\n\r\n```\r\n## System Information\r\n- **ComfyUI Version:** v0.2.3-3-g6632365\r\n- **Arguments:** ComfyUI\\main.py --windows-standalone-build\r\n- **OS:** nt\r\n- **Python Version:** 3.11.9 (tags/v3.11.9:de54cf5, Apr 2 2024, 10:12:12) [MSC v.1938 64 bit (AMD64)]\r\n- **Embedded Python:** true\r\n- **PyTorch Version:** 2.4.1+cu124\r\n## Devices\r\n\r\n- **Name:** cuda:0 NVIDIA GeForce RTX 3090 : cudaMallocAsync\r\n - **Type:** cuda\r\n - **VRAM Total:** 25769148416\r\n - **VRAM Free:** 19327837688\r\n - **Torch VRAM Total:** 5100273664\r\n - **Torch VRAM Free:** 57107960\r\n\r\n## Logs\r\n```\r\n2024-10-12 11:47:24,318 - root - INFO - Total VRAM 24575 MB, total RAM 65461 MB\r\n2024-10-12 11:47:24,318 - root - INFO - pytorch version: 2.4.1+cu124\r\n2024-10-12 11:47:24,318 - root - INFO - Set vram state to: NORMAL_VRAM\r\n2024-10-12 11:47:24,318 - root - INFO - Device: cuda:0 NVIDIA GeForce RTX 3090 : cudaMallocAsync\r\n2024-10-12 11:47:26,738 - root - INFO - Using pytorch cross attention\r\n2024-10-12 11:47:32,778 - root - INFO - [Prompt Server] web root: D:\\ComfyUI_windows_portable\\ComfyUI\\web\r\n2024-10-12 11:47:36,818 - root - INFO - Total VRAM 24575 MB, total RAM 65461 MB\r\n2024-10-12 11:47:36,818 - root - INFO - pytorch version: 2.4.1+cu124\r\n2024-10-12 11:47:36,818 - root - INFO - Set vram state to: NORMAL_VRAM\r\n2024-10-12 11:47:36,818 - root - INFO - Device: cuda:0 NVIDIA GeForce RTX 3090 : cudaMallocAsync\r\n2024-10-12 11:47:37,468 - root - INFO - \r\nImport times for custom nodes:\r\n2024-10-12 11:47:37,468 - root - INFO - 0.0 seconds: D:\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\websocket_image_save.py\r\n2024-10-12 11:47:37,468 - root - INFO - 0.0 seconds: D:\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\cg-use-everywhere\r\n2024-10-12 11:47:37,468 - root - INFO - 0.0 seconds: D:\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\ComfyUI_UltimateSDUpscale\r\n2024-10-12 11:47:37,468 - root - INFO - 0.0 seconds: D:\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\rgthree-comfy\r\n2024-10-12 11:47:37,468 - root - INFO - 0.0 seconds: D:\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\ComfyUI-KJNodes\r\n2024-10-12 11:47:37,468 - root - INFO - 0.1 seconds: D:\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\ComfyUI_essentials\r\n2024-10-12 11:47:37,468 - root - INFO - 0.1 seconds: D:\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\comfyui_controlnet_aux\r\n2024-10-12 11:47:37,468 - root - INFO - 0.3 seconds: D:\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\ComfyUI-eesahesNodes\r\n2024-10-12 11:47:37,468 - root - INFO - 0.4 seconds: D:\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\ComfyUI-Manager\r\n2024-10-12 11:47:37,468 - root - INFO - 0.4 seconds: D:\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\ComfyUI-Impact-Pack\r\n2024-10-12 11:47:37,468 - root - INFO - 1.1 seconds: D:\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\ComfyUI-AdvancedLivePortrait\r\n2024-10-12 11:47:37,468 - root - INFO - \r\n2024-10-12 11:47:37,478 - root - INFO - Starting server\r\n\r\n2024-10-12 11:47:37,478 - root - INFO - To see the GUI go to: http://127.0.0.1:8188\r\n2024-10-12 12:16:10,093 - root - INFO - got prompt\r\n2024-10-12 12:16:10,103 - root - ERROR - Failed to validate prompt for output 147:\r\n2024-10-12 12:16:10,103 - root - ERROR - * UpscaleModelLoader 83:\r\n2024-10-12 12:16:10,103 - root - ERROR - - Value 
not in list: model_name: '4x-ClearRealityV1.pth' not in ['ClearRealityV1\\\\4x-ClearRealityV1.pth', 'ClearRealityV1\\\\4x-ClearRealityV1_Soft.pth', 'ClearRealityV1\\\\BROKEN_NCNN\\\\4x-ClearRealityV1-fp16.bin', 'ClearRealityV1\\\\BROKEN_NCNN\\\\4x-ClearRealityV1-fp32.bin', 'ClearRealityV1\\\\BROKEN_NCNN\\\\4x-ClearRealityV1_Soft-fp16.bin', 'ClearRealityV1\\\\BROKEN_NCNN\\\\4x-ClearRealityV1_Soft-fp32.bin']\r\n2024-10-12 12:16:10,103 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,103 - root - ERROR - Failed to validate prompt for output 321:\r\n2024-10-12 12:16:10,103 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,103 - root - ERROR - Failed to validate prompt for output 311:\r\n2024-10-12 12:16:10,103 - root - ERROR - * InstantX Flux Union ControlNet Loader 334:\r\n2024-10-12 12:16:10,103 - root - ERROR - - Value not in list: control_net_name: 'flux\\InstantX_flux.safetensors' not in ['flux-canny-controlnet-v3.safetensors', 'flux-canny-controlnet.safetensors', 'flux-canny-controlnet_v2.safetensors', 'flux-depth-controlnet-v3.safetensors', 'flux-depth-controlnet.safetensors', 'flux-depth-controlnet_v2.safetensors', 'flux-hed-controlnet-v3.safetensors', 'flux-hed-controlnet.safetensors', 'flux\\\\diffusion_pytorch_model.safetensors']\r\n2024-10-12 12:16:10,103 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,103 - root - ERROR - Failed to validate prompt for output 301:\r\n2024-10-12 12:16:10,103 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,108 - root - ERROR - Failed to validate prompt for output 140:\r\n2024-10-12 12:16:10,108 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,108 - root - ERROR - Failed to validate prompt for output 320:\r\n2024-10-12 12:16:10,108 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,108 - root - ERROR - Failed to validate prompt for output 145:\r\n2024-10-12 12:16:10,108 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,108 - root - ERROR - Failed to validate prompt for output 319:\r\n2024-10-12 12:16:10,108 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,108 - root - ERROR - Failed to validate prompt for output 179:\r\n2024-10-12 12:16:10,108 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,108 - root - ERROR - Failed to validate prompt for output 84:\r\n2024-10-12 12:16:10,108 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,108 - root - ERROR - Failed to validate prompt for output 258:\r\n2024-10-12 12:16:10,108 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,108 - root - ERROR - Failed to validate prompt for output 299:\r\n2024-10-12 12:16:10,108 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,113 - root - ERROR - Failed to validate prompt for output 138:\r\n2024-10-12 12:16:10,113 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,113 - root - ERROR - Failed to validate prompt for output 146:\r\n2024-10-12 12:16:10,113 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,113 - root - ERROR - Failed to validate prompt for output 322:\r\n2024-10-12 12:16:10,113 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,113 - root - ERROR - Failed to validate prompt for output 317:\r\n2024-10-12 12:16:10,113 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,113 - root - ERROR - Failed to validate prompt for output 323:\r\n2024-10-12 12:16:10,113 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,113 - root - ERROR - Failed to validate 
prompt for output 316:\r\n2024-10-12 12:16:10,113 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,113 - root - ERROR - Failed to validate prompt for output 300:\r\n2024-10-12 12:16:10,113 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,113 - root - ERROR - Failed to validate prompt for output 87:\r\n2024-10-12 12:16:10,113 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,113 - root - ERROR - Failed to validate prompt for output 318:\r\n2024-10-12 12:16:10,113 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,113 - root - ERROR - Failed to validate prompt for output 141:\r\n2024-10-12 12:16:10,118 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,647 - root - INFO - Using pytorch attention in VAE\r\n2024-10-12 12:16:10,647 - root - INFO - Using pytorch attention in VAE\r\n2024-10-12 12:16:18,202 - root - INFO - model weight dtype torch.float8_e4m3fn, manual cast: torch.bfloat16\r\n2024-10-12 12:16:18,202 - root - INFO - model_type FLUX\r\n2024-10-12 12:27:11,335 - root - ERROR - error could not detect control model type.\r\n2024-10-12 12:27:11,335 - root - ERROR - error checkpoint does not contain controlnet or t2i adapter data D:\\ComfyUI_windows_portable\\ComfyUI\\models\\controlnet\\flux\\diffusion_pytorch_model.safetensors\r\n2024-10-12 12:27:13,290 - root - INFO - Requested to load FluxClipModel_\r\n2024-10-12 12:27:13,294 - root - INFO - Loading 1 new model\r\n2024-10-12 12:27:13,301 - root - INFO - loaded completely 0.0 4777.53759765625 True\r\n2024-10-12 12:27:51,099 - root - WARNING - clip missing: ['text_projection.weight']\r\n2024-10-12 12:27:52,730 - root - ERROR - !!! Exception during processing !!! 'NoneType' object has no attribute 'copy'\r\n2024-10-12 12:27:52,745 - root - ERROR - Traceback (most recent call last):\r\n File \"D:\\ComfyUI_windows_portable\\ComfyUI\\execution.py\", line 323, in execute\r\n output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"D:\\ComfyUI_windows_portable\\ComfyUI\\execution.py\", line 198, in get_output_data\r\n return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"D:\\ComfyUI_windows_portable\\ComfyUI\\execution.py\", line 169, in _map_node_over_list\r\n process_inputs(input_dict, i)\r\n File \"D:\\ComfyUI_windows_portable\\ComfyUI\\execution.py\", line 158, in process_inputs\r\n results.append(getattr(obj, func)(**inputs))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"D:\\ComfyUI_windows_portable\\ComfyUI\\nodes.py\", line 848, in apply_controlnet\r\n c_net = control_net.copy().set_cond_hint(control_hint, strength, (start_percent, end_percent), vae=vae, extra_concat=extra_concat)\r\n ^^^^^^^^^^^^^^^^\r\nAttributeError: 'NoneType' object has no attribute 'copy'\r\n\r\n2024-10-12 12:27:52,750 - root - INFO - Prompt executed in 702.63 seconds\r\n2024-10-12 12:44:26,904 - root - INFO - got prompt\r\n2024-10-12 12:44:26,917 - root - ERROR - Failed to validate prompt for output 147:\r\n2024-10-12 12:44:26,917 - root - ERROR - * UpscaleModelLoader 83:\r\n2024-10-12 12:44:26,917 - root - ERROR - - Value not 
in list: model_name: '4x-ClearRealityV1.pth' not in ['ClearRealityV1\\\\4x-ClearRealityV1.pth', 'ClearRealityV1\\\\4x-ClearRealityV1_Soft.pth', 'ClearRealityV1\\\\BROKEN_NCNN\\\\4x-ClearRealityV1-fp16.bin', 'ClearRealityV1\\\\BROKEN_NCNN\\\\4x-ClearRealityV1-fp32.bin', 'ClearRealityV1\\\\BROKEN_NCNN\\\\4x-ClearRealityV1_Soft-fp16.bin', 'ClearRealityV1\\\\BROKEN_NCNN\\\\4x-ClearRealityV1_Soft-fp32.bin']\r\n2024-10-12 12:44:26,917 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,917 - root - ERROR - Failed to validate prompt for output 321:\r\n2024-10-12 12:44:26,917 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,917 - root - ERROR - Failed to validate prompt for output 311:\r\n2024-10-12 12:44:26,917 - root - ERROR - * InstantX Flux Union ControlNet Loader 334:\r\n2024-10-12 12:44:26,917 - root - ERROR - - Value not in list: control_net_name: 'flux\\InstantX_flux.safetensors' not in ['flux-canny-controlnet-v3.safetensors', 'flux-canny-controlnet.safetensors', 'flux-canny-controlnet_v2.safetensors', 'flux-depth-controlnet-v3.safetensors', 'flux-depth-controlnet.safetensors', 'flux-depth-controlnet_v2.safetensors', 'flux-hed-controlnet-v3.safetensors', 'flux-hed-controlnet.safetensors', 'flux\\\\diffusion_pytorch_model.safetensors']\r\n2024-10-12 12:44:26,917 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,917 - root - ERROR - Failed to validate prompt for output 301:\r\n2024-10-12 12:44:26,917 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,917 - root - ERROR - Failed to validate prompt for output 140:\r\n2024-10-12 12:44:26,922 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,922 - root - ERROR - Failed to validate prompt for output 320:\r\n2024-10-12 12:44:26,922 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,922 - root - ERROR - Failed to validate prompt for output 145:\r\n2024-10-12 12:44:26,922 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,922 - root - ERROR - Failed to validate prompt for output 319:\r\n2024-10-12 12:44:26,922 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,922 - root - ERROR - Failed to validate prompt for output 179:\r\n2024-10-12 12:44:26,922 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,922 - root - ERROR - Failed to validate prompt for output 84:\r\n2024-10-12 12:44:26,922 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,922 - root - ERROR - Failed to validate prompt for output 258:\r\n2024-10-12 12:44:26,922 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,922 - root - ERROR - Failed to validate prompt for output 299:\r\n2024-10-12 12:44:26,927 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,927 - root - ERROR - Failed to validate prompt for output 138:\r\n2024-10-12 12:44:26,927 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,927 - root - ERROR - Failed to validate prompt for output 146:\r\n2024-10-12 12:44:26,927 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,927 - root - ERROR - Failed to validate prompt for output 322:\r\n2024-10-12 12:44:26,927 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,927 - root - ERROR - Failed to validate prompt for output 317:\r\n2024-10-12 12:44:26,927 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,927 - root - ERROR - Failed to validate prompt for output 323:\r\n2024-10-12 12:44:26,927 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,927 - root - ERROR - Failed to validate 
prompt for output 316:\r\n2024-10-12 12:44:26,927 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,927 - root - ERROR - Failed to validate prompt for output 300:\r\n2024-10-12 12:44:26,927 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,927 - root - ERROR - Failed to validate prompt for output 87:\r\n2024-10-12 12:44:26,927 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,927 - root - ERROR - Failed to validate prompt for output 318:\r\n2024-10-12 12:44:26,927 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,932 - root - ERROR - Failed to validate prompt for output 141:\r\n2024-10-12 12:44:26,932 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,992 - root - ERROR - !!! Exception during processing !!! 'NoneType' object has no attribute 'copy'\r\n2024-10-12 12:44:26,992 - root - ERROR - Traceback (most recent call last):\r\n File \"D:\\ComfyUI_windows_portable\\ComfyUI\\execution.py\", line 323, in execute\r\n output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"D:\\ComfyUI_windows_portable\\ComfyUI\\execution.py\", line 198, in get_output_data\r\n return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"D:\\ComfyUI_windows_portable\\ComfyUI\\execution.py\", line 169, in _map_node_over_list\r\n process_inputs(input_dict, i)\r\n File \"D:\\ComfyUI_windows_portable\\ComfyUI\\execution.py\", line 158, in process_inputs\r\n results.append(getattr(obj, func)(**inputs))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"D:\\ComfyUI_windows_portable\\ComfyUI\\nodes.py\", line 848, in apply_controlnet\r\n c_net = control_net.copy().set_cond_hint(control_hint, strength, (start_percent, end_percent), vae=vae, extra_concat=extra_concat)\r\n ^^^^^^^^^^^^^^^^\r\nAttributeError: 'NoneType' object has no attribute 'copy'\r\n\r\n2024-10-12 12:44:26,997 - root - INFO - Prompt executed in 0.06 seconds\r\n```\r\n## Attached Workflow\r\nPlease make sure that workflow does not contain any sensitive information such as API keys or passwords.\r\n```\r\nWorkflow too large. 
Please manually upload the workflow from local file system.\r\n```\r\n\r\n## Additional Context\r\n(Please add any additional context or steps to reproduce the error here)\n```\n\n\n### Other\n\n![Screenshot 2024-10-12 at 12-30-54 ComfyUI](https://github.com/user-attachments/assets/f0f76743-0561-4c02-8915-43143904b5b3)\r\n![Screenshot 2024-10-12 at 12-29-58 ComfyUI](https://github.com/user-attachments/assets/91e3539f-aa8b-4e68-bd58-4c4894345ce3)\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "Shakker-Labs", "pro": "FLUX.1-dev-ControlNet-Union-Pro"}], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "2", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["FLUX.1-dev-ControlNet-Union-Pro"]}}, {"organization": "comfyanonymous", "repo_name": "ComfyUI", "base_commit": "494cfe5444598f22eced91b6f4bfffc24c4af339", "iss_html_url": "https://github.com/comfyanonymous/ComfyUI/issues/96", "iss_label": "", "title": "Feature Request: model and output path setting", "body": "Symlinking is not ideal; setting a model folder is pretty standard these days, and most of us use more than one piece of software that uses models. \r\nThe same goes for choosing where to put the output images; personally, mine go to a portable drive, and I am not sure how to do that with ComfyUI.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "494cfe5444598f22eced91b6f4bfffc24c4af339", "files": [{"path": "extra_model_paths.yaml.example", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["extra_model_paths.yaml.example"], "asset": []}}, {"organization": "comfyanonymous", "repo_name": "ComfyUI", "base_commit": "f18ebbd31645437afaa9738fcf2b5ed8b48cb021", "iss_html_url": "https://github.com/comfyanonymous/ComfyUI/issues/6186", "iss_label": "User Support\nCustom Nodes Bug", "title": "error", "body": "### Your question\n\n[Errno 2] No such file or directory: 'D:\\\\ComfyUI_windows_portable_nvidia\\\\ComfyUI_windows_portable\\\\ComfyUI\\\\custom_nodes\\\\comfyui_controlnet_aux\\\\ckpts\\\\LiheYoung\\\\Depth-Anything\\\\.cache\\\\huggingface\\\\download\\\\checkpoints\\\\depth_anything_vitl14.pth.6c6a383e33e51c5fdfbf31e7ebcda943973a9e6a1cbef1564afe58d7f2e8fe63.incomplete' is:issue \n\n### Logs\n\n```powershell\n.\n```\n\n\n### Other\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [], "ass_file_loc": [".cache"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "1", "info_type": "Code"}, "loctype": {"code": [".cache"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ageitgey", "repo_name": "face_recognition", "base_commit": "e9df345a7853c52bfe98830bd2c9a07aaa7b81d9", "iss_html_url": "https://github.com/ageitgey/face_recognition/issues/159", "iss_label": "", "title": "Raspberry Pi Memory Error", "body": "* face_recognition version: 02.1\r\n* Python version: 2.7\r\n* Operating System: Raspbian\r\n\r\n### Description\r\n\r\nI installed face_recognition on my Raspberry Pi successfully for Python 3. Now I am trying to install it for Python 2 because I need it. While trying to install, I get a Memory Error. I attached the images of my error. Please help me.\r\n\r\n![20170821_190454](https://user-images.githubusercontent.com/23421095/29530146-1e98e7be-86ab-11e7-91ea-e17c02170f63.jpg)\r\n![20170821_190501](https://user-images.githubusercontent.com/23421095/29530148-2113ac22-86ab-11e7-934d-e2062359f51a.jpg)\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "e9df345a7853c52bfe98830bd2c9a07aaa7b81d9", "files": [{"path": "README.md", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []}}, {"organization": "ageitgey", "repo_name": "face_recognition", "base_commit": "0961fd1aaf97336e544421318fcd4b55feeb1a79", "iss_html_url": "https://github.com/ageitgey/face_recognition/issues/533", "iss_label": "", "title": "knn neighbors name list?", "body": "In **face_recognition_knn.py**\r\nI want the name list of the 5 nearest neighbors, so I changed n_neighbors=5.\r\n`closest_distances = knn_clf.kneighbors(faces_encodings, n_neighbors=5)`\r\nAnd it returned just 5 distance values from the trained .clf file.\r\n\r\nI found that `knn_clf.predict(faces_encodings)` returns only the 1 best-match name.\r\n\r\nHow can I get the name list of all 5 of those people? (See the sketch below.)", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"pro": "scikit-learn"}, {"pro": "scikit-learn", "path": ["sklearn/neighbors/_classification.py"]}], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "2", "info_type": "Code"}, "loctype": {"code": ["sklearn/neighbors/_classification.py"], "doc": [], "test": [], "config": [], "asset": ["scikit-learn"]}}, {"organization": "ageitgey", "repo_name": "face_recognition", "base_commit": "f21631401119e4af2e919dd662c3817b2c480c75", "iss_html_url": "https://github.com/ageitgey/face_recognition/issues/135", "iss_label": "", "title": "face_recognization with python", "body": "* face_recognition version:\r\n* Python version: 3.5\r\n* Operating System: Windows\r\n\r\n### Description\r\n\r\nI am working with some Python face recognition code in which I want to compare sampleface.jpg, which contains a single sample face, with facegrid.jpg. The facegrid.jpg itself has some 6 faces in it. I am getting True for every face, while I should be getting only one. The code is below. 
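For the KNN question in ageitgey/face_recognition#533 above, a minimal sketch of one way to get all five names (hedged: it assumes only the standard scikit-learn `kneighbors` API and that the labels used to fit the classifier are still at hand; the toy vectors stand in for real 128-d face encodings):

```python
from sklearn.neighbors import KNeighborsClassifier

# Toy stand-ins; in the issue these would be face_recognition encodings.
X_train = [[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.1, 1.0], [0.5, 0.5], [0.6, 0.4]]
y_train = ["alice", "alice", "bob", "bob", "carol", "dave"]

knn_clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

# kneighbors returns (distances, indices), not labels, and predict()
# collapses the neighbors into a single majority vote. Mapping the
# returned indices through the training labels recovers all 5 names.
distances, indices = knn_clf.kneighbors([[0.2, 0.1]], n_neighbors=5)
neighbor_names = [y_train[i] for i in indices[0]]
print(neighbor_names)  # ['alice', 'alice', 'carol', 'dave', 'bob']
```

`predict()` will still return only the majority name; mapping the neighbor indices through the training labels is the supported way to see every candidate.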
\r\n\r\n```python\r\nimport face_recognition\r\nimage = face_recognition.load_image_file(\"faceGrid.jpg\")\r\nsample_image = face_recognition.load_image_file(\"sampleface.jpg\")\r\n\r\nsample_face_encoding = face_recognition.face_encodings(sample_image)\r\n\r\nface_locations = face_recognition.face_locations(image)\r\n\r\nprint (len(face_locations), \" Faces\")\r\n\r\nfor face_location in face_locations:\r\n top, right, bottom, left = face_location\r\n face_image = image[top:bottom, left:right]\r\n face_encodings = face_recognition.face_encodings(face_image)[0]\r\n if face_recognition.compare_faces(sample_face_encoding,face_encodings)[0]:\r\n print (\"Found!\")\r\n```\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "f21631401119e4af2e919dd662c3817b2c480c75", "files": [{"path": "README.md", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []}}, {"organization": "ageitgey", "repo_name": "face_recognition", "base_commit": "cea177b75f74fe4e8ce73cf33da2e7e38e658ba4", "iss_html_url": "https://github.com/ageitgey/face_recognition/issues/726", "iss_label": "", "title": "cv2.imshow error", "body": "Hi All,\r\n\r\nWith the help of the docs I am trying to display an image with the code below and am getting an error. I tried all possible ways (file extension, path, and Python version) to resolve this error but was not able to fix it, so please advise.\r\n\r\nNote:\r\n1. The image is present at the path. \r\n 2. The print statement gives None as output.\r\n 3. I am using Python 3.6 & opencv-python-4.0.0.21\r\n\r\nimport numpy\r\nimport cv2\r\n\r\nimg = cv2.imread('C:\\\\Users\\\\Public\\\\Pictures\\\\Sample Pictures\\\\Penguins.jpeg',0) # to read an image\r\n\r\ncv2.imshow('image',img) # to display image\r\ncv2.waitKey(0)\r\ncv2.destroyAllWindows()\r\n\r\nTraceback (most recent call last):\r\n File \"C:/Users/rrmamidi/Desktop/old Desktop/compress_1/python/basic python scripts/about camera_opencv_cv2/about_img_read.py\", line 11, in <module>\r\n cv2.imshow('image',img) # to display image\r\ncv2.error: OpenCV(4.0.0) C:\\projects\\opencv-python\\opencv\\modules\\highgui\\src\\window.cpp:350: error: (-215:Assertion failed) size.width>0 && size.height>0 in function 'cv::imshow'\r\n\r\nThanks,\r\nRaja", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [{"Loc": [12], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ageitgey", "repo_name": "face_recognition", "base_commit": "b8fed6f3c0ad5ab2dab72d6251c60843cad71386", "iss_html_url": "https://github.com/ageitgey/face_recognition/issues/643", "iss_label": "", "title": "Train model with more than 1 image per person", "body": "* face_recognition version: 1.2.3\r\n* Python version: 2.7.15\r\n* Operating System: Windows 10\r\n\r\n### Description\r\n\r\nI would like to train the model with more than 1 image per person to achieve better recognition results (one common pattern is sketched below). 
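On the multiple-images-per-person question in ageitgey/face_recognition#643, one common pattern (a minimal sketch using only the documented `load_image_file`/`face_encodings`/`compare_faces` calls; all file names are placeholders): encode every training photo of the person and accept a match against any of them.

```python
import face_recognition

# Encode several photos of the same person (placeholder file names).
known_encodings = []
for path in ["person_a_1.jpg", "person_a_2.jpg", "person_a_3.jpg"]:
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)  # one entry per detected face
    if encodings:  # skip photos where no face was detected
        known_encodings.append(encodings[0])

# Compare an unknown face against *all* stored encodings of that person.
unknown_image = face_recognition.load_image_file("unknown.jpg")
unknown_encodings = face_recognition.face_encodings(unknown_image)
if unknown_encodings:
    matches = face_recognition.compare_faces(known_encodings, unknown_encodings[0])
    print(any(matches))  # recognized if any of the person's photos match
```

This also bears on the [0] question that follows: face_encodings returns a list with one encoding per face found in the image, so [0] is simply the first detected face, and [1] raises IndexError whenever fewer than two faces were detected.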
Is it possible?\r\n\r\nOne more question: what does [0] mean here?\r\n```\r\nknown_face_encoding_user = face_recognition.face_encodings('image.jpg')[0]\r\n```\r\nIf I put [1] here I receive an \"IndexError: list index out of range\" error.\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "b8fed6f3c0ad5ab2dab72d6251c60843cad71386", "files": [{"path": "examples/face_recognition_knn.py", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "", "info_type": "Code"}, "loctype": {"code": ["examples/face_recognition_knn.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ageitgey", "repo_name": "face_recognition", "base_commit": "aff06e965e895d8a6e781710e7c44c894e3011a3", "iss_html_url": "https://github.com/ageitgey/face_recognition/issues/68", "iss_label": "", "title": "cv2.error: /home/pi/opencv-3.1.0/modules/imgproc/src/imgwarp.cpp:3229: error: (-215) ssize.area() > 0 in function resize", "body": "* face_recognition version:\r\n* Python version: 3.4\r\n* Operating System: Raspbian Jessie\r\n\r\n### Description\r\n\r\nWhenever I try to run facerec_from_webcam_faster.py, I get the error below. Note that I have checked my local files; the image to be recognized is placed appropriately. \r\n\r\n### \r\n\r\n\r\n```\r\nOpenCV Error: Assertion failed (ssize.area() > 0) in resize, file /home/pi/opencv-3.1.0/modules/imgproc/src/imgwarp.cpp, line 3229\r\nTraceback (most recent call last):\r\n File \"pj_webcam.py\", line 31, in <module>\r\n small_frame = cv2.resize(frame, (1, 1), fx=0.01, fy=0.01)\r\ncv2.error: /home/pi/opencv-3.1.0/modules/imgproc/src/imgwarp.cpp:3229: error: (-215) ssize.area() > 0 in function resize\r\n```\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "aff06e965e895d8a6e781710e7c44c894e3011a3", "files": [{"path": "examples/facerec_from_webcam_faster.py", "Loc": {"(None, None, None)": {"mod": [14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["examples/facerec_from_webcam_faster.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ageitgey", "repo_name": "face_recognition", "base_commit": "6da4a2ff0f0183280cdc2bffa58ddae8bf93ac41", "iss_html_url": "https://github.com/ageitgey/face_recognition/issues/181", "iss_label": "", "title": "does load_image_file have a version which read from byte[] not just from the disk file", "body": "Does load_image_file have a version which reads from a byte array in memory, not just from a file on disk?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "6da4a2ff0f0183280cdc2bffa58ddae8bf93ac41", "files": [{"path": "face_recognition/api.py", "Loc": {"(None, 'load_image_file', 73)": {"mod": []}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "3", "loc_way": "comment", "loc_scope": "", "info_type": "Code"}, "loctype": {"code": ["face_recognition/api.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ageitgey", "repo_name": "face_recognition", "base_commit": "5f804870c14803c2664942c958f11112276a79cc", "iss_html_url": 
"https://github.com/ageitgey/face_recognition/issues/209", "iss_label": "", "title": "face_locations get wrong result but dlib is correct", "body": "* face_recognition version: 1.0.0\r\n* Python version: 3.5\r\n* Operating System: Ubuntu 16.04 LTS\r\n\r\n### Description\r\nI run the example find_faces_in_picture_cnn.py to process the image from this link.\r\nhttps://timgsa.baidu.com/timg?image&quality=80&size=b9999_10000&sec=1507896274082&di=824f7f59943a71e2e9904d22175ce92c&imgtype=0&src=http%3A%2F%2Fwww.moontalk.com.tw%2Fupload%2Fimages%2F20160606angelina-03.jpg\r\nThe program detect the hand as a face ,I check the code and run example in dlib from this link ,the result is correct.\r\nhttp://dlib.net/cnn_face_detector.py.html\r\nSo the problem maybe occur in load_image_file ?\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "5f804870c14803c2664942c958f11112276a79cc", "files": [{"path": "examples/find_faces_in_picture_cnn.py", "Loc": {"(None, None, None)": {"mod": [12]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["examples/find_faces_in_picture_cnn.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ageitgey", "repo_name": "face_recognition", "base_commit": "a96484edc270697c666c1c32ba5163cf8e71b467", "iss_html_url": "https://github.com/ageitgey/face_recognition/issues/1004", "iss_label": "", "title": "IndexError: list index out of range while attempting to automatically recognize faces ", "body": "* face_recognition version: 1.2.3\r\n* Python version: 3.7.3\r\n* Operating System: Windows 10 x64\r\n\r\n### Description\r\n\r\nHello everyone,\r\nI was attempting to modify facerec_from_video_file.py in order to make it save the unknown faces in the video and recognize them based on the first frame they appear on for example if an unknown face appears on the frame 14 it should be recognized as \"new 14\" but i keep getting the error \"IndexError: list index out of range\" when a new face appears.\r\nSo here is my code and the traceback\r\n\r\n### What I Did\r\n\r\n```\r\nimport face_recognition\r\nimport cv2\r\n\r\ninput_movie = cv2.VideoCapture(\"video.mp4\")\r\nlength = int(input_movie.get(cv2.CAP_PROP_FRAME_COUNT))\r\n\r\n# Create an output movie file (make sure resolution/frame rate matches input video!)\r\nfourcc = cv2.VideoWriter_fourcc(*'XVID')\r\noutput_movie = cv2.VideoWriter('output.avi', fourcc, 29.97, (640, 360))\r\n\r\n\r\nnewimage = face_recognition.load_image_file(\"anchor.png\")\r\nnew_face_encoding = face_recognition.face_encodings(newimage)[0]\r\n\r\nknown_faces = [\r\n new_face_encoding,\r\n \r\n]\r\n\r\n# Initialize some variables\r\nface_locations = []\r\nface_encodings = []\r\nface_names = []\r\nframe_number = 0\r\n\r\n\r\ndef recog(frame_number, known_faces, face_names):\r\n toenc = []\r\n \r\n torec = face_recognition.load_image_file(r\"New\\Unknown%s.jpg\" %str(frame_number))\r\n \r\n #if not len(torec):\r\n # print(\"cannot find image\")\r\n #torec = face_recognition.load_image_file(r\"New\\Unknown%s.jpg\" %str(frame_number))\r\n toenc.append((face_recognition.face_encodings(torec))[0])\r\n if not len(toenc):\r\n print(\"can't be encoded\")\r\n known_faces.append(toenc.pop())\r\n face_names.append(\"new %s\" %str(frame_number)) \r\n \r\n# Load some sample pictures and learn how to recognize them.\r\n\r\nwhile True:\r\n # 
Grab a single frame of video\r\n ret, frame = input_movie.read()\r\n frame_number += 1\r\n\r\n # Quit when the input video file ends\r\n if not ret:\r\n break\r\n\r\n # Convert the image from BGR color (which OpenCV uses) to RGB color (which face_recognition uses)\r\n rgb_frame = frame[:, :, ::-1]\r\n\r\n # Find all the faces and face encodings in the current frame of video\r\n face_locations = face_recognition.face_locations(rgb_frame)\r\n face_encodings = face_recognition.face_encodings(rgb_frame, face_locations)\r\n\r\n #face_names = []\r\n for face_encoding in face_encodings:\r\n # See if the face is a match for the known face(s)\r\n match = face_recognition.compare_faces(known_faces, face_encoding)\r\n \r\n \r\n # If you had more than 2 faces, you could make this logic a lot prettier\r\n # but I kept it simple for the demo\r\n name = \"Unknown\"\r\n \r\n face_names.append(name)\r\n\r\n # Label the results\r\n for (top, right, bottom, left), name in zip(face_locations, face_names):\r\n if not name:\r\n continue\r\n\r\n # Draw a box around the face\r\n unface = cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2)\r\n if name == \"Unknown\":\r\n res = frame[top:bottom, left:right]\r\n cv2.imwrite(r\"New\\Unknown%s.jpg\" %str(frame_number), res)\r\n recog(frame_number, known_faces, face_names)\r\n \r\n cv2.rectangle(frame, (left, bottom - 25), (right, bottom), (0, 0, 255), cv2.FILLED)\r\n font = cv2.FONT_HERSHEY_DUPLEX\r\n cv2.putText(frame, name, (left + 6, bottom - 6), font, 0.5, (255, 255, 255), 1)\r\n \r\n # Write the resulting image to the output video file\r\n print(\"Processing frame {} / {}\".format(frame_number, length))\r\n #output_movie.write(frame)\r\n cv2.imshow(\"frame\", frame)\r\n if( cv2.waitKey(27) & 0xFF == ord('q')):\r\n break\r\n\r\n# All done!\r\ninput_movie.release()\r\ncv2.destroyAllWindows()\r\n\r\n```\r\n### Output\r\n```\r\nIn [1]: runfile('D:/project_new/facerec_from_video_file.py', wdir='D:/project_new')\r\nProcessing frame 1 / 3291\r\nProcessing frame 2 / 3291\r\nProcessing frame 3 / 3291\r\nProcessing frame 4 / 3291\r\nProcessing frame 5 / 3291\r\nProcessing frame 6 / 3291\r\nProcessing frame 7 / 3291\r\nProcessing frame 8 / 3291\r\nProcessing frame 9 / 3291\r\nProcessing frame 10 / 3291\r\nProcessing frame 11 / 3291\r\nProcessing frame 12 / 3291\r\nTraceback (most recent call last):\r\n\r\n File \"<ipython-input-1-4b2c69ca71f8>\", line 1, in <module>\r\n runfile('D:/project_new/facerec_from_video_file.py', wdir='D:/project_new')\r\n\r\n File \"C:\\Users\\saber\\Anaconda3\\lib\\site-packages\\spyder_kernels\\customize\\spydercustomize.py\", line 827, in runfile\r\n execfile(filename, namespace)\r\n\r\n File \"C:\\Users\\saber\\Anaconda3\\lib\\site-packages\\spyder_kernels\\customize\\spydercustomize.py\", line 110, in execfile\r\n exec(compile(f.read(), filename, 'exec'), namespace)\r\n\r\n File \"D:/project_new/facerec_from_video_file.py\", line 81, in <module>\r\n recog(frame_number, known_faces, face_names)\r\n\r\n File \"D:/project_new/facerec_from_video_file.py\", line 35, in recog\r\n toenc.append((face_recognition.face_encodings(torec))[0])\r\n\r\nIndexError: list index out of range\r\n```", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "a96484edc270697c666c1c32ba5163cf8e71b467", "files": [{"path": "examples/facerec_from_video_file.py", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": 
"3", "info_type": "Code"}, "loctype": {"code": ["examples/facerec_from_video_file.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ageitgey", "repo_name": "face_recognition", "base_commit": "a8830627e89bcfb9c9dda2c8f7cac5d2e5cfb6c0", "iss_html_url": "https://github.com/ageitgey/face_recognition/issues/178", "iss_label": "", "title": "IndexError: list index out of range", "body": "IndexError: list index out of range\r\n\r\nmy code:\r\n\r\nimport face_recognition\r\nknown_image = face_recognition.load_image_file(\"D:/1.jpg\")\r\nunknown_image = face_recognition.load_image_file(\"D:/2.jpg\")\r\nbiden_encoding = face_recognition.face_encodings(known_image)[0]", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [{"Loc": [8], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ageitgey", "repo_name": "face_recognition", "base_commit": "7f183afd9c848f05830c145890c04181dcc1c46b", "iss_html_url": "https://github.com/ageitgey/face_recognition/issues/93", "iss_label": "", "title": "how to do live face recognition with RPi", "body": "* Operating System: Debian\r\n\r\n### Description\r\n\r\ni want to use the example ```facerec_from_webcam_faster.py``` \r\nbut i don't know how to change the video_output source to the PiCam\r\n\r\n### What I Did\r\n\r\n```\r\ncamera = picamera.PiCamera()\r\ncamera.resolution = (320, 240)\r\noutput = np.empty((240, 320, 3), dtype=np.uint8)\r\n\r\n\r\nwhile True:\r\n # Grab a single frame of video\r\n ret, frame = camera.capture(output, format=\"rgb\")\r\n```\r\nbut i got erros, so how can i use the picam as source?\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [{"Loc": [18], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "PaddlePaddle", "repo_name": "PaddleOCR", "base_commit": "14318e392fbe2d69511441edf5a172c4c72d6961", "iss_html_url": "https://github.com/PaddlePaddle/PaddleOCR/issues/7095", "iss_label": "status/close", "title": "\u6587\u672c\u68c0\u6d4b\u5b8c\u7684\u56fe\u7247\u600e\u4e48\u8fdb\u884c\u6587\u672c\u8bc6\u522b\u554a\uff1f", "body": "\u662f\u8981\u628a\u8fb9\u754c\u6846\u6846\u51fa\u7684\u56fe\u7247\u526a\u88c1\u4e0b\u6765\uff0c\u9001\u8fdb\u8bc6\u522b\u6a21\u578b\u5417\uff1f\u5173\u4e8e\u8fd9\u4e2a\u7684\u4ee3\u7801\u5728\u54ea\u91cc\u554a", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "14318e392fbe2d69511441edf5a172c4c72d6961", "files": [{"path": "tools/infer/predict_system.py", "Loc": {"('TextSystem', '__call__', 67)": {"mod": [69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["tools/infer/predict_system.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "PaddlePaddle", "repo_name": "PaddleOCR", "base_commit": "db60893201ad07a8c20d938a8224799f932779ad", "iss_html_url": 
"https://github.com/PaddlePaddle/PaddleOCR/issues/5641", "iss_label": "inference and deployment", "title": "PaddleServing\u600e\u6837\u4fee\u6539\u76f8\u5173\u53c2\u6570", "body": "\u6839\u636e [**\u57fa\u4e8ePaddleServing\u7684\u670d\u52a1\u90e8\u7f72**](https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.4/deploy/pdserving/README_CN.md) \u540e\uff0c\u600e\u6837\u5bf9\u6a21\u578b\u53ca\u670d\u52a1\u7684\u4e00\u4e9b\u53c2\u6570\u8fdb\u884c\u4fee\u6539\u5462\uff1f\r\n\u4f8b\u5982\u5982\u4e0b\u53c2\u6570\uff1a\r\nuse_tensorrt\r\nbatch_size\r\ndet_limit_side_len\r\nbatch_num\r\ntotal_process_num\r\n...\r\n\r\n\u7591\u60d1\uff1a\r\n1\u3001[**PaddleHub Serving\u90e8\u7f72**](https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.4/deploy/hubserving/readme.md)\uff0c\u652f\u6301\u4e00\u4e9b\u53c2\u6570\u4fee\u6539\r\n2\u3001[**\u57fa\u4e8ePython\u5f15\u64ce\u7684PP-OCR\u6a21\u578b\u5e93\u63a8\u7406**](https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.4/doc/doc_ch/inference_ppocr.md)\uff0c\u4e5f\u652f\u6301\u53c2\u6570\u4fee\u6539\r\n\r\n\u4e0a\u9762\u5217\u4e3e\u7684\u51e0\u4e2a\u53c2\u6570\u90fd\u6781\u5176\u91cd\u8981\uff0c\u4f46\u662fPaddleServing\u65b9\u6cd5\u5374\u4e0d\u652f\u6301\uff0c\u8bf7\u6307\u793a\uff01\u662f\u5426\u662f\u54ea\u91cc\u53ef\u4ee5\u8bbe\u7f6e\u800c\u6211\u6ca1\u627e\u5230", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "db60893201ad07a8c20d938a8224799f932779ad", "files": [{"path": "deploy/pdserving/web_service.py", "Loc": {"('DetOp', 'init_op', 30)": {"mod": [31]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "2", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["deploy/pdserving/web_service.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "PaddlePaddle", "repo_name": "PaddleOCR", "base_commit": "0afe6c3262babda2012074110520fe9d1a3c63c0", "iss_html_url": "https://github.com/PaddlePaddle/PaddleOCR/issues/2405", "iss_label": "status/close", "title": "\u8f7b\u91cf\u6a21\u578b\u7684\u63a8\u65ad\u4e2d\uff0c\u6bcf\u9694\u51e0\u884c\u5c31\u4f1a\u51fa\u73b0\u4e00\u884c\u8bc6\u522b\u4e3a\u4e71\u7801", "body": "![image](https://user-images.githubusercontent.com/62594309/113710560-73095c00-9716-11eb-828d-40026f37715e.png)\r\n\u5c31\u50cf\u8fd9\u91cc\u84dd\u8272\u5708\u8d77\u6765\u7684\u8fd9\u884c\r\n\r\n\u4f46\u662f\u901a\u7528\u6a21\u578b\u5c31\u6ca1\u6709\u8fd9\u4e2a\u95ee\u9898\r\n\u8fd9\u662f\u4ec0\u4e48\u539f\u56e0\u5f15\u8d77\u7684\u5462\uff1f", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "0afe6c3262babda2012074110520fe9d1a3c63c0", "files": [{"path": "deploy/hubserving/readme_en.md", "Loc": {"(None, None, 192)": {"mod": [192]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code\nDoc\nHow to modify own code"}, "loctype": {"code": [], "doc": ["deploy/hubserving/readme_en.md"], "test": [], "config": [], "asset": []}}, {"organization": "PaddlePaddle", "repo_name": "PaddleOCR", "base_commit": "64edd41c277c60c672388be6d5764be85c1de43a", "iss_html_url": "https://github.com/PaddlePaddle/PaddleOCR/issues/5427", "iss_label": "status/close\nstale", "title": "rknn\u4e0d\u652f\u6301DepthwiseSeparable\u6a21\u5757\u4e2d\u7684ConvBNLayer\u5c42\u53c2\u6570stride(p1, p2) 
p1\u4e0ep2\u4e0d\u4e00\u81f4\u7b97\u5b50", "body": "rknn\u4e0d\u652f\u6301DepthwiseSeparable\u6a21\u5757\u4e2d\u7684ConvBNLayer\u5c42\u53c2\u6570stride(p1, p2) p1\u4e0ep2\u4e0d\u4e00\u81f4\u7b97\u5b50\uff0c\u8fd9\u6837\u6d89\u53ca\u5230\u4fee\u6539\u7f51\u7edc\u7ed3\u6784\uff0c\u6211\u770b\u4e86\u4e0bstride(p1, p2)\u4e2dp1\u4e0ep2\u4e0d\u4e00\u81f4\u7684\u60c5\u51b5\u662f\u4e3a\u4e86\u505a\u4e0b\u91c7\u6837\u7684\u64cd\u4f5c\uff0c\u8bf7\u95ee\u6211\u60f3\u4fdd\u6301p1\u4e0ep2\u76f8\u7b49\u7684\u60c5\u51b5\u4e0b\uff0c\u8be5\u5982\u4f55\u4fee\u6539DepthwiseSeparable\u6a21\u5757\u6216\u8005\u66f4\u4e0a\u5c42\u6a21\u5757\u7684\u53c2\u6570\u5462\uff1f", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "64edd41c277c60c672388be6d5764be85c1de43a", "files": [{"path": "ppocr/modeling/backbones/rec_mobilenet_v3.py", "Loc": {"('MobileNetV3', '__init__', 23)": {"mod": [48]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["ppocr/modeling/backbones/rec_mobilenet_v3.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "PaddlePaddle", "repo_name": "PaddleOCR", "base_commit": "2e352dcc06ba86159099ec6a2928c7ce556a7245", "iss_html_url": "https://github.com/PaddlePaddle/PaddleOCR/issues/7542", "iss_label": "status/close", "title": "PaddleOCR\u52a0\u8f7d\u81ea\u5df1\u7684\u8bc6\u522b\u6a21\u578b\u8fdb\u884c\u56fe\u50cf\u68c0\u6d4b+\u8bc6\u522b\u4e0e\u4ec5\u4f7f\u7528\u8bc6\u522b\u6a21\u578b\u65f6\u6548\u679c\u4e0d\u4e00\u81f4", "body": "\u5148\u7528PaddleOCR\u7684\u56fe\u50cf\u68c0\u6d4b\u529f\u80fd\uff0c\u6309\u7167\u5f97\u5230\u7684\u8bc6\u522b\u6846\u5e26\u6587\u5b57\u7684\u5c0f\u56fe\u88c1\u526a\u51fa\u6765\uff0c\u6807\u6ce8\u7528\u505a\u8bad\u7ec3\u96c6\uff0c\u5bf9\u6587\u5b57\u8bc6\u522b\u6a21\u578b\u8fdb\u884c\u4e86\u8bad\u7ec3\uff0c\u7136\u540e\u63a8\u7406\u6d4b\u8bd5\u4e86\u4e00\u4e0b\u6ca1\u6709\u95ee\u9898\uff0c\u4e8e\u662f\u4f7f\u7528PaddleOCR\u52a0\u8f7d\u65b0\u8bad\u7ec3\u7684\u6587\u5b57\u8bc6\u522b\u6a21\u578b\u8dd1\u68c0\u6d4b + \u8bc6\u522b\u7684\u6574\u4f53\u6d41\u7a0b\uff0c\u7ed3\u679c\u53d1\u73b0\u51fa\u73b0\u4e86\u8bc6\u522b\u7ed3\u679c\u4e0d\u4e00\u81f4\u7684\u60c5\u51b5\u3002\r\n\r\n- \u7cfb\u7edf\u73af\u5883/System Environment\uff1aCenOS7\r\n- \u7248\u672c\u53f7/Version\uff1aPaddle\uff1a2.3.1.post112 PaddleOCR\uff1a2.6 \u95ee\u9898\u76f8\u5173\u7ec4\u4ef6/Related components\uff1aPaddleOCR\r\n- python/Version: 3.9.12\r\n- \u4f7f\u7528\u6a21\u578bppocrv3\r\n\r\n\u95ee\u9898\u56fe\u7247\uff1a\r\n![image](https://user-images.githubusercontent.com/34825635/189260271-ee896330-02ea-4290-a6da-a8b16a644be2.png)\r\n\r\n* \u5355\u7528\u8bc6\u522b\u6a21\u578b\u8fdb\u884c\u63a8\u7406\u65f6\uff1a\uff08\u6709\u654f\u611f\u4fe1\u606f\u6b64\u5904\u6211\u906e\u6321\u4e86\uff09\r\n`\u524d\u8a00\uff0d\u5ba2\u6237\uff08\u201c\u7532\u65b9\u201d\uff09\u548cXXXXX\uff08\u201c\u4e59\u65b9\u201d\uff09\u6240\u7b7e\u8ba2\u7684\u4e1a\u52a1\u7ea6\u5b9a\u4e66\uff08\u201c\u4e1a\u52a1\u7ea6\u5b9a\u4e66\u201d\uff09\u53ca\u672c\u4e1a\u52a1\u6761\u6b3e\u5176\u540c\u6784\u6210`\r\n* \u4f7f\u7528PaddleOCR\u65f6\uff1a\r\n`\uff08\uff0c\uff09\uff087\uff0c\uff09\u662f\u65f6\uff08\uff0c\uff09\uff0d`\r\n\r\n- \u63a8\u7406\u547d\u4ee4\uff1a\r\n```\r\npython3 tools/infer/predict_rec.py --image_dir=/home/hr/projects/ppocr/PaddleOCR/data/train_data/rec/train/XXXXX.png 
--rec_model_dir=/home/hr/projects/ppocr/PaddleOCR/output/inference/rec_ppocr_v3_distillation/Teacher --rec_image_shape=\"3, 48, 640\" --rec_char_dict_path=/home/hr/projects/ppocr/PaddleOCR/ppocr/utils/ppocr_keys_v1.txt\r\n```\r\n- \u914d\u7f6e\u6587\u4ef6\u7684\u53c2\u6570\uff1a\r\n```\r\n# \u5bf9image_shape\u8fdb\u884c\u4e86\u66f4\u6539\r\nimage_shape: [3, 48, 640]\r\n```\r\n- PaddleOCR\u7684\u52a0\u8f7d\u53c2\u6570\u8bbe\u7f6e\uff1a\r\n```\r\npaddle_ocr_engine = PaddleOCR(\r\n use_angle_cls=True, \r\n lang=\"ch\", \r\n rec_model_dir=\"./output/inference/rec_ppocr_v3_distillation/Teacher\",\r\n rec_image_shape=\"3, 48, 640\",\r\n rec_char_dict_path=\"./ppocr/utils/ppocr_keys_v1.txt\") \r\n```\r\n\r\n\u5982\u679c\u80fd\u591f\u63d0\u4f9b\u4e00\u4e9b\u5e2e\u52a9\u6216\u8005\u5efa\u8bae\uff0c\u975e\u5e38\u611f\u8c22\uff01", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "2e352dcc06ba86159099ec6a2928c7ce556a7245", "files": [{"path": "paddleocr.py", "Loc": {"('PaddleOCR', '__init__', 445)": {"mod": [480]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["paddleocr.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "PaddlePaddle", "repo_name": "PaddleOCR", "base_commit": "443de01526a1c7108934990c4b646ed992f0bce8", "iss_html_url": "https://github.com/PaddlePaddle/PaddleOCR/issues/5209", "iss_label": "status/close", "title": "pdserving \u6700\u540e\u600e\u4e48\u8fd4\u56de\u6587\u672c\u4ee5\u53ca\u6587\u672c\u5750\u6807", "body": "\u76ee\u524dpdserving \u53ea\u8fd4\u56de\u4e86 \u6587\u672c\u6ca1\u6709\u8fd4\u56de\u6587\u672c\u5750\u6807\uff0c\u8bf7\u95ee\u5982\u4f55\u8fd4\u56de\u6587\u672c\u5750\u6807\u5462", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "443de01526a1c7108934990c4b646ed992f0bce8", "files": [{"path": "deploy/pdserving/ocr_reader.py", "Loc": {"('OCRReader', 'postprocess', 425)": {"mod": []}}, "status": "modified"}, {"path": "deploy/pdserving/web_service.py", "Loc": {"('DetOp', 'postprocess', 57)": {"mod": [63]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["deploy/pdserving/web_service.py", "deploy/pdserving/ocr_reader.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "PaddlePaddle", "repo_name": "PaddleOCR", "base_commit": "ab16f2e4f9a4eac2eeb5f0324ab950b2215780d0", "iss_html_url": "https://github.com/PaddlePaddle/PaddleOCR/issues/3735", "iss_label": "", "title": "\u505a\u6570\u5b57\u8bad\u7ec3\u7684\u56fe\u50cf\u3002\u5728\u628a\u68c0\u6d4b\u548c\u8bc6\u522b\u4e32\u8d77\u6765\u7684\u65f6\u5019\uff0c\u8bc6\u522b\u51fa\u6765\u7684\u4e3a\u4ec0\u4e48\u662f\u4e2d\u6587\uff1f", "body": "\u81ea\u5df1\u8bad\u7ec3\u6570\u5b57\u6a21\u578b\uff0c\u7528\u5230\u68c0\u6d4b\u548c\u8bc6\u522b\uff0c\u5728\u8f6cinference\u6a21\u578b\u524d\uff0c\u8bc6\u522b\u7684\u662f\u6570\u5b57\u3002\u4f46\u5c06\u68c0\u6d4b\u548c\u8bc6\u522b\u4e32\u8054\u7684\u65f6\u5019\uff0c\u6309\u7167\u5b98\u65b9\u6559\u7a0b\uff0c\u8f6c\u6362\u6210inference\u6a21\u578b\uff0c\u4e3a\u4ec0\u4e48\u8bc6\u522b\u51fa\u6765\u7684\u662f\u4e2d\u6587\uff1f", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": 
"ab16f2e4f9a4eac2eeb5f0324ab950b2215780d0", "files": [{"path": "configs/det/det_mv3_db.yml", "Loc": {"(None, None, 116)": {"mod": [116]}}, "status": "modified"}, {"path": "tools/infer/predict_det.py", "Loc": {"('TextDetector', '__init__', 38)": {"mod": [42]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["tools/infer/predict_det.py"], "doc": [], "test": [], "config": ["configs/det/det_mv3_db.yml"], "asset": []}}, {"organization": "PaddlePaddle", "repo_name": "PaddleOCR", "base_commit": "efc01375c942d87dc1e20856c7159096db16a9ab", "iss_html_url": "https://github.com/PaddlePaddle/PaddleOCR/issues/11715", "iss_label": "", "title": "Can ch_PP-OCRv4_rec_server_infer's support for english be put into the documentation?", "body": "I notice if I am calling\r\n\r\n```\r\nfrom paddleocr import PaddleOCR\r\nocr = Paddle.OCR(\r\ndet_model_dir=ch_PP-OCRv4_det_server_infer,\r\nrec_model_dir=ch_PP-OCRv4_rec_infer\r\nlang='en')\r\n...\r\nresult = ocr.ocr(my_image)\r\n```\r\nthis works fine. However, If i set the rec model to the server version as well (`ch_PP-OCRv4_rec_server_infer`), then I get the following error:\r\n\r\n```\r\n File \"/opt/conda/lib/python3.10/site-packages/paddleocr/paddleocr.py\", line 661, in ocr\r\n dt_boxes, rec_res, _ = self.__call__(img, cls)\r\n File \"/opt/conda/lib/python3.10/site-packages/paddleocr/tools/infer/predict_system.py\", line 105, in __call__\r\n rec_res, elapse = self.text_recognizer(img_crop_list)\r\n File \"/opt/conda/lib/python3.10/site-packages/paddleocr/tools/infer/predict_rec.py\", line 628, in __call__\r\n rec_result = self.postprocess_op(preds)\r\n File \"/opt/conda/lib/python3.10/site-packages/paddleocr/ppocr/postprocess/rec_postprocess.py\", line 121, in __call__\r\n text = self.decode(preds_idx, preds_prob, is_remove_duplicate=True)\r\n File \"/opt/conda/lib/python3.10/site-packages/paddleocr/ppocr/postprocess/rec_postprocess.py\", line 83, in decode\r\n char_list = [\r\n File \"/opt/conda/lib/python3.10/site-packages/paddleocr/ppocr/postprocess/rec_postprocess.py\", line 84, in <listcomp>\r\n self.character[text_id]\r\nIndexError: list index out of range\r\n```\r\n\r\nWhich I'm guessing is because it's trying to output Chinese, which has an 8000 character dict, whereas English only has 90 or so. Because it says english is supported by the server model (https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.7/doc/doc_ch/models_list.md), is it possible to get the ppocrv4 server model to output english successfully? 
\r\n<img width=\"1274\" alt=\"Screen Shot 2024-03-11 at 10 12 15 PM\" src=\"https://github.com/PaddlePaddle/PaddleOCR/assets/21298347/f0b204ea-c7d3-4368-a939-4c9f99b111fb\">\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "efc01375c942d87dc1e20856c7159096db16a9ab", "files": [{"path": "paddleocr.py", "Loc": {"(None, None, None)": {"mod": [76, 80]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["paddleocr.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "PaddlePaddle", "repo_name": "PaddleOCR", "base_commit": "9d44728da81e7d56ea5f437845d8d48bc278b086", "iss_html_url": "https://github.com/PaddlePaddle/PaddleOCR/issues/3248", "iss_label": "", "title": "How to connect detection and recognition", "body": "I want to use the lightweight detection model together with RCNN-based recognition, but I do not know how to connect the two stages.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "9d44728da81e7d56ea5f437845d8d48bc278b086", "files": [{"path": "doc/doc_ch/inference.md", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["doc/doc_ch/inference.md"], "test": [], "config": [], "asset": []}}, {"organization": "PaddlePaddle", "repo_name": "PaddleOCR", "base_commit": "582e868cf84fca911e195596053f503f890b561b", "iss_html_url": "https://github.com/PaddlePaddle/PaddleOCR/issues/8641", "iss_label": "status/close", "title": "Please provide a PaddleServing example for PP-Structure", "body": "Writing the Op class for PP-Structure in paddle_serving_server.web_service feels beyond a newcomer like me.\r\nHas any expert built a working example that beginners can reuse?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "582e868cf84fca911e195596053f503f890b561b", "files": [{"path": "deploy/hubserving/readme.md", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["deploy/hubserving/readme.md"], "test": [], "config": [], "asset": []}}, {"organization": "PaddlePaddle", "repo_name": "PaddleOCR", "base_commit": "35449b5c7440f7706e5a4558e5b3efeb76944432", "iss_html_url": "https://github.com/PaddlePaddle/PaddleOCR/issues/3844", "iss_label": "", "title": "HOW TO RESUME TRAINING FROM LAST CHECKPOINT?", "body": "Hi,\r\nI have been training a model on my own dataset. How can I resume training from the last saved checkpoint? And also, when I train the model, does it save the best weights automatically to some path, or do we need to provide some argument to do it? (See the sketch below.) 
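For the resume-training question just above, a hedged sketch of the pattern PaddleOCR's training docs describe (the config and output paths are placeholders): resuming is done by pointing `Global.checkpoints` at a saved checkpoint, and the best model by eval metric is written automatically as `best_accuracy` under `Global.save_model_dir`.

```python
import subprocess

# Resume from the checkpoint saved as `latest` in the training output dir;
# no extra argument is needed for best weights -- `best_accuracy.*` is saved
# automatically whenever the eval metric improves. Paths are placeholders.
subprocess.run(
    [
        "python3", "tools/train.py",
        "-c", "configs/rec/my_rec_config.yml",
        "-o", "Global.checkpoints=./output/rec/latest",
    ],
    check=True,
)
```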
\r\nPlease help me on this.\r\n\r\nThanks.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "35449b5c7440f7706e5a4558e5b3efeb76944432", "files": [{"path": "tools/program.py", "Loc": {"('ArgsParser', '__init__', 39)": {"mod": [42, 42]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": ["tools/program.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "PaddlePaddle", "repo_name": "PaddleOCR", "base_commit": "adba814904eb4f0aeeec186f158cfb6c212a6e26", "iss_html_url": "https://github.com/PaddlePaddle/PaddleOCR/issues/3942", "iss_label": "", "title": "Model zoo links return 404", "body": "ch_ppocr_mobile_slim_v2.1_det inference model\r\nch_ppocr_mobile_v2.1_det inference and training models\r\nThe links above are currently in a 404 state and cannot be downloaded.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "adba814904eb4f0aeeec186f158cfb6c212a6e26", "files": [{"path": "doc/doc_ch/models_list.md", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["doc/doc_ch/models_list.md"], "test": [], "config": [], "asset": []}}, {"organization": "PaddlePaddle", "repo_name": "PaddleOCR", "base_commit": "c167df2f60d08085167cdc9431101f4b45a8a019", "iss_html_url": "https://github.com/PaddlePaddle/PaddleOCR/issues/6838", "iss_label": "status/close", "title": "Mac M1 Pro can't install paddleOCR2.0.1~2.5.0.3, but I can install paddleOCR 1.1.1 and run it successfully.", "body": "Please provide the following information to quickly locate the problem\r\n\r\n- System Environment: MacBook Pro (14-inch, 2021), Apple M1 Pro, 16 GB,\r\n- Version: PyCharm 2022.1.2 and an Anaconda-created Python 3.8 environment.\r\n- Paddle: Monterey 12.3\r\n- PaddleOCR: 2.0.1~2.5.0.3\r\n- Related components: PaddleOCR, Numpy\r\n- Command Code:\r\n\r\n1. python3 -m pip install paddlepaddle -i https://mirror.baidu.com/pypi/simple (runs fine, Run OK!)\r\n2. pip install \"paddleocr>=2.0.1\" (fails with errors, too many ERRORs!) (If I do not pin a paddleocr 1.x version, paddleocr 1.1.1 is installed automatically and runs fine; every version from 2.0.1 onward fails to install.)\r\n\r\n- Complete Error Message (see the markdown document; it is too long to paste here):\r\n[[Error Log] Mac M1 Pro can't install paddleOCR2.0.1~2.5.0.3.md](https://github.com/PaddlePaddle/PaddleOCR/files/9075892/Error.Log.Mac.M1.Pro.can.t.install.paddleOCR2.0.1.2.5.0.3.md)\r\n:\r\n- `ERROR: Cannot install paddleocr==2.0.1, paddleocr==2.0.2, paddleocr==2.0.3, paddleocr==2.0.4, paddleocr==2.0.5, paddleocr==2.0.6, paddleocr==2.2, paddleocr==2.2.0.1, paddleocr==2.2.0.2, paddleocr==2.3, paddleocr==2.3.0.1, paddleocr==2.3.0.2, paddleocr==2.4, paddleocr==2.4.0.1, paddleocr==2.4.0.2, paddleocr==2.4.0.3, paddleocr==2.4.0.4, paddleocr==2.5, paddleocr==2.5.0.2 and paddleocr==2.5.0.3 because these package versions have conflicting dependencies.\r\n\r\nThe conflict is caused by:\r\n paddleocr 2.5.0.3 depends on opencv-contrib-python==4.4.0.46\r\n paddleocr 2.5.0.2 depends on opencv-contrib-python==4.4.0.46\r\n paddleocr 2.5 depends on opencv-contrib-python==4.4.0.46\r\n paddleocr 2.4.0.4 depends on opencv-contrib-python==4.4.0.46\r\n paddleocr 2.4.0.3 depends on opencv-contrib-python==4.4.0.46\r\n paddleocr 2.4.0.2 depends on opencv-contrib-python==4.4.0.46\r\n paddleocr 2.4.0.1 depends on opencv-contrib-python==4.4.0.46\r\n paddleocr 2.4 depends on opencv-contrib-python==4.4.0.46\r\n paddleocr 2.3.0.2 depends on opencv-contrib-python==4.4.0.46\r\n paddleocr 2.3.0.1 depends on opencv-contrib-python==4.4.0.46\r\n paddleocr 2.3 depends on opencv-contrib-python==4.4.0.46\r\n paddleocr 2.2.0.2 depends on opencv-contrib-python==4.4.0.46\r\n paddleocr 2.2.0.1 depends on opencv-contrib-python==4.2.0.32\r\n paddleocr 2.2 depends on opencv-contrib-python==4.2.0.32\r\n paddleocr 2.0.6 depends on opencv-python==4.2.0.32\r\n paddleocr 2.0.5 depends on opencv-python==4.2.0.32\r\n paddleocr 2.0.4 depends on opencv-python==4.2.0.32\r\n paddleocr 2.0.3 depends on opencv-python==4.2.0.32\r\n paddleocr 2.0.2 depends on opencv-python==4.2.0.32\r\n paddleocr 2.0.1 depends on opencv-python==4.2.0.32\r\n\r\nTo fix this you could try to:\r\n\r\n1. loosen the range of package versions you've specified\r\n2. 
remove package versions to allow pip attempt to solve the dependency conflict\r\n\r\nERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/user_guide/#fixing-conflicting-dependencies`\r\n<img width=\"1544\" alt=\"image\" src=\"https://user-images.githubusercontent.com/29346824/178091955-5d71f63b-6bd5-477e-88e4-cb29cb161124.png\">\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "c167df2f60d08085167cdc9431101f4b45a8a019", "files": [{"path": "requirements.txt", "Loc": {"(None, None, 10)": {"mod": [10]}}, "status": "modified"}, {"path": "setup.py", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["setup.py"], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}}, {"organization": "PaddlePaddle", "repo_name": "PaddleOCR", "base_commit": "e44c2af7622c97d3faecd37b062e7f1cb922fd40", "iss_html_url": "https://github.com/PaddlePaddle/PaddleOCR/issues/10298", "iss_label": "status/close", "title": "train warning", "body": "Please provide the following information to quickly locate the problem\r\n\r\n- System Environment: Ubuntu\r\n- Version: Paddle: PaddleOCR: Related components: paddle develop 0.0.0.post116\r\n\r\n- Command Code:\r\n- Complete Error Message:\r\nI keep getting lots of warnings like this one:\r\nI0705 11:55:13.443581 28582 eager_method.cc:140] Warning:: 0D Tensor cannot be used as 'Tensor.numpy()[0]' . In order to avoid this problem, 0D Tensor will be changed to 1D numpy currently, but it's not correct and will be removed in release 2.6. For Tensor contain only one element, Please modify 'Tensor.numpy()[0]' to 'float(Tensor)' as soon as possible, otherwise 'Tensor.numpy()[0]' will raise error in release 2.6.\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "e44c2af7622c97d3faecd37b062e7f1cb922fd40", "files": [{"path": "tools/program.py", "Loc": {"(None, 'train', 176)": {"mod": [349]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["tools/program.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "dc866c91b9191bce083ec908c5665b7f2f40bd17", "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/201", "iss_label": "", "title": "gpt 3", "body": "Hi,\r\ncan we use a free GPT-3 API key?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "dc866c91b9191bce083ec908c5665b7f2f40bd17", "files": [{"path": "scripts/rerun_edited_message_logs.py", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["scripts/rerun_edited_message_logs.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "5505ec41dd49eb1e86aa405335f40d7a8fa20b0a", "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/497", "iss_label": "", "title": "main.py is missing?", "body": "main.py is missing?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "5505ec41dd49eb1e86aa405335f40d7a8fa20b0a", "files": [{"path": "gpt_engineer/", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5\nasking where a file is located", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["gpt_engineer/"]}}, {"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "a55265ddb46462548a842dae914bb5fcb22181fa", "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/509", "iss_label": "", "title": "Error with Promtfile", "body": "When I try to run the example project I get this error even though there is something in the prompt file, which, as you can see from the screenshots, is in the example folder (a quick sanity check is sketched below). 
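For the prompt-file error just above, a small diagnostic sketch (the projects/example path is taken from the screenshots; everything else is an assumption, not gpt-engineer's own API): print exactly what Python sees in the prompt file, which surfaces a wrong working directory, an empty file, or encoding artifacts such as a BOM.

```python
from pathlib import Path

# gpt-engineer reads the plain-text file named `prompt` inside the project
# folder; verify we are looking at the same file it will load.
prompt = Path("projects/example/prompt")
print(prompt.resolve())  # absolute path actually being checked
if prompt.exists():
    # utf-8-sig strips a byte-order mark that can make a file look empty
    print(repr(prompt.read_text(encoding="utf-8-sig")))
else:
    print("prompt file not found -- check the working directory")
```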
Does anyone know how I can solve this problem?\r\n\r\n![Screenshot_Error](https://github.com/AntonOsika/gpt-engineer/assets/62028361/cf8c7992-eca9-4bed-b258-bc1bf279082b)\r\n\r\n![Screenshot_of_promt](https://github.com/AntonOsika/gpt-engineer/assets/62028361/a3d573b1-b9da-4201-9980-709c543dadde)\r\n\r\n![image](https://github.com/AntonOsika/gpt-engineer/assets/62028361/dd8edc0d-3248-4d1c-9813-c388f4b81fb5)\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "a55265ddb46462548a842dae914bb5fcb22181fa", "files": [{"path": "projects/example/prompt", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["projects/example/prompt"]}}, {"organization": "lllyasviel", "repo_name": "Fooocus", "base_commit": "cca0ca704a713ab153938e78de6787609c723cad", "iss_html_url": "https://github.com/lllyasviel/Fooocus/issues/1147", "iss_label": "", "title": "urllib.error.URLError: <urlopen error [WinError 10060] A connection attempt failed because the connected party..", "body": "Hello Guys.\r\nThis is the error I'm getting when I am trying to use the image prompt issue\r\n\r\nurllib.error.URLError: <urlopen error [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond>\r\nTotal time: 21.12 seconds\r\n\r\nDo you happen to know what could be the problem?\r\n\r\nthanks in advance!\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "cca0ca704a713ab153938e78de6787609c723cad", "files": [{"path": "troubleshoot.md", "Loc": {"(None, None, 43)": {"mod": [43]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "lllyasviel", "pro": "misc", "path": ["ip-adapter-plus-face_sdxl_vit-h.bin"]}], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": ["troubleshoot.md"], "test": [], "config": [], "asset": ["ip-adapter-plus-face_sdxl_vit-h.bin"]}}, {"organization": "lllyasviel", "repo_name": "Fooocus", "base_commit": "f3084894402a4c0b7ed9e7164466bcedd5f5428d", "iss_html_url": "https://github.com/lllyasviel/Fooocus/issues/1508", "iss_label": "", "title": "Problems with installation and correct operation.", "body": "Hello, I had problems installing Fooocus on a GNU/Linux system, many errors occurred during the installation and they were all different. 
I was not able to capture some of them, but in general terms the errors were as follows: \"could not find versions of python packages that satisfy dependencies (error during installation)\",\"(when clicking the \"generate\" button) \"nvidia drivers were not available found, make sure you have them installed \"link to official website\".\r\n\r\nI managed to save the output of the following errors:\r\n\r\n\r\nERROR: Could not find a version that satisfies the requirement accelerate==0.21.0 (from -r requirements_versions.txt (line 5)) (from versions: 0.0.1, 0.1.0, 0.2.0, 0.2.1, 0.3.0, 0.4.0, 0.5.0, 0.5.1, 0.6.0, 0.6.1, 0.6.2, 0.7.0, 0.7.1, 0.8.0, 0.9.0, 0.10.0, 0.11.0, 0.12.0, 0.13.0, 0.13.1, 0.13.2, 0.14.0, 0.15.0, 0.16.0, 0.17.0, 0.17.1, 0.18.0, 0.19.0, 0.20.0, 0.20.1, 0.20.2, 0.20.3)\r\nERROR: No matching distribution found for accelerate==0.21.0 (from -r requirements_versions.txt (line 5))\r\n\r\n\r\n\r\n\r\n\r\npython entry_with_update.py\r\nAlready up-to-date\r\nUpdate succeeded.\r\n[System ARGV] ['entry_with_update.py']\r\nPython 3.10.13 (main, Sep 11 2023, 13:44:35) [GCC 11.2.0]\r\nFooocus version: 2.1.853\r\nRunning on local URL: http://127.0.0.1:7865\r\n\r\nTo create a public link, set share=True in launch().\r\nTotal VRAM 12006 MB, total RAM 31850 MB\r\nxformers version: 0.0.16\r\nTraceback (most recent call last):\r\n File \"/home/dragon_flow/Fooocus/ldm_patched/modules/model_management.py\", line 222, in <module>\r\n import accelerate\r\n File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/site-packages/accelerate/__init__.py\", line 3, in <module>\r\n from .accelerator import Accelerator\r\n File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/site-packages/accelerate/accelerator.py\", line 35, in <module>\r\n from .checkpointing import load_accelerator_state, load_custom_state, save_accelerator_state, save_custom_state\r\n File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/site-packages/accelerate/checkpointing.py\", line 24, in <module>\r\n from .utils import (\r\n File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/site-packages/accelerate/utils/__init__.py\", line 131, in <module>\r\n from .bnb import has_4bit_bnb_layers, load_and_quantize_model\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in _import\r\n return original_import(name, *args, **kwargs)\r\n File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/site-packages/accelerate/utils/bnb.py\", line 42, in <module>\r\n import bitsandbytes as bnb\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in _import\r\n return original_import(name, *args, **kwargs)\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/bitsandbytes/__init__.py\", line 6, in <module>\r\n from .autograd._functions import (\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in _import\r\n return original_import(name, *args, **kwargs)\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/bitsandbytes/autograd/_functions.py\", line 5, in <module>\r\n import bitsandbytes.functional as F\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in _import\r\n return original_import(name, *args, **kwargs)\r\n File 
\"/home/dragon_flow/.local/lib/python3.10/site-packages/bitsandbytes/functional.py\", line 13, in <module>\r\n from .cextension import COMPILED_WITH_CUDA, lib\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in _import\r\n return original_import(name, *args, **kwargs)\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/bitsandbytes/cextension.py\", line 113, in <module>\r\n lib = CUDASetup.get_instance().lib\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/bitsandbytes/cextension.py\", line 109, in get_instance\r\n cls._instance.initialize()\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/bitsandbytes/cextension.py\", line 59, in initialize\r\n binary_name, cudart_path, cuda, cc, cuda_version_string = evaluate_cuda_setup()\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py\", line 125, in evaluate_cuda_setup\r\n cuda_version_string = get_cuda_version(cuda, cudart_path)\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py\", line 45, in get_cuda_version\r\n check_cuda_result(cuda, cudart.cudaRuntimeGetVersion(ctypes.byref(version)))\r\n File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/ctypes/__init__.py\", line 387, in getattr\r\n func = self.getitem(name)\r\n File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/ctypes/__init__.py\", line 392, in getitem\r\n func = self._FuncPtr((name_or_ordinal, self))\r\nAttributeError: python: undefined symbol: cudaRuntimeGetVersion\r\n\r\nERROR: LOW VRAM MODE NEEDS accelerate.\r\nSet vram state to: NORMAL_VRAM\r\nAlways offload VRAM\r\nDevice: cuda:0 NVIDIA GeForce RTX 4070 : \r\nVAE dtype: torch.float32\r\nUsing xformers cross attention\r\nException in thread Thread-2 (worker):\r\nTraceback (most recent call last):\r\n File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/site-packages/transformers/utils/import_utils.py\", line 1086, in _get_module\r\n return importlib.import_module(\".\" + module_name, self.name)\r\n File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/importlib/__init__.py\", line 126, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"<frozen importlib._bootstrap>\", line 1050, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 1027, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 1006, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 688, in _load_unlocked\r\n File \"<frozen importlib._bootstrap_external>\", line 883, in exec_module\r\n File \"<frozen importlib._bootstrap>\", line 241, in _call_with_frames_removed\r\n File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py\", line 27, in <module>\r\n from ...modeling_utils import PreTrainedModel\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in _import\r\n return original_import(name, *args, **kwargs)\r\n File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/site-packages/transformers/modeling_utils.py\", line 85, in <module>\r\n from accelerate import version as accelerate_version\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in _import\r\n return original_import(name, *args, **kwargs)\r\n File 
\"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/site-packages/accelerate/__init__.py\", line 3, in <module>\r\n from .accelerator import Accelerator\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in _import\r\n return original_import(name, *args, **kwargs)\r\n File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/site-packages/accelerate/accelerator.py\", line 35, in <module>\r\n from .checkpointing import load_accelerator_state, load_custom_state, save_accelerator_state, save_custom_state\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in _import\r\n return original_import(name, *args, **kwargs)\r\n File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/site-packages/accelerate/checkpointing.py\", line 24, in <module>\r\n from .utils import (\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in _import\r\n return original_import(name, *args, **kwargs)\r\n File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/site-packages/accelerate/utils/__init__.py\", line 131, in <module>\r\n from .bnb import has_4bit_bnb_layers, load_and_quantize_model\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in _import\r\n return original_import(name, *args, **kwargs)\r\n File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/site-packages/accelerate/utils/bnb.py\", line 42, in <module>\r\n import bitsandbytes as bnb\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in _import\r\n return original_import(name, *args, **kwargs)\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/bitsandbytes/__init__.py\", line 6, in <module>\r\n from .autograd._functions import (\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in _import\r\n return original_import(name, *args, **kwargs)\r\n\r\nFile \"/home/dragon_flow/.local/lib/python3.10/site-packages/bitsandbytes/autograd/_functions.py\", line 5, in <module>\r\n import bitsandbytes.functional as F\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in _import\r\n return original_import(name, *args, **kwargs)\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/bitsandbytes/functional.py\", line 13, in <module>\r\n from .cextension import COMPILED_WITH_CUDA, lib\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in _import\r\n return original_import(name, *args, **kwargs)\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/bitsandbytes/cextension.py\", line 113, in <module>\r\n lib = CUDASetup.get_instance().lib\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/bitsandbytes/cextension.py\", line 109, in get_instance\r\n cls._instance.initialize()\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/bitsandbytes/cextension.py\", line 59, in initialize\r\n binary_name, cudart_path, cuda, cc, cuda_version_string = evaluate_cuda_setup()\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py\", line 125, in 
evaluate_cuda_setup\r\n cuda_version_string = get_cuda_version(cuda, cudart_path)\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py\", line 45, in get_cuda_version\r\n check_cuda_result(cuda, cudart.cudaRuntimeGetVersion(ctypes.byref(version)))\r\n File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/ctypes/__init__.py\", line 387, in getattr\r\n func = self.getitem(name)\r\n File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/ctypes/__init__.py\", line 392, in getitem\r\n func = self._FuncPtr((name_or_ordinal, self))\r\nAttributeError: python: undefined symbol: cudaRuntimeGetVersion\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/threading.py\", line 1016, in _bootstrap_inner\r\n self.run()\r\n File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/threading.py\", line 953, in run\r\n self._target(*self._args, **self._kwargs)\r\n File \"/home/dragon_flow/Fooocus/modules/async_worker.py\", line 25, in worker\r\n import modules.default_pipeline as pipeline\r\n File \"/home/dragon_flow/Fooocus/modules/default_pipeline.py\", line 1, in <module>\r\n import modules.core as core\r\n File \"/home/dragon_flow/Fooocus/modules/core.py\", line 1, in <module>\r\n from modules.patch import patch_all\r\n File \"/home/dragon_flow/Fooocus/modules/patch.py\", line 29, in <module>\r\n from modules.patch_clip import patch_all_clip\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in _import\r\n return original_import(name, *args, **kwargs)\r\n File \"/home/dragon_flow/Fooocus/modules/patch_clip.py\", line 23, in <module>\r\n from transformers import CLIPTextModel, CLIPTextConfig, modeling_utils, CLIPVisionConfig, CLIPVisionModelWithProjection\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in _import\r\n return original_import(name, *args, **kwargs)\r\n File \"<frozen importlib._bootstrap>\", line 1075, in _handle_fromlist\r\n File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/site-packages/transformers/utils/import_utils.py\", line 1077, in getattr\r\n value = getattr(module, name)\r\n File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/site-packages/transformers/utils/import_utils.py\", line 1076, in getattr\r\n module = self._get_module(self._class_to_module[name])\r\n File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/site-packages/transformers/utils/import_utils.py\", line 1088, in _get_module\r\n raise RuntimeError(\r\n\r\nRuntimeError: Failed to import transformers.models.clip.modeling_clip because of the following error (look up to see its traceback):\r\npython: undefined symbol: cudaRuntimeGetVersion\r\n\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "f3084894402a4c0b7ed9e7164466bcedd5f5428d", "files": [{"path": "requirements_versions.txt", "Loc": {"(None, None, 5)": {"mod": [5]}}, "status": "modified"}, {"path": "readme.md", "Loc": {"(None, None, 152)": {"mod": [152]}}, "status": "modified"}, {"path": "troubleshoot.md", "Loc": {"(None, None, 107)": {"mod": [107]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", 
"info_type": "Code"}, "loctype": {"code": [], "doc": ["readme.md", "troubleshoot.md"], "test": [], "config": ["requirements_versions.txt"], "asset": []}}, {"organization": "lllyasviel", "repo_name": "Fooocus", "base_commit": "225947ac1a603124b0274da3e94d2c6cba65f732", "iss_html_url": "https://github.com/lllyasviel/Fooocus/issues/500", "iss_label": "", "title": "is this a local model or not", "body": "is this a local model or not\r\n\r\ni dont get how it could show someone elses promts if its local", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "225947ac1a603124b0274da3e94d2c6cba65f732", "files": [{"path": "models/checkpoints", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["models/checkpoints"]}}, {"organization": "lllyasviel", "repo_name": "Fooocus", "base_commit": "d7439b2d6004d50a0fda19108603a8d1941a185e", "iss_html_url": "https://github.com/lllyasviel/Fooocus/issues/3689", "iss_label": "bug\ntriage", "title": "[Bug]: Exits upon attempting to load a model on Windows", "body": "### Checklist\n\n- [X] The issue has not been resolved by following the [troubleshooting guide](https://github.com/lllyasviel/Fooocus/blob/main/troubleshoot.md)\n- [X] The issue exists on a clean installation of Fooocus\n- [X] The issue exists in the current version of Fooocus\n- [X] The issue has not been reported before recently\n- [ ] The issue has been reported before but has not been fixed yet\n\n### What happened?\n\nAttempting to run Fooocus on Windows 11 (and possibly 10, haven't tested) simply exits when attempting to load the default model, no error or nothing.\n\n### Steps to reproduce the problem\n\n1. Install Fooocus on Windows 11 with a NVIDIA GPU\r\n2. 
Attempt to run it.\n\n### What should have happened?\n\nIt should've loaded the model successfully.\n\n### What browsers do you use to access Fooocus?\n\nMozilla Firefox\n\n### Where are you running Fooocus?\n\nLocally\n\n### What operating system are you using?\n\nWindows 11 (23H2)\n\n### Console logs\n\n```Shell\n(fooocus_env) D:\\Misc4\\Fooocus>python entry_with_update.py\r\nAlready up-to-date\r\nUpdate succeeded.\r\n[System ARGV] ['entry_with_update.py']\r\nPython 3.12.7 (tags/v3.12.7:0b05ead, Oct 1 2024, 03:06:41) [MSC v.1941 64 bit (AMD64)]\r\nFooocus version: 2.5.5\r\n[Cleanup] Attempting to delete content of temp dir C:\\Users\\hkcu\\AppData\\Local\\Temp\\fooocus\r\n[Cleanup] Cleanup successful\r\nTotal VRAM 12281 MB, total RAM 16317 MB\r\nSet vram state to: NORMAL_VRAM\r\nAlways offload VRAM\r\nDevice: cuda:0 NVIDIA GeForce RTX 4070 : native\r\nVAE dtype: torch.bfloat16\r\nUsing pytorch cross attention\r\nRefiner unloaded.\r\nIMPORTANT: You are using gradio version 3.41.2, however version 4.44.1 is available, please upgrade.\r\n--------\r\nRunning on local URL: http://127.0.0.1:7865\r\n\r\nTo create a public link, set `share=True` in `launch()`.\r\n\r\n(fooocus_env) D:\\Misc4\\Fooocus>\n```\n\n\n### Additional information\n\nUsing Fooocus on the exact same machine, with the exact same amount of swap configured (4Gb) works as normal.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "d7439b2d6004d50a0fda19108603a8d1941a185e", "files": [{"path": "presets/default.json", "Loc": {"(None, None, 2)": {"mod": [2]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": ["config.txt", "config_modification_tutorial.txt"], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0\n1", "info_type": "Config\n"}, "loctype": {"code": ["presets/default.json"], "doc": [], "test": [], "config": ["config.txt", "config_modification_tutorial.txt"], "asset": []}}, {"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "6383113e8527e1c73049e26d2b3482a1b0f54b30", "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/376", "iss_label": "", "title": "\u5173\u4e8epublic url", "body": "![Screenshot 2023-04-08 170556](https://user-images.githubusercontent.com/78332286/230713429-e0cc9a3f-1da9-4e76-b24a-67c35624a866.png)\r\n\r\n\u8fd9\u4e2apublic url \u662f\u7ecf\u8fc7\u535a\u4e3b\u81ea\u5df1\u642d\u5efa\u7684\u670d\u52a1\u5668\u7684\u5417\uff1f\u6211\u672c\u5730\u642d\u5efa\u4e4b\u540e\u5728\u624b\u673a\u6253\u5f00\u8fd9\u4e2aurl\u4e5f\u80fd\u7528", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "6383113e8527e1c73049e26d2b3482a1b0f54b30", "files": [{"path": "main.py", "Loc": {"(None, None, None)": {"mod": [174]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["main.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "6c13bb7b46519312222f9afacedaa16225b673a9", "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/1545", "iss_label": "ToDo", "title": "[Bug]: Qwen1.5-14B-chat \u8fd0\u884c\u4e0d\u4e86", "body": "### Installation Method | \u5b89\u88c5\u65b9\u6cd5\u4e0e\u5e73\u53f0\n\nOneKeyInstall (\u4e00\u952e\u5b89\u88c5\u811a\u672c-windows)\n\n### Version | 
\u7248\u672c\n\nLatest | \u6700\u65b0\u7248\n\n### OS | \u64cd\u4f5c\u7cfb\u7edf\n\nWindows\n\n### Describe the bug | \u7b80\u8ff0\n\nTraceback (most recent call last):\r\n File \".\\request_llms\\local_llm_class.py\", line 158, in run\r\n for response_full in self.llm_stream_generator(**kwargs):\r\n File \".\\request_llms\\bridge_qwen_local.py\", line 46, in llm_stream_generator\r\n for response in self._model.chat_stream(self._tokenizer, query, history=history):\r\n ^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"D:\\anaconda3\\envs\\GPT_academic371\\Lib\\site-packages\\torch\\nn\\modules\\module.py\", line 1688, in __getattr__\r\n raise AttributeError(f\"'{type(self).__name__}' object has no attribute '{name}'\")\r\nAttributeError: 'Qwen2ForCausalLM' object has no attribute 'chat_stream'\n\n### Screen Shot | \u6709\u5e2e\u52a9\u7684\u622a\u56fe\n\nTraceback (most recent call last):\r\n File \".\\request_llms\\local_llm_class.py\", line 158, in run\r\n for response_full in self.llm_stream_generator(**kwargs):\r\n File \".\\request_llms\\bridge_qwen_local.py\", line 46, in llm_stream_generator\r\n for response in self._model.chat_stream(self._tokenizer, query, history=history):\r\n ^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"D:\\anaconda3\\envs\\GPT_academic371\\Lib\\site-packages\\torch\\nn\\modules\\module.py\", line 1688, in __getattr__\r\n raise AttributeError(f\"'{type(self).__name__}' object has no attribute '{name}'\")\r\nAttributeError: 'Qwen2ForCausalLM' object has no attribute 'chat_stream'\n\n### Terminal Traceback & Material to Help Reproduce Bugs | \u7ec8\u7aeftraceback\uff08\u5982\u6709\uff09 + \u5e2e\u52a9\u6211\u4eec\u590d\u73b0\u7684\u6d4b\u8bd5\u6750\u6599\u6837\u672c\uff08\u5982\u6709\uff09\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "6c13bb7b46519312222f9afacedaa16225b673a9", "files": [{"path": "request_llms/bridge_qwen_local.py", "Loc": {"('GetQwenLMHandle', 'llm_stream_generator', 34)": {"mod": []}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["request_llms/bridge_qwen_local.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "dd7a01cda53628ea07ef6192bf257f9ad51f5f47", "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/978", "iss_label": "", "title": "[Bug]: \u4ee3\u7406\u914d\u7f6e\u6210\u529f\uff0c\u4ee3\u7406\u6240\u5728\u5730\u67e5\u8be2\u8d85\u65f6\uff0c\u4ee3\u7406\u53ef\u80fd\u65e0\u6548", "body": "### Installation Method | \u5b89\u88c5\u65b9\u6cd5\u4e0e\u5e73\u53f0\n\nDocker\uff08Windows/Mac\uff09\n\n### Version | \u7248\u672c\n\nLatest | \u6700\u65b0\u7248\n\n### OS | \u64cd\u4f5c\u7cfb\u7edf\n\nMac\n\n### Describe the bug | \u7b80\u8ff0\n\n\u6309\u7167\u8981\u6c42\u4fee\u6539\u4ee3\u7406\u914d\u7f6e\u6587\u4ef6`config.py`\uff0c\u57fa\u4e8e`Dockerfile`\u6784\u5efa\u4e4b\u540e\u8fd0\u884c\u51fa\u73b0\uff0c`\u4ee3\u7406\u914d\u7f6e\u6210\u529f\uff0c\u4ee3\u7406\u6240\u5728\u5730\u67e5\u8be2\u8d85\u65f6\uff0c\u4ee3\u7406\u53ef\u80fd\u65e0\u6548`\u7684\u8b66\u544a\u26a0\ufe0f\uff0c\u5b9e\u9645\u8fd0\u884c\u62a5\u9519`ConnectionRefusedError: [Errno 111] Connection 
refused`\uff0c\u8bf7\u5e2e\u5e2e\u6211\u54ea\u91cc\u914d\u7f6e\u53ef\u80fd\u6709\u8bef\r\nps.\u4ee3\u7406\u670d\u52a1\u5730\u5740\u7aef\u53e3\u914d\u7f6e\u6b63\u786e\uff0c\u4e14\u8fd0\u884c\u6b63\u5e38\uff0c\u53ef\u4ee5\u8bbf\u95ee\u5916\u7f51\n\n### Screen Shot | \u6709\u5e2e\u52a9\u7684\u622a\u56fe\n\n<img width=\"921\" alt=\"\u622a\u5c4f2023-07-21 21 12 53\" src=\"https://github.com/binary-husky/gpt_academic/assets/97352201/5f54b0b4-a515-4ae6-8360-1b1504683688\">\r\n\n\n### Terminal Traceback & Material to Help Reproduce Bugs | \u7ec8\u7aeftraceback\uff08\u5982\u6709\uff09 + \u5e2e\u52a9\u6211\u4eec\u590d\u73b0\u7684\u6d4b\u8bd5\u6750\u6599\u6837\u672c\uff08\u5982\u6709\uff09\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "dd7a01cda53628ea07ef6192bf257f9ad51f5f47", "files": [{"path": "check_proxy.py", "Loc": {"(None, 'check_proxy', 2)": {"mod": [6]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["check_proxy.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "ea4e03b1d892d462f71bab76ee0bec65d541f6b7", "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/1286", "iss_label": "", "title": "[Feature]: \u8bf7\u95ee\u662f\u5426\u6210\u529f\u4fee\u6539 api2d-gpt-3.5-turbo-16k \u7cfb\u5217\u6a21\u578b max_token \u4e3a 16385 ", "body": "### Class | \u7c7b\u578b\n\n\u5927\u8bed\u8a00\u6a21\u578b\n\n### Feature Request | \u529f\u80fd\u8bf7\u6c42\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "ea4e03b1d892d462f71bab76ee0bec65d541f6b7", "files": [{"path": "request_llms/bridge_all.py", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["request_llms/bridge_all.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "526b4d8ecd1adbdcf97946b3bca4c89feda6ec04", "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/850", "iss_label": "cause of issue is clear", "title": "[Bug]: Json\u5f02\u5e38 \u201cerror\u201d:", "body": "### Installation Method | \u5b89\u88c5\u65b9\u6cd5\u4e0e\u5e73\u53f0\n\nPip Install (I used latest requirements.txt)\n\n### Version | \u7248\u672c\n\nLatest | \u6700\u65b0\u7248\n\n### OS | \u64cd\u4f5c\u7cfb\u7edf\n\nMac\n\n### Describe the bug | \u7b80\u8ff0\n\nTraceback (most recent call last):\r\n File \"./request_llm/bridge_chatgpt.py\", line 189, in predict\r\n if ('data: [DONE]' in chunk_decoded) or (len(json.loads(chunk_decoded[6:])['choices'][0][\"delta\"]) == 0):\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/zihengli/anaconda3/envs/gptac_venv/lib/python3.11/json/__init__.py\", line 346, in loads\r\n return _default_decoder.decode(s)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/zihengli/anaconda3/envs/gptac_venv/lib/python3.11/json/decoder.py\", line 337, in decode\r\n obj, end = self.raw_decode(s, idx=_w(s, 0).end())\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/zihengli/anaconda3/envs/gptac_venv/lib/python3.11/json/decoder.py\", line 355, in raw_decode\r\n raise JSONDecodeError(\"Expecting value\", s, err.value) from 
None\r\njson.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)\r\n\r\nJson\u5f02\u5e38 \u201cerror\u201d: { \u201cmessage\u201d: \u201c\u201d, \u201ctype\u201d: \u201cinvalid_request_error\u201d, \u201cparam\u201d: null, \u201ccode\u201d: \u201cinvalid_api_key\u201d }}\n\n### Screen Shot | \u6709\u5e2e\u52a9\u7684\u622a\u56fe\n\n<img width=\"1341\" alt=\"image\" src=\"https://github.com/binary-husky/gpt_academic/assets/125801419/c448d538-e762-4bbe-b76a-05d921c34ded\">\r\n\n\n### Terminal Traceback & Material to Help Reproduce Bugs | \u7ec8\u7aeftraceback\uff08\u5982\u6709\uff09 + \u5e2e\u52a9\u6211\u4eec\u590d\u73b0\u7684\u6d4b\u8bd5\u6750\u6599\u6837\u672c\uff08\u5982\u6709\uff09\n\ngpt-3.5-turbo : 0 : 1 ..........\r\nTraceback (most recent call last):\r\n File \"/Users/zihengli/chatgpt_academic/request_llm/bridge_chatgpt.py\", line 189, in predict\r\n if ('data: [DONE]' in chunk_decoded) or (len(json.loads(chunk_decoded[6:])['choices'][0][\"delta\"]) == 0):\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/zihengli/anaconda3/envs/gptac_venv/lib/python3.11/json/__init__.py\", line 346, in loads\r\n return _default_decoder.decode(s)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/zihengli/anaconda3/envs/gptac_venv/lib/python3.11/json/decoder.py\", line 337, in decode\r\n obj, end = self.raw_decode(s, idx=_w(s, 0).end())\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/zihengli/anaconda3/envs/gptac_venv/lib/python3.11/json/decoder.py\", line 355, in raw_decode\r\n raise JSONDecodeError(\"Expecting value\", s, err.value) from None\r\njson.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "526b4d8ecd1adbdcf97946b3bca4c89feda6ec04", "files": [{"path": "config.py", "Loc": {"(None, None, None)": {"mod": [1]}}, "status": "modified"}, {"path": "README.md", "Loc": {"(None, None, 101)": {"mod": [101]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["config.py"], "doc": ["README.md"], "test": [], "config": [], "asset": []}}, {"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "fdffbee1b02bd515ceb4519ae2a830a547b695b4", "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/1137", "iss_label": "", "title": "[Bug]: Connection errored out", "body": "### Installation Method | \u5b89\u88c5\u65b9\u6cd5\u4e0e\u5e73\u53f0\n\nPip Install (I used latest requirements.txt)\n\n### Version | \u7248\u672c\n\nLatest | \u6700\u65b0\u7248\n\n### OS | \u64cd\u4f5c\u7cfb\u7edf\n\nLinux\n\n### Describe the bug | \u7b80\u8ff0\n\n\u4f60\u597d, \u7248\u672c3.54\r\n\u90e8\u7f72\u5728vps\u4e0a, os\u662fubuntu 20.04\r\n\u6302\u5728\u4e86\u516c\u7f51, \u6b64\u524d\u5747\u53ef\u6b63\u5e38\u4f7f\u7528\r\n\u4f46\u662f\u7a81\u7136\u51fa\u73b0\u4e86\u8fd9\u6837\u7684\u95ee\u9898, \u5982\u4e0b\u56fe\r\n\r\n\u8bf7\u95ee\u8fd9\u662f\u4ec0\u4e48\u539f\u56e0\u5462? \u662f\u8be5vps\u7684ip\u4e0d\u884c, \u88abopenai ban\u4e86\u4e48? 
\u8fd8\u662f\u4ec0\u4e48\u522b\u7684\u539f\u56e0, \u8c22\u8c22\r\n\r\n\n\n### Screen Shot | \u6709\u5e2e\u52a9\u7684\u622a\u56fe\n\n![Snipaste_2023-09-30_15-01-00](https://github.com/binary-husky/gpt_academic/assets/59535777/9567364a-6bff-4878-b92a-94087a02c655)\r\n\n\n### Terminal Traceback & Material to Help Reproduce Bugs | \u7ec8\u7aeftraceback\uff08\u5982\u6709\uff09 + \u5e2e\u52a9\u6211\u4eec\u590d\u73b0\u7684\u6d4b\u8bd5\u6750\u6599\u6837\u672c\uff08\u5982\u6709\uff09\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "fdffbee1b02bd515ceb4519ae2a830a547b695b4", "files": [{"path": "main.py", "Loc": {"(None, 'main', 3)": {"mod": [287]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": ["nginx.conf"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0\n2\uff1f", "info_type": "Config"}, "loctype": {"code": ["main.py", "nginx.conf"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "a2002ebd85f441b3cd563bae28e9966006068ad6", "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/462", "iss_label": "", "title": "ERROR: Invalid requirement: '__pycache__/' (from line 2 of requirements.txt)", "body": "**Describe the bug \u7b80\u8ff0**\r\nERROR: Invalid requirement: '__pycache__/' (from line 2 of requirements.txt)\r\n**Screen Shot \u622a\u56fe**\r\n![image](https://user-images.githubusercontent.com/46212839/231796758-e537f323-bb03-4fb1-97c8-3b80fddc8476.png)\r\n\r\n![image](https://user-images.githubusercontent.com/46212839/231796688-14d0eb47-8ea7-4d73-9ccd-259b1b10f5df.png)\r\n\r\n**Terminal Traceback \u7ec8\u7aeftraceback\uff08\u5982\u679c\u6709\uff09**\r\n\r\n\r\nBefore submitting an issue \u63d0\u4ea4issue\u4e4b\u524d\uff1a\r\n- Please try to upgrade your code. 
\u5982\u679c\u60a8\u7684\u4ee3\u7801\u4e0d\u662f\u6700\u65b0\u7684\uff0c\u5efa\u8bae\u60a8\u5148\u5c1d\u8bd5\u66f4\u65b0\u4ee3\u7801\r\n- Please check project wiki for common problem solutions.\u9879\u76ee[wiki](https://github.com/binary-husky/chatgpt_academic/wiki)\u6709\u4e00\u4e9b\u5e38\u89c1\u95ee\u9898\u7684\u89e3\u51b3\u65b9\u6cd5\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "a2002ebd85f441b3cd563bae28e9966006068ad6", "files": [{"path": "requirements.txt", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc\n\u4f9d\u8d56\u58f0\u660e"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}}, {"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "0485d01d67d6a41bb0810d6112f40602af1167a9", "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/476", "iss_label": "cause of issue is clear", "title": "\u4e0a\u4f20\u6587\u4ef6\u65f6\u91cd\u590d\u4e0a\u4f20", "body": "\r\n\u4e0a\u4f20\u6587\u4ef6\u65f6\u91cd\u590d\u4e0a\u4f20\r\n\u6837\u4f8b\u6587\u4ef6[1.docx](https://github.com/binary-husky/chatgpt_academic/files/11230280/1.docx)\r\n\u754c\u9762![TE$JF@(Q$565$CWJ4)9(A(P](https://user-images.githubusercontent.com/51219393/231979388-e73140de-f563-40c6-9e97-7f0148505cec.png)\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "0485d01d67d6a41bb0810d6112f40602af1167a9", "files": [{"path": "requirements.txt", "Loc": {"(None, None, 1)": {"mod": [1]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc\n\u4f9d\u8d56\u58f0\u660e"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}}, {"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "e594e1b928aadb36d291184bca1deee8601621a8", "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/1489", "iss_label": "", "title": "[Bug]: \u867d\u7136PDF\u751f\u6210\u5931\u8d25\u4e86, \u4f46\u8bf7\u67e5\u6536\u7ed3\u679c\uff08\u538b\u7f29\u5305\uff09, \u5185\u542b\u5df2\u7ecf\u7ffb\u8bd1\u7684Tex\u6587\u6863, \u60a8\u53ef\u4ee5\u5230Github Issue\u533a, \u7528\u8be5\u538b\u7f29\u5305\u8fdb\u884c\u53cd\u9988\u3002\u5982\u7cfb\u7edf\u662fLinux\uff0c\u8bf7\u68c0\u67e5\u7cfb\u7edf\u5b57\u4f53\uff08\u89c1Github wiki\uff09 ...", "body": "### Installation Method | \u5b89\u88c5\u65b9\u6cd5\u4e0e\u5e73\u53f0\n\nAnaconda (I used latest requirements.txt)\n\n### Version | \u7248\u672c\n\nLatest | \u6700\u65b0\u7248\n\n### OS | \u64cd\u4f5c\u7cfb\u7edf\n\nWindows\n\n### Describe the bug | \u7b80\u8ff0\n\n\u7531\u4e8e\u6700\u4e3a\u5173\u952e\u7684\u8f6c\u5316PDF\u7f16\u8bd1\u5931\u8d25, \u5c06\u6839\u636e\u62a5\u9519\u4fe1\u606f\u4fee\u6b63tex\u6e90\u6587\u4ef6\u5e76\u91cd\u8bd5, \u5f53\u524d\u62a5\u9519\u7684latex\u4ee3\u7801\u5904\u4e8e\u7b2c[-1]\u884c ...\r\n\r\n\u867d\u7136PDF\u751f\u6210\u5931\u8d25\u4e86, \u4f46\u8bf7\u67e5\u6536\u7ed3\u679c\uff08\u538b\u7f29\u5305\uff09, \u5185\u542b\u5df2\u7ecf\u7ffb\u8bd1\u7684Tex\u6587\u6863, \u60a8\u53ef\u4ee5\u5230Github Issue\u533a, \u7528\u8be5\u538b\u7f29\u5305\u8fdb\u884c\u53cd\u9988\u3002\u5982\u7cfb\u7edf\u662fLinux\uff0c\u8bf7\u68c0\u67e5\u7cfb\u7edf\u5b57\u4f53\uff08\u89c1Github wiki\uff09 
...\r\n\r\n\u62a5\u544a\u5df2\u7ecf\u6dfb\u52a0\u5230\u53f3\u4fa7\u201c\u6587\u4ef6\u4e0a\u4f20\u533a\u201d\uff08\u53ef\u80fd\u5904\u4e8e\u6298\u53e0\u72b6\u6001\uff09\uff0c\u8bf7\u67e5\u6536\u3002\r\n[gpt_log\\default_user\\shared\\2024-01-18-14-25-51-result.zip](http://localhost:50649/file=C:/Users/admin/gpt_academic/gpt_log/default_user/shared/2024-01-18-14-25-51-result.zip)\r\n[gpt_log\\default_user\\shared\\2024-01-18-14-25-41.trans.html](http://localhost:50649/file=C:/Users/admin/gpt_academic/gpt_log/default_user/shared/2024-01-18-14-25-41.trans.html)\n\n### Screen Shot | \u6709\u5e2e\u52a9\u7684\u622a\u56fe\n\n![image](https://github.com/binary-husky/gpt_academic/assets/102421741/01fc2c02-ea15-4717-af77-e89797e407d1)\r\n\n\n### Terminal Traceback & Material to Help Reproduce Bugs | \u7ec8\u7aeftraceback\uff08\u5982\u6709\uff09 + \u5e2e\u52a9\u6211\u4eec\u590d\u73b0\u7684\u6d4b\u8bd5\u6750\u6599\u6837\u672c\uff08\u5982\u6709\uff09\n\n[2024-01-18-14-25-51-result.zip](https://github.com/binary-husky/gpt_academic/files/13973247/2024-01-18-14-25-51-result.zip)\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [{"path": ".tex"}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": [".tex"]}}, {"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "9540cf9448026a1c8135c750866b63d320909718", "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/257", "iss_label": "", "title": "Something went wrong Connection errored out.", "body": "### Describe the bug\r\n\r\n\u542f\u52a8\u7a0b\u5e8f\u540e\uff0c\u80fd\u6253\u5f00\u9875\u9762\u6b63\u5e38\u663e\u793a\uff0c\u4f46\u662f\u4e0a\u4f20\u6587\u6863\u6216\u8005\u53d1\u9001\u63d0\u95ee\u6cd5\u4f1a\u51fa\u9519\u201cSomething went wrong Connection errored out.\u201d\r\n\r\n### Is there an existing issue for this?\r\n\r\n- [ ] I have searched the existing issues\r\n\r\n### Reproduction\r\n\r\n\u6309\u7167\u6b63\u5e38\u6b65\u9aa4\uff1a\r\ngit clone https://github.com/binary-husky/chatgpt_academic.git\r\ncd chatgpt_academic\r\npython -m pip install -r requirements.txt \r\npython main.py\r\n\r\nconfig.py\u7684\u914d\u7f6e\u662f\uff1a\r\nUSE_PROXY = True\r\n\r\n### Screenshot\r\n\r\n<img width=\"1400\" alt=\"image\" src=\"https://user-images.githubusercontent.com/66538098/229296702-36166ed4-d077-4ee8-9af3-d263b3039dc5.png\">\r\n<img width=\"1320\" alt=\"image\" src=\"https://user-images.githubusercontent.com/66538098/229327959-e8d3857d-9495-4c28-8a3f-cf1a8d294248.png\">\r\n\u7ed9\u51fa\u4e86\u6b63\u786e\u7684API key\uff0c\u5374\u53d1\u73b0\u4ece\u6ca1\u4f7f\u7528\u8fc7\r\n<img width=\"809\" alt=\"image\" src=\"https://user-images.githubusercontent.com/66538098/229331202-a1850a02-d1f2-4a69-97d1-cb5e285d8e8f.png\">\r\n\r\n\r\n### Logs\r\n\r\n```shell\r\n\u63a7\u5236\u53f0\u62a5\u9519[Error] WebSocket connection to 'ws://localhost:62694/queue/join' failed: There was a bad response from the server. 
(x4)\r\n```\r\n\r\n\r\n### System Info\r\n\r\n```shell\r\ngradio:3.24.1\r\nProductName:macOS\r\nProductVersion:13.3\r\nBuildVersion:22E252\r\n```\r\n\r\n\r\n### Severity\r\n\r\nannoying", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "gradio-app", "pro": "gradio", "path": ["gradio/routes.py"]}], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["gradio/routes.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "bfa6661367b7592e82225515e5e4845c4aad95bb", "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/252", "iss_label": "", "title": "\u80fd\u4e0d\u80fd\u4f7f\u7528azure openai key?", "body": "\u4ee3\u7406\u670d\u52a1\u5668\u4e0d\u591f\u7a33\u5b9a\uff0c\u66f4\u9ebb\u70e6\u7684\u662f\u7ed9openai\u7eed\u8d39\uff0c\u8fd8\u8981\u4e2a\u7f8e\u56fd\u4fe1\u7528\u5361\r\n\r\n\u975e\u5e38\u597d\u7684\u5e94\u7528\uff0c\u5e0c\u671b\u51fa\u66f4\u591a\u7684\u63d2\u4ef6\u529f\u80fd\uff0c\u8c22\u8c22", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "bfa6661367b7592e82225515e5e4845c4aad95bb", "files": [{"path": "config.py", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["config.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "2d2e02040d7d91d2f2a4c34f4d0bf677873b5f4d", "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/1328", "iss_label": "", "title": "[Bug]: \u7cbe\u51c6\u7ffb\u8bd1PDF\u6587\u6863(NOUGAT)\u529f\u80fd\u51fa\u9519\uff0c", "body": "### Installation Method | \u5b89\u88c5\u65b9\u6cd5\u4e0e\u5e73\u53f0\n\nOthers (Please Describe)\n\n### Version | \u7248\u672c\n\nPlease choose | \u8bf7\u9009\u62e9\n\n### OS | \u64cd\u4f5c\u7cfb\u7edf\n\nPlease choose | \u8bf7\u9009\u62e9\n\n### Describe the bug | \u7b80\u8ff0\n\n\u6d4b\u8bd5\u670d\u52a1\u5668\uff0c\u7cbe\u51c6\u7ffb\u8bd1PDF\u6587\u6863(NOUGAT)\u529f\u80fd\u51fa\u9519\uff0c\u4f46\u662f\u53ef\u4ee5\u4f7f\u7528\u7cbe\u51c6\u7ffb\u8bd1PDF\u7684\u529f\u80fd\r\n\r\n![image](https://github.com/binary-husky/gpt_academic/assets/51499671/b9a0db8c-282a-4e02-a527-97fcf63eaaa0)\r\n\r\n\u62a5\u9519\u4fe1\u606f\u5982\u4e0b\r\n![image](https://github.com/binary-husky/gpt_academic/assets/51499671/b40e4d8b-ade9-4e27-86e5-75f6027fbbb0)\r\n\n\n### Screen Shot | \u6709\u5e2e\u52a9\u7684\u622a\u56fe\n\n![image](https://github.com/binary-husky/gpt_academic/assets/51499671/ac4995d5-0a68-433b-aa44-2b3c82bbc1e3)\r\nTraceback (most recent call last):\r\n File \"./toolbox.py\", line 159, in decorated\r\n yield from f(main_input, llm_kwargs, plugin_kwargs, chatbot_with_cookie, history, *args, **kwargs)\r\n File \"./crazy_functions/\u6279\u91cf\u7ffb\u8bd1PDF\u6587\u6863_NOUGAT.py\", line 93, in \u6279\u91cf\u7ffb\u8bd1PDF\u6587\u6863\r\n yield from \u89e3\u6790PDF_\u57fa\u4e8eNOUGAT(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)\r\n File \"./crazy_functions/\u6279\u91cf\u7ffb\u8bd1PDF\u6587\u6863_NOUGAT.py\", line 111, in \u89e3\u6790PDF_\u57fa\u4e8eNOUGAT\r\n fpp = yield from nougat_handle.NOUGAT_parse_pdf(fp, chatbot, history)\r\n File 
\"./crazy_functions/crazy_utils.py\", line 761, in NOUGAT_parse_pdf\r\n raise RuntimeError(\"Nougat\u89e3\u6790\u8bba\u6587\u5931\u8d25\u3002\")\r\nRuntimeError: Nougat\u89e3\u6790\u8bba\u6587\u5931\u8d25\u3002\n\n### Terminal Traceback & Material to Help Reproduce Bugs | \u7ec8\u7aeftraceback\uff08\u5982\u6709\uff09 + \u5e2e\u52a9\u6211\u4eec\u590d\u73b0\u7684\u6d4b\u8bd5\u6750\u6599\u6837\u672c\uff08\u5982\u6709\uff09\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "2d2e02040d7d91d2f2a4c34f4d0bf677873b5f4d", "files": [{"path": "crazy_functions/crazy_utils.py", "Loc": {"('nougat_interface', 'NOUGAT_parse_pdf', 739)": {"mod": [752]}, "('nougat_interface', None, 719)": {"mod": [723]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["crazy_functions/crazy_utils.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "17abd29d5035b5b227deaad69d32cf437b23e542", "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/94", "iss_label": "", "title": "[\u4e00\u4e9b\u5efa\u8bae]input\u6846\u8fd8\u662f\u592a\u5c0f\u4e86", "body": "RT \u591a\u884c\u8f93\u5165\u8fd8\u662f\u4e0d\u65b9\u4fbf\uff0c\u5982\u679c\u9002\u5f53\u8c03\u6574\u4f1a\u66f4\u597d\u7528\u3002\r\n\r\n\u5e0c\u671b\u91c7\u7eb3\uff0c\u611f\u8c22\u5206\u4eab\u3002", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "17abd29d5035b5b227deaad69d32cf437b23e542", "files": [{"path": "main.py", "Loc": {"(None, None, None)": {"mod": [1]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["main.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "37744a9cb173477398a2609f02d5e7cef47eb677", "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/1438", "iss_label": "", "title": "[Bug]: \u6d6e\u52a8\u8f93\u5165\u6846\u5728\u62d6\u81f3\u9876\u90e8\u540e\uff0c\u65e0\u6cd5\u91cd\u65b0\u79fb\u4f4d", "body": "### Installation Method | \u5b89\u88c5\u65b9\u6cd5\u4e0e\u5e73\u53f0\n\nOthers (Please Describe)\n\n### Version | \u7248\u672c\n\nPlease choose | \u8bf7\u9009\u62e9\n\n### OS | \u64cd\u4f5c\u7cfb\u7edf\n\nMac\n\n### Describe the bug | \u7b80\u8ff0\n\n\u6d6e\u52a8\u8f93\u5165\u6846\u5728\u62d6\u81f3\u9876\u90e8\u540e\uff0c\u65e0\u6cd5\u91cd\u65b0\u79fb\u4f4d\r\n\r\n\u671f\u671b\uff1a\u91cd\u65b0\u52fe\u9009\u540e\uff0c\u5e94\u8be5\u56de\u5230\u521d\u59cb\u4f4d\u7f6e\n\n### Screen Shot | \u6709\u5e2e\u52a9\u7684\u622a\u56fe\n\n![2024-01-02 14 36 52](https://github.com/binary-husky/gpt_academic/assets/46100050/86a648dc-ab38-486f-9a0b-7f71dde0bd57)\n\n### Terminal Traceback & Material to Help Reproduce Bugs | \u7ec8\u7aeftraceback\uff08\u5982\u6709\uff09 + \u5e2e\u52a9\u6211\u4eec\u590d\u73b0\u7684\u6d4b\u8bd5\u6750\u6599\u6837\u672c\uff08\u5982\u6709\uff09\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/binary-husky/gradio-fix/commit/fb67dd12f58aa53c75a90378cddbc811ac3c01d2", "file_loc": {}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "binary-husky", "pro": "gradio-fix", 
"path": ["{'base_commit': 'fb67dd12f58aa53c75a90378cddbc811ac3c01d2', 'files': [{'path': 'js/app/src/components/Floating/StaticFloating.svelte', 'status': 'modified', 'Loc': {(None, None, 48): {'add': [48]}}}]}"]}], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["gradio-fix", "js/app/src/components/Floating/StaticFloating.svelte"]}}, {"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "6538c58b8e5a4a7ae08dfa1ae9970bc422158096", "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/620", "iss_label": "", "title": "\u60f3\u95ee\u95eenewbing\u7684cookies\u600e\u4e48\u586b\u5199\uff0c\u6211\u4ecejavascript:alert(document.cookie)\u627e\u5230\u4e86cookies\u4f46\u662f\u4e00\u76f4\u663e\u793acookies\u6709\u9519", "body": "![image](https://user-images.githubusercontent.com/73226302/234341095-273ea6e0-aadc-4e19-8966-05709d61f9b1.png)\r\n![image](https://user-images.githubusercontent.com/73226302/234341151-017d0634-620a-4377-b972-ddb2d7a22d2a.png)\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "6538c58b8e5a4a7ae08dfa1ae9970bc422158096", "files": [{"path": "config.py", "Loc": {"(None, None, None)": {"mod": [69]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "", "info_type": "Other"}, "loctype": {"code": ["config.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "6d8c8cd3f0b9d2b6fe8d412b83f902cbd43fa0bd", "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/150", "iss_label": "documentation\nhigh value issue", "title": "\u6709\u6ca1\u6709\u5b8c\u5168\u90e8\u7f72\u6210\u529f\u7684\u5927\u795e\u51fa\u4e2a\u8be6\u7ec6\u7684\u90e8\u7f72\u6b65\u9aa4\u5440\uff1fWindows \u6709\u622a\u56fe\uff0c\u8dea\u6c42", "body": "Windows\u5b89\u88c5\u90e8\u7f72\r\n\u57fa\u672c\u73af\uff1a\u5b89\u88c5anaconda\r\n1.\u4e0b\u8f7d\u9879\u76ee CMD\r\n\u9009\u62e9\u8def\u5f84\r\ngit clone https://github.com/binary-husky/chatgpt_academic.git\r\ncd chatgpt_academic\r\n\u6211\u4eec\u5efa\u8bae\u5c06config.py\u590d\u5236\u4e3aconfig_private.py\u5e76\u5c06\u540e\u8005\u7528\u4f5c\u4e2a\u6027\u5316\u914d\u7f6e\u6587\u4ef6\u4ee5\u907f\u514dconfig.py\u4e2d\u7684\u53d8\u66f4\u5f71\u54cd\u4f60\u7684\u4f7f\u7528\u6216\u4e0d\u5c0f\u5fc3\u5c06\u5305\u542b\u4f60\u7684OpenAI API KEY\u7684config.py\u63d0\u4ea4\u81f3\u672c\u9879\u76ee\u3002\r\ncp config.py config_private.py\r\n2.\u521b\u5efa\u865a\u62df\u73af\u5883 python 3.11\r\nconda create -n chatgpt python=3.11.0 #\u65b0\u5efa\u73af\u5883\u3001\r\n3.\u8fdb\u5165\u9879\u76ee\u4e0b\u8f7d\u8def\u5f84\r\n\u4f8b\u5982 cd G:\\python\\Program\\chatgpt_academic\r\n4.\u542f\u52a8\u865a\u62df\u73af\u5883\r\nconda activate chatgpt\r\n5. \u5b89\u88c5 gradio>=3.23\r\n\uff081\uff09\u5230https://pypi.org/project/gradio/ \u4e0b\u8f7dwhl\u7248\u672c\r\n\uff082\uff09pip install G:\\python\\Program\\chatgpt_academic\\gradio-3.23.0-py3-none-any.whl\r\n6.\u914d\u7f6e\u5176\u4ed6\u73af\u5883\r\n\uff081\uff09\u6253\u5f00requirements.txt\uff0c\u6ce8\u91ca\u6389gradio\uff0c\u7136\u540e\u4fdd\u5b58\r\n\uff082\uff09\u8fd0\u884c python -m pip install -r requirements.txt\r\n7.\u542f\u52a8\u4ee3\u7406\r\n8. 
\u914d\u7f6econfig_private.py\r\n\uff081\uff09\u6dfb\u52a0API_KEY\r\n\uff082\uff09\u4fee\u6539USE_PROXY = True\r\n\uff083\uff09\u4fee\u6539proxies\r\n\u5728\u6d4f\u89c8\u5668\u8f93\u5165: https://ipapi.co/json/\r\n\u6d4f\u89c8\u5668\u4e0a\u53f3\u952e->\u68c0\u67e5->\u7f51\u7edc->ctrl+r\r\n\u6253\u5f00json\uff0c\u5c06\u8fdc\u7a0b\u5730\u5740\u4fee\u6539\u5230proxies = { \"http\": \"104.26.9.44:443\", \"https\": \"104.26.9.44:443\", }\r\n9.\u542f\u52a8\u7a0b\u5e8f\r\npython main.py", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "6d8c8cd3f0b9d2b6fe8d412b83f902cbd43fa0bd", "files": [{"path": "requirements.txt", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code\n+ \nDoc"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}}, {"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "e20070939c6c7eeca33a8438041c9e038836957b", "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/568", "iss_label": "enhancement", "title": "\u80fd\u5426\u589e\u52a0\u804a\u5929\u5185\u5bb9\u5bfc\u51fa\u529f\u80fd\uff1f", "body": null, "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [], "ass_file_loc": ["gpt_log/chat_secrets.log"], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "5", "loc_way": "comment", "loc_scope": "1", "info_type": "Config"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["gpt_log/chat_secrets.log"]}}, {"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "6c448b9a601ba4b9cc84e8bc625a3a91b1982ba4", "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/756", "iss_label": "", "title": "[Bug]: ", "body": "### Installation Method | \u5b89\u88c5\u65b9\u6cd5\u4e0e\u5e73\u53f0\n\nPip Install (I used latest requirements.txt and python>=3.8)\n\n### Describe the bug | \u7b80\u8ff0\n\n\u53ea\u6709\u51fa\u53bb\u7684\u6d88\u606f\uff0c\u6ca1\u6709\u8fd4\u56de\u6d88\u606f\uff0c\u8bd5\u8fc7\u4e86ap2id\u548cnewbing\n\n### Screen Shot | \u6709\u5e2e\u52a9\u7684\u622a\u56fe\n\n![\u5fae\u4fe1\u622a\u56fe_20230517095200](https://github.com/binary-husky/gpt_academic/assets/43396544/32d9bc41-351b-4ceb-a7d2-99e09b21ddb5)\r\n\n\n### Terminal Traceback & Material to Help Reproduce Bugs | \u7ec8\u7aeftraceback\uff08\u5982\u6709\uff09 + \u5e2e\u52a9\u6211\u4eec\u590d\u73b0\u7684\u6d4b\u8bd5\u6750\u6599\u6837\u672c\uff08\u5982\u6709\uff09\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "6c448b9a601ba4b9cc84e8bc625a3a91b1982ba4", "files": [{"path": "request_llms/requirements_newbing.txt", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "\u4f9d\u8d56"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["request_llms/requirements_newbing.txt"], "asset": []}}, {"organization": "deepseek-ai", "repo_name": "DeepSeek-V3", "base_commit": "6a30b43249a5710a3adb18c11763222d3fca8756", "iss_html_url": "https://github.com/deepseek-ai/DeepSeek-V3/issues/566", "iss_label": "", "title": "Please provide the code for your model architecture.", "body": "**Is your feature request related to a problem? Please describe.**\nThis repo only provides weights. 
It makes it difficult to confirm claims from the article.\n\n**Describe the solution you'd like**\n A repo where the code to the model architecture is provided. \n\n**Describe alternatives you've considered**\nClearly state that the model is not open source. \n\n**Additional context**\nNone\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "6a30b43249a5710a3adb18c11763222d3fca8756", "files": [{"path": "inference/model.py", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["inference/model.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "deepseek-ai", "repo_name": "DeepSeek-V3", "base_commit": "0d16ea24c8030a30d4fe8a75b28e05b03b4e0970", "iss_html_url": "https://github.com/deepseek-ai/DeepSeek-V3/issues/210", "iss_label": "", "title": "[BUG]convert\u540e\u8fd0\u884c\u9519\u8bef", "body": "**Describe the bug**\r\n[rank0]: ValueError: Unrecognized model in ../DV3-hf-32/. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: albert, align, altclip, audio-spectrogram-transformer, autoformer, bark, bart, beit, bert, bert-generation, big_bird, bigbird_pegasus, biogpt, bit, blenderbot, blenderbot-small, blip, blip-2, bloom, bridgetower, bros, camembert, canine, chameleon, chinese_clip, chinese_clip_vision_model, clap, clip, clip_text_model, clip_vision_model, clipseg, clvp, code_llama, codegen, cohere, conditional_detr, convbert, convnext, convnextv2, cpmant, ctrl, cvt, dac, data2vec-audio, data2vec-text, data2vec-vision, dbrx, deberta, deberta-v2, decision_transformer, deformable_detr, deit, depth_anything, deta, detr, dinat, dinov2, distilbert, donut-swin, dpr, dpt, efficientformer, efficientnet, electra, encodec, encoder-decoder, ernie, ernie_m, esm, falcon, falcon_mamba, fastspeech2_conformer, flaubert, flava, fnet, focalnet, fsmt, funnel, fuyu, gemma, gemma2, git, glm, glpn, gpt-sw3, gpt2, gpt_bigcode, gpt_neo, gpt_neox, gpt_neox_japanese, gptj, gptsan-japanese, granite, granitemoe, graphormer, grounding-dino, groupvit, hiera, hubert, ibert, idefics, idefics2, idefics3, imagegpt, informer, instructblip, instructblipvideo, jamba, jetmoe, jukebox, kosmos-2, layoutlm, layoutlmv2, layoutlmv3, led, levit, lilt, llama, llava, llava_next, llava_next_video, llava_onevision, longformer, longt5, luke, lxmert, m2m_100, mamba, mamba2, marian, markuplm, mask2former, maskformer, maskformer-swin, mbart, mctct, mega, megatron-bert, mgp-str, mimi, mistral, mixtral, mllama, mobilebert, mobilenet_v1, mobilenet_v2, mobilevit, mobilevitv2, moshi, mpnet, mpt, mra, mt5, musicgen, musicgen_melody, mvp, nat, nemotron, nezha, nllb-moe, nougat, nystromformer, olmo, olmoe, omdet-turbo, oneformer, open-llama, openai-gpt, opt, owlv2, owlvit, paligemma, patchtsmixer, patchtst, pegasus, pegasus_x, perceiver, persimmon, phi, phi3, phimoe, pix2struct, pixtral, plbart, poolformer, pop2piano, prophetnet, pvt, pvt_v2, qdqbert, qwen2, qwen2_audio, qwen2_audio_encoder, qwen2_moe, qwen2_vl, rag, realm, recurrent_gemma, reformer, regnet, rembert, resnet, retribert, roberta, roberta-prelayernorm, roc_bert, roformer, rt_detr, rt_detr_resnet, rwkv, sam, seamless_m4t, seamless_m4t_v2, segformer, seggpt, sew, sew-d, siglip, siglip_vision_model, speech-encoder-decoder, speech_to_text, speech_to_text_2, speecht5, splinter, squeezebert, stablelm, starcoder2, superpoint, 
swiftformer, swin, swin2sr, swinv2, switch_transformers, t5, table-transformer, tapas, time_series_transformer, timesformer, timm_backbone, trajectory_transformer, transfo-xl, trocr, tvlt, tvp, udop, umt5, unispeech, unispeech-sat, univnet, upernet, van, video_llava, videomae, vilt, vipllava, vision-encoder-decoder, vision-text-dual-encoder, visual_bert, vit, vit_hybrid, vit_mae, vit_msn, vitdet, vitmatte, vits, vivit, wav2vec2, wav2vec2-bert, wav2vec2-conformer, wavlm, whisper, xclip, xglm, xlm, xlm-prophetnet, xlm-roberta, xlm-roberta-xl, xlnet, xmod, yolos, yoso, zamba, zoedepth\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior.\r\n\r\n**Expected behavior**\r\nA clear and concise description of what you expected to happen.\r\n\r\n**Screenshots**\r\nIf applicable, add screenshots to help explain your problem.\r\n\r\n**Additional context**\r\nAdd any other context about the problem here.\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [], "ass_file_loc": ["tokenizer.json", "tokenizer_config.json"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "2", "info_type": "Code"}, "loctype": {"code": ["tokenizer_config.json", "tokenizer.json"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "c529bd4f1cb3a8abc53574b7211fc0b887107073", "iss_html_url": "https://github.com/deepfakes/faceswap/issues/98", "iss_label": "wontfix", "title": "IndexError: list index out of range on training", "body": "```\r\n# python3.6 faceswap.py train -A ~/faceswap/data/trump -B ~/faceswap/data/stalone -m ~/faceswap/models/\r\nModel A Directory: /root/faceswap/data/trump\r\nModel B Directory: /root/faceswap/data/stalone\r\nTraining data directory: /root/faceswap/models\r\nLoading data, this may take a while...\r\nLoading Model from Model_Original plugin...\r\n/usr/local/lib/python3.6/dist-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.\r\n from ._conv import register_converters as _register_converters\r\nUsing TensorFlow backend.\r\n/usr/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.6\r\n return f(*args, **kwds)\r\nFailed loading existing training data.\r\nUnable to open file (unable to open file: name = '/root/faceswap/models/encoder.h5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)\r\nLoading Trainer from Model_Original plugin...\r\nStarting. 
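Editor's note on the DeepSeek-V3 record above (the title, "[BUG]convert后运行错误", reads "[BUG] error when running after convert"): the traceback says transformers cannot resolve the architecture because `config.json` lacks a `model_type` key. A minimal sketch of a patch, assuming the converted checkpoint lives at the path from the report and that `"deepseek_v3"` is the correct value (verify against the official Hugging Face config before use):

```python
import json

# Hypothetical fix: transformers resolves the architecture via the
# `model_type` key in config.json. The value "deepseek_v3" is an
# assumption here, not confirmed by the report.
cfg_path = "../DV3-hf-32/config.json"  # path taken from the error message
with open(cfg_path, "r", encoding="utf-8") as f:
    cfg = json.load(f)
cfg.setdefault("model_type", "deepseek_v3")
with open(cfg_path, "w", encoding="utf-8") as f:
    json.dump(cfg, f, indent=2)
```

Loading with `trust_remote_code=True` against a config that ships its own modeling code is the usual alternative when the installed transformers version has no built-in support.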
Press \"Enter\" to stop training and save model\r\nException in thread Thread-2:\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.6/threading.py\", line 916, in _bootstrap_inner\r\n self.run()\r\n File \"/root/faceswap/lib/utils.py\", line 42, in run\r\n for item in self.generator:\r\n File \"/root/faceswap/lib/training_data.py\", line 43, in minibatch\r\n rtn = numpy.float32([read_image(data[j]) for j in range(i,i+size)])\r\n File \"/root/faceswap/lib/training_data.py\", line 43, in <listcomp>\r\n rtn = numpy.float32([read_image(data[j]) for j in range(i,i+size)])\r\nIndexError: list index out of range\r\n```\r\n\r\n## Expected behavior\r\nThere shouldn't be \"IndexError: list index out of range\"\r\n\r\n## Actual behavior\r\n\r\n*Describe, in some detail, what the program does instead. Be sure to include any error message or screenshots.*\r\n\r\n## Steps to reproduce\r\n\r\n## Other relevant information\r\nH/W: 4 cores, 16GB, Nvidial P100\r\nS/W: Ubuntu 16.04, NVIDIA binary driver - version 384.111\r\nCUDA 8.0\r\nCuDNN 6\r\nPython 3.6\r\nfaceswap commit: 0f8d9db826d7588f9feb151ab234f2aaf0d8ecf2\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "c529bd4f1cb3a8abc53574b7211fc0b887107073", "files": [{"path": "lib/training_data.py", "Loc": {"(None, 'minibatch', 33)": {"mod": [38]}}, "status": "modified"}, {"path": "lib/cli/args_train.py", "Loc": {"('TrainArgs', 'get_argument_list', 35)": {"mod": [140]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["lib/cli/args_train.py", "lib/training_data.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "183aee37e93708c0ae73845face5b4469319ebd3", "iss_html_url": "https://github.com/deepfakes/faceswap/issues/1208", "iss_label": "", "title": "[Question] Which part of code to implement 'Configure Settings' GUI?", "body": "Which part of code to implement 'Configure Settings' GUI?\r\n\r\n![a](https://user-images.githubusercontent.com/32773605/152643917-b26f4b16-71e0-4f9a-8209-93206355f1b6.jpg)\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "183aee37e93708c0ae73845face5b4469319ebd3", "files": [{"path": "lib/gui/popup_configure.py", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["lib/gui/popup_configure.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "3ba44f75518e8010befab88042247e5147d0f212", "iss_html_url": "https://github.com/deepfakes/faceswap/issues/15", "iss_label": "question\ndata", "title": "do i have to rename the given training data to src? ", "body": "if not, where to put the unzip data into directory. sorry for asking newby questions. \r\ni am using pycharm and docker. 
thanks\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "3ba44f75518e8010befab88042247e5147d0f212", "files": [{"path": "convert_trump_cage.py", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["convert_trump_cage.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "68ef3b992674d87d0c73da9c29a4c5a0e735f04b", "iss_html_url": "https://github.com/deepfakes/faceswap/issues/101", "iss_label": "", "title": "help me", "body": "virtualenv '/home/test/Desktop/faceswap-master'\r\npython3.5 '/home/test/Desktop/faceswap-master/faceswap.py' -h\r\n\r\ntest@ubuntu:~$ virtualenv '/home/test/Desktop/faceswap-master'\r\nNew python executable in /home/test/Desktop/faceswap-master/bin/python\r\nInstalling setuptools, pip, wheel...done.\r\ntest@ubuntu:~$ python3.5 '/home/test/Desktop/faceswap-master/faceswap.py' -h\r\nTraceback (most recent call last):\r\n File \"/home/test/Desktop/faceswap-master/faceswap.py\", line 8, in <module>\r\n from lib.utils import FullHelpArgumentParser\r\n File \"/home/test/Desktop/faceswap-master/lib/utils.py\", line 5, in <module>\r\n from scandir import scandir\r\nImportError: No module named 'scandir'\r\ntest@ubuntu:~$ \r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "68ef3b992674d87d0c73da9c29a4c5a0e735f04b", "files": [{"path": "requirements-gpu.txt", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc\n\u4f9d\u8d56\u58f0\u660e"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements-gpu.txt"], "asset": []}}, {"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "629c02a61e1ad5f769f8f7388a091d5ce9aa8160", "iss_html_url": "https://github.com/deepfakes/faceswap/issues/1254", "iss_label": "", "title": "Can't Open GUI on Windows", "body": "**Describe the bug**\r\nWhenever I try to open the GUI of Faceswap, I get an error and it doesn't open. I am on Windows, and I have uninstalled and reinstalled multiple times, including redoing the conda environment. CLI functions work, but the main GUI does not open, either from the shortcut or a manual terminal run. I have also tried running with and without admin\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Uninstall old Faceswap versions\r\n2. Install the latest windows version\r\n3. Run the Faceswap program in GUI mode\r\n4. See error\r\n\r\n**Expected behavior**\r\nI want the Faceswap GUI to open. 
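Editor's note on the `ImportError: No module named 'scandir'` record above: `scandir` is a third-party backport pulled in by the requirements files, so either installing it (`pip install scandir`) or falling back to the standard library fixes the import. A small sketch of the fallback:

```python
# Sketch: prefer the third-party backport if present, otherwise use the
# standard-library version that ships with Python 3.5 and later.
try:
    from scandir import scandir
except ImportError:
    from os import scandir  # stdlib since Python 3.5
```

The reporter ran Python 3.5, so the stdlib fallback alone would have sufficed.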
It doesn't.\r\n\r\n**Screenshots**\r\n![image](https://user-images.githubusercontent.com/63259343/183342692-6f1c1bec-df9b-4f71-8a23-f2f77fccc008.png)\r\n![image](https://user-images.githubusercontent.com/63259343/183342835-998834e0-66d6-4751-84e9-a1c150e22063.png)\r\n\r\n\r\n**Desktop:**\r\n - OS: [Windows 11]\r\n - Python Version [3.9.12]\r\n - Conda Version [4.13.0]\r\n - Commit ID [6b2aac6]\r\n\r\n\r\n**Crash Report**\r\n[crash_report.2022.08.07.224753577271.log](https://github.com/deepfakes/faceswap/files/9278810/crash_report.2022.08.07.224753577271.log)", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "629c02a61e1ad5f769f8f7388a091d5ce9aa8160", "files": [{"path": "requirements/_requirements_base.txt", "Loc": {"(None, None, 15)": {"mod": [15]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements/_requirements_base.txt"], "asset": []}}, {"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "9696b5606fd0963814fc0c3644565aa60face69d", "iss_html_url": "https://github.com/deepfakes/faceswap/issues/462", "iss_label": "", "title": "Modify extractor to focus on mouth", "body": "I'd like to modify the extractor script to focus on the lower half of the face - specifically the mouth area. \r\n\r\nI'm experimenting with changing people's mouth movements, and I want to train a higher resolution \"mouth only\" network, so I can create new speech patterns that are re-composited onto the original footage. \r\n\r\nIs there a way to modify which facial landmarks the extractor looks at so it just takes the mouth?\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "9696b5606fd0963814fc0c3644565aa60face69d", "files": [{"path": "lib/aligner.py", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["lib/aligner.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "e518206c8ef935ebc1b1ff64ae2901cc8ef05f94", "iss_html_url": "https://github.com/deepfakes/faceswap/issues/57", "iss_label": "", "title": "Cannot install tensorflow-gpu requirement", "body": "\r\nTried installing the requirements-gpu.txt and get this error:\r\n\r\nCollecting tensorflow-gpu==1.4.0 (from -r requirements-gpu.txt (line 6)) Cache entry deserialization failed, entry ignored Could not find a version that satisfies the requirement tensorflow-gpu==1.4.0 (from -r requirements-gpu.txt (line 6)) (from versions: ) No matching distribution found for tensorflow-gpu==1.4.0 (from -r requirements-gpu.txt (line 6))\r\n\r\nI went here to troubleshoot the issue: https://github.com/tensorflow/tensorflow/issues/8251\r\nInstalled Python 64bit. 
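Editor's note on the "focus on mouth" record above: the located file is `lib/aligner.py`, and aligners of this kind conventionally use the 68-point facial landmark model, in which indices 48-67 cover the outer and inner lips. A sketch of a mouth-only crop under that assumption (the helper name and padding are illustrative, not part of faceswap's API):

```python
import numpy as np

# Assumes the standard 68-point landmark convention: indices 48-67 are
# the mouth. `landmarks` is an iterable of (x, y) pairs.
MOUTH_POINTS = slice(48, 68)

def mouth_bbox(landmarks, pad=10):
    """Return an (x0, y0, x1, y1) crop box around the mouth landmarks."""
    pts = np.asarray(landmarks)[MOUTH_POINTS]
    x0, y0 = pts.min(axis=0) - pad
    x1, y1 = pts.max(axis=0) + pad
    return int(x0), int(y0), int(x1), int(y1)
```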
Opened new command prompt window and typed in: pip3 install --upgrade tensorflow-gpu\r\n\r\nSuccessfully uninstalled setuptools-28.8.0\r\nSuccessfully installed bleach-1.5.0 enum34-1.1.6 html5lib-0.9999999 markdown-2.6.11 numpy-1.13.3 protobuf-3.5.1 setuptools-38.4.0 six-1.11.0 tensorflow-gpu-1.4.0 tensorflow-tensorboard-0.4.0rc3 werkzeug-0.14.1 wheel-0.30.0\r\n\r\nWent back to my faceswap env to enter the requirements-gpu.txt and still get the same error:\r\n(faceswap) C:\\faceswap>pip install -r requirements-gpu.txt\r\nCollecting tensorflow-gpu==1.4.0 (from -r requirements-gpu.txt (line 6))\r\n Could not find a version that satisfies the requirement tensorflow-gpu==1.4.0 (from -r requirements-gpu.txt (line 6)) (from versions: )\r\nNo matching distribution found for tensorflow-gpu==1.4.0 (from -r requirements-gpu.txt (line 6))\r\n\r\n## Other relevant information\r\n\r\n- **Operating system and version:** Windows 10\r\nPython 3.6.4 (v3.6.4:d48eceb, Dec 19 2017, 06:54:40) [MSC v.1900 64 bit (AMD64)] on win32\r\n- **Faceswap version:** 1/5/2018\r\n- **Faceswap method:** CPU/GPU \"CPU method only works\"\r\n- ...\r\n\r\n ", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "e518206c8ef935ebc1b1ff64ae2901cc8ef05f94", "files": [{"path": "requirements-gpu.txt", "Loc": {"(None, None, 6)": {"mod": [6]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc\n\u4f9d\u8d56\u58f0\u660e"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements-gpu.txt"], "asset": []}}, {"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "51f1993d93e0ffb581d44416f327f0cf731c34e8", "iss_html_url": "https://github.com/deepfakes/faceswap/issues/209", "iss_label": "", "title": "doesn't work on 2GB GTX 960 even with LowMem model (what params could be reduced?)", "body": "LowMem is different from the common model with 2 lines:\r\nENCODER_DIM = 512 # instead of 1024\r\n#x = self.conv(1024)(x) - commented out.\r\n\r\nBut it's still not enough to run under Ubuntu 16.04, cuda8, 1.7Gb of free video RAM.\r\nIt fails with OOM on any batch size, even with bs=1 and bs=2.\r\n\r\nWhat about having some configurable params here? Like reducing filters numbers or ENCODER_DIM or smth else? \r\nAlso that would be great to have some doc which describes few main params and their influence on quality etc. For example fakeapp allows to select number of layers, nodes etc.\r\n\r\nP.S. 
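Editor's note on the tensorflow-gpu record above: tensorflow-gpu 1.4.0 only shipped 64-bit wheels for a narrow set of CPython versions, so "no matching distribution" inside a virtualenv usually means that env's interpreter is 32-bit or an unsupported version, even when a system-wide 64-bit Python installs it fine (which matches what the reporter observed). A quick diagnostic to run inside the failing env:

```python
import struct
import sys

# Print the interpreter version and bitness of the active environment;
# a 32-bit value here explains why pip finds no tensorflow-gpu wheel.
print(sys.version)
print(struct.calcsize("P") * 8, "bit interpreter")
```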
I managed to run it with ENCODER_DIM = 64 and bs=16, but results are not so good (after 15 hours).\r\n\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "51f1993d93e0ffb581d44416f327f0cf731c34e8", "files": [{"path": "faceswap.py", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["faceswap.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "a62a85c0215c1d791dd5ca705ba5a3fef08f0ffd", "iss_html_url": "https://github.com/deepfakes/faceswap/issues/1361", "iss_label": "", "title": "Bounding boxes coordinates", "body": "It has been 2 weeks I have been working on it but cannot find the solution.\r\n\r\nI want the bounding boxes on the original image, of the result that is produced by the \"Extract\" process of faceswap code.\r\n\r\n\"Extract\" writes the faces extracted from the input image(s). I just want the coordinates from which this face is extracted (from original image).\r\n\r\nIf you could help me. I would be very grateful and would also help other people searching for the same problem.\r\nThank you.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "a62a85c0215c1d791dd5ca705ba5a3fef08f0ffd", "files": [{"path": "lib/align/detected_face.py", "Loc": {"('DetectedFace', '__init__', 82)": {"mod": [84, 85, 86, 87]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["lib/align/detected_face.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "3b1b", "repo_name": "manim", "base_commit": "49582c35919097585699598ad0ca49fe3f2117b5", "iss_html_url": "https://github.com/3b1b/manim/issues/659", "iss_label": "", "title": "Problem with FadeOutAndShift", "body": "t3 text is not going through FadeOutAndShift.\r\nAlso tell me how I can FadeOutAndShift t1 and t3 together\r\n\r\n```# python -m manim try3.py test1 -pm\r\n\r\nfrom manimlib.imports import *\r\n\r\nclass test1(Scene):\r\n\tdef construct(self):\r\n\t\tt1=TextMobject(\"Hi!\")\r\n\t\tt2=TextMobject(\"My name is\")\r\n\t\tt3=TextMobject(\"Girish\")\r\n\r\n\t\tt1.set_color(RED)\r\n\t\tt3.set_color(BLUE)\r\n\r\n\t\tself.play(Write(t1), run_time=2)\r\n\t\tself.play(ApplyMethod(t1.shift, 1*UP))\r\n\t\tself.play(FadeIn(t2))\r\n\t\tself.play(Transform(t2, t3), run_time=2)\r\n\t\tself.wait(2)\r\n\t\tself.play(FadeOutAndShift(t1))\r\n self.play(FadeOutAndShift(t3))\r\n\t\t\r\n\r\n\t\t\r\n```", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "49582c35919097585699598ad0ca49fe3f2117b5", "files": [{"path": "manimlib/scene/scene.py", "Loc": {"('Scene', 'play', 455)": {"mod": []}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["manimlib/scene/scene.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "3b1b", "repo_name": "manim", "base_commit": "ce06e58505dff26cccd497a9bd43969f74ae0da9", "iss_html_url": "https://github.com/3b1b/manim/issues/274", "iss_label": "", "title": "ImportError: No module named animation", "body": "I've 
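Editor's note on the manim FadeOutAndShift record above: in the posted scene, `Transform(t2, t3)` leaves `t2` on screen (now drawn with `t3`'s appearance), so `FadeOutAndShift(t3)` targets a mobject that was never added; the last `play` call is also mis-indented. A corrected sketch of the ending, using the old manimlib API where `FadeOutAndShift` takes a direction:

```python
from manimlib.imports import *

class Test1Fixed(Scene):
    def construct(self):
        t1 = TextMobject("Hi!").set_color(RED)
        t2 = TextMobject("My name is")
        t3 = TextMobject("Girish").set_color(BLUE)

        self.play(Write(t1), run_time=2)
        self.play(ApplyMethod(t1.shift, UP))
        self.play(FadeIn(t2))
        # Transform(t2, t3) keeps t2 as the on-screen mobject, so it is
        # t2 (not t3) that must be faded out afterwards.
        self.play(Transform(t2, t3), run_time=2)
        self.wait(2)
        # Passing both animations to one play() fades them out together,
        # which also answers the "t1 and t3 together" question.
        self.play(FadeOutAndShift(t1, DOWN), FadeOutAndShift(t2, DOWN))
```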
installed manim on Win10. After run \"python extract_scene.py -s example_scenes.py\",\r\n\r\nthe next error is shown in the python interactive interpretor:\r\n\r\n> Traceback (most recent call last):\r\n File \"extract_scene.py\", line 15, in <module>\r\n from scene.scene import Scene\r\n File \"G:\\python\\manim\\scene\\scene.py\", line 16, in <module>\r\n from animation.transform import MoveToTarget\r\n File \"G:\\python\\manim\\animation\\transform.py\", line 8, in <module>\r\n from animation.animation import Animation\r\nImportError: No module named animation\r\n\r\nWhat I can do? I'm looking forward to get help to solve this problem. ", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "ce06e58505dff26cccd497a9bd43969f74ae0da9", "files": [{"path": "animation/transform.py", "Loc": {"(None, None, None)": {"mod": [8]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["animation/transform.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "3b1b", "repo_name": "manim", "base_commit": "55ece141e898577ce44e71d718212a1ee816ed74", "iss_html_url": "https://github.com/3b1b/manim/issues/658", "iss_label": "", "title": "How to add sound to video?", "body": "", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "55ece141e898577ce44e71d718212a1ee816ed74", "files": [{"path": "manimlib/scene/scene.py", "Loc": {"('Scene', 'add_sound', 543)": {"mod": []}}, "status": "modified"}, {"path": "old_projects/clacks/solution2/simple_scenes.py", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["old_projects/clacks/solution2/simple_scenes.py", "manimlib/scene/scene.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "3b1b", "repo_name": "manim", "base_commit": "97a0a707d759e0235450ea8c20f55a2529bd2973", "iss_html_url": "https://github.com/3b1b/manim/issues/878", "iss_label": "", "title": "Swedish characters not working", "body": "\r\n\r\nInclude at least:\r\n1. Steps to reproduce the issue (e.g. the command you ran)\r\n2. The unexpected behavior that occurred (e.g. error messages or screenshots)\r\n3. The environment (e.g. 
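Editor's note on the "How to add sound to video?" record above: the located method is `Scene.add_sound` in `manimlib/scene/scene.py`. A minimal usage sketch (the audio file name is hypothetical; the sound lands in the rendered movie file, not in live preview):

```python
from manimlib.imports import *

class SoundScene(Scene):
    def construct(self):
        # Scene.add_sound queues an audio file onto the rendered video's
        # sound track at the current point in the animation.
        self.add_sound("click.wav")  # hypothetical file in the project dir
        self.play(ShowCreation(Circle()))
```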
operating system and version of manim)\r\n\r\nI am new to manim and want to include swedish characters in a text, but it gives an error message when rendering.\r\nCode:\r\nclass Swe(Scene):\r\n\tdef construct(self):\r\n\t\ttext = TextMobject(r\"$\\\"o$\")\r\n\t\tself.add(text)\r\n\t\tself.wait()\r\n\r\nError message:\r\nTraceback (most recent call last):\r\n File \"C:\\Manim\\manim\\manim2020\\manimlib\\extract_scene.py\", line 153, in main\r\n scene = SceneClass(**scene_kwargs)\r\n File \"C:\\Manim\\manim\\manim2020\\manimlib\\scene\\scene.py\", line 54, in __init__\r\n self.construct()\r\n File \"Geony.py\", line 115, in construct\r\n text = TextMobject(r\"$\\\"o$\")\r\n File \"C:\\Manim\\manim\\manim2020\\manimlib\\mobject\\svg\\tex_mobject.py\", line 144, in __init__\r\n self, self.arg_separator.join(tex_strings), **kwargs\r\n File \"C:\\Manim\\manim\\manim2020\\manimlib\\mobject\\svg\\tex_mobject.py\", line 45, in __init__\r\n self.template_tex_file_body\r\n File \"C:\\Manim\\manim\\manim2020\\manimlib\\utils\\tex_file_writing.py\", line 19, in tex_to_svg_file\r\n dvi_file = tex_to_dvi(tex_file)\r\n File \"C:\\Manim\\manim\\manim2020\\manimlib\\utils\\tex_file_writing.py\", line 67, in tex_to_dvi\r\n \"See log output above or the log file: %s\" % log_file)\r\nException: Latex error converting to dvi. See log output above or the log file: C:\\Manim\\manim\\manim2020\\manimlib\\files\\Tex\\a26fbd67dc90adbc.log\r\n\r\nI am running python 3.7 (64 bit) and MikTex 2.9. All other features of manim are working fine.\r\nAny help would be much appreciated. Also, please keep in mind that I am new to manim and programing in general.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [{"Loc": [12], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "3b1b", "repo_name": "manim", "base_commit": "6880ebcbc2525b2f3c0731439bef7ff981b4b5b4", "iss_html_url": "https://github.com/3b1b/manim/issues/924", "iss_label": "", "title": "Reconsidering TEX_USE_CTEX / using XeLaTeX", "body": "I worked on manim back in 2018. I added the function for using CTeX (XeLaTeX package for Chinese) and XeLaTeX instead of LaTeX using the flag `TEX_USE_CTEX` in constants.py (#315).\r\n\r\nI have stopped working on manim since 2019, but over the months there are apparently more and more people who want to use LaTeX rendering in non-English languages, and even on very old issues I still occasionally see people asking how to do that... Looking back at my change I really should have **decoupled using CTeX (TeX template) from XeLaTeX (rendering tool)**. This has caused a *lot* of confusions and made weird hacks/fixes necessary for only using XeLaTeX, especially for a language that is not Chinese or English, with the most recent #858 and #840. It really should have been a flag `TEX_USE_XELATEX` and another flag `TEMPLATE_TEX_NAME`, and the flag `TEX_USE_CTEX` is such that when it is `True`, `TEX_USE_XELATEX` is `True` and `TEMPLATE_TEX_NAME` is `\"ctex_template.tex\"`; otherwise `TEX_USE_XELATEX` is `False` and `TEMPLATE_TEX_NAME` is `\"tex_template.tex\"`. Then set `TEMPLATE_TEX_FILE` to `os.path.join(os.path.dirname(os.path.realpath(__file__)), TEMPLATE_TEX_NAME)`. 
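Editor's note on the Swedish-characters record above: `\"o` is a text-mode TeX accent, so wrapping it in `$...$` puts it in math mode, where LaTeX rejects it; that is exactly the "Latex error converting to dvi" being reported. Since `TextMobject` already compiles in text mode, dropping the dollar signs is enough:

```python
from manimlib.imports import *

class Swe(Scene):
    def construct(self):
        # \"o produces an o with umlaut in text mode; it must not sit
        # inside $...$ math mode, which is what broke the original code.
        text = TextMobject(r'\"o')
        self.add(text)
        self.wait()
```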
Corresponding logic: constants.py lines 74\u201379.\r\n\r\nIt might be even better to set it dynamically using a function or as a parameter of `TexMobject()`, (see issues like #891). I looked at the source code and this is definitely possible. The options I can think of are\r\n1. Use the current `TEX_USE_CTEX`\r\n2. Add flags `TEX_USE_XELATEX` and `TEMPLATE_TEX_NAME`, and rework `TEX_USE_CTEX`\r\n3. Add parameters for `TexMobject()` like `use_xelatex=False` and `tex_template=\"tex_template.tex\"`\r\n4. Use the flags of 2. as a default, and make it possible to change the default using 3.\r\n\r\nNot really sure if this is the right place to raise this issue.\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"pro": "ManimCommunity"}, {"pro": "manim", "path": ["manim/utils/tex_templates.py"]}], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["manim/utils/tex_templates.py"], "doc": [], "test": [], "config": [], "asset": ["ManimCommunity"]}}, {"organization": "3b1b", "repo_name": "manim", "base_commit": "49582c35919097585699598ad0ca49fe3f2117b5", "iss_html_url": "https://github.com/3b1b/manim/issues/660", "iss_label": "", "title": "ColorByCaracter help ", "body": "I want to color only theta of ```{ e }^{ i\\theta }```\r\n\r\nI was going through ColorByCaracter in 3_text_like_arrays.py . \r\nBut I fail to understand how you people separate the tex formula into arrays. I know about arrays but I can only copy the tex code from [Daum Equation Editor](http://s1.daumcdn.net/editor/fp/service_nc/pencil/Pencil_chromestore.html) and paste it. I don't know how to divide them into arrays.\r\n\r\nPlease help me.\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "49582c35919097585699598ad0ca49fe3f2117b5", "files": [{"path": "manimlib/mobject/svg/tex_mobject.py", "Loc": {"('TexMobject', None, 132)": {"mod": []}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["manimlib/mobject/svg/tex_mobject.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "3b1b", "repo_name": "manim", "base_commit": "32abbb9371308e8dff7410de387fe78e64b6fe7a", "iss_html_url": "https://github.com/3b1b/manim/issues/700", "iss_label": "", "title": "OSError: No file matching Suv.svg in image directory", "body": "I've tried putting the .SVG image into */media/designs/svg_images. 
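Editor's note on the TEX_USE_CTEX record above: the proposal's option 2 translates directly into a few lines of `constants.py`. A sketch of that decoupling, exactly as the issue describes it (flag names are the ones proposed there):

```python
import os

# Sketch of the proposed decoupling: TEX_USE_CTEX stays backward
# compatible by implying XeLaTeX plus the CTeX template, while the two
# new names can also be set independently.
TEX_USE_CTEX = False
TEX_USE_XELATEX = TEX_USE_CTEX  # CTeX requires XeLaTeX
TEMPLATE_TEX_NAME = "ctex_template.tex" if TEX_USE_CTEX else "tex_template.tex"
TEMPLATE_TEX_FILE = os.path.join(
    os.path.dirname(os.path.realpath(__file__)), TEMPLATE_TEX_NAME
)
```

Options 3 and 4 would move the same two knobs onto `TexMobject()` as per-object parameters, with these module-level values as defaults.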
But when I want to quote it in the .py file it still reports errors:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/jason/Documents/manim/manimlib/extract_scene.py\", line 155, in main\r\n scene = SceneClass(**scene_kwargs)\r\n File \"/home/jason/Documents/manim/manimlib/scene/scene.py\", line 53, in __init__\r\n self.construct()\r\n File \"SVGTEST.py\", line 44, in construct\r\n height=height_size\r\n File \"/home/jason/Documents/manim/manimlib/mobject/svg/svg_mobject.py\", line 45, in __init__\r\n self.ensure_valid_file()\r\n File \"/home/jason/Documents/manim/manimlib/mobject/svg/svg_mobject.py\", line 63, in ensure_valid_file\r\n self.file_name)\r\nOSError: No file matching MYSVG.svg in image directory\r\n\r\n```\r\n(Manjaro Linux, Texlive)", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "32abbb9371308e8dff7410de387fe78e64b6fe7a", "files": [{"path": "manimlib/mobject/svg/svg_mobject.py", "Loc": {"('SVGMobject', 'ensure_valid_file', 49)": {"mod": []}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["manimlib/mobject/svg/svg_mobject.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "3b1b", "repo_name": "manim", "base_commit": "b74e5ca254bccc1575b4c7b7de3c1cb2010aac75", "iss_html_url": "https://github.com/3b1b/manim/issues/694", "iss_label": "", "title": "can't graph trigonometric function of secx, cscx, cotx, tanx,...", "body": "source code:\r\n\r\nclass PlotFunctions(GraphScene):\r\n CONFIG = {\r\n \"x_min\" : -10,\r\n \"x_max\" : 10.3,\r\n \"y_min\" : -1.5,\r\n \"y_max\" : 1.5,\r\n \"graph_origin\" : ORIGIN ,\r\n \"function_color\" : RED ,\r\n \"axes_color\" : GREEN,\r\n \"x_labeled_nums\" :range(-10,12,2),\r\n\r\n }\r\n def construct(self):\r\n self.setup_axes(animate=True)\r\n func_graph=self.get_graph(self.func_to_graph,self.function_color)\r\n func_graph2=self.get_graph(self.func_to_graph2)\r\n vert_line = self.get_vertical_line_to_graph(TAU,func_graph,color=YELLOW)\r\n graph_lab = self.get_graph_label(func_graph, label = \"\\\\cos(x)\")\r\n graph_lab2=self.get_graph_label(func_graph2,label = \"\\\\sin(x)\", x_val=-10, direction=UP/2)\r\n two_pi = TexMobject(\"x = 2 \\\\pi\")\r\n label_coord = self.input_to_graph_point(TAU,func_graph)\r\n two_pi.next_to(label_coord,RIGHT+UP)\r\n\r\n\r\n\r\n self.play(ShowCreation(func_graph),ShowCreation(func_graph2))\r\n self.play(ShowCreation(vert_line), ShowCreation(graph_lab), ShowCreation(graph_lab2),ShowCreation(two_pi))\r\n\r\n\r\n def func_to_graph(self,x):\r\n #return np.cos(x)\r\n return np.tan(x)\r\n\r\n def func_to_graph2(self,x):\r\n return np.sin(x)\r\n\r\nI replaced \"return np.cos(x)\" to \"return np.tan(x)\"...i got this:\r\n![image](https://user-images.githubusercontent.com/36161299/63267544-e140a700-c2c4-11e9-9164-a14d37ee8673.png)\r\n\r\nand then I replaced \"return np.cos(x)\" to \"return np.sec(x)/cot(x)/csc(x)\"...i got this:\r\nAttributeError: module 'numpy' has no attribute 'sec'...\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "b74e5ca254bccc1575b4c7b7de3c1cb2010aac75", "files": [{"path": "manimlib/mobject/types/vectorized_mobject.py", "Loc": {"('VGroup', None, 868)": {"mod": []}}, "status": "modified"}, {"Loc": {"(None, None, None)": {"mod": [17]}}, "path": null}]}, "own_code_loc": [{"Loc": {"(None, None, None)": {"mod": 
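Editor's note on the SVG record above: `SVGMobject.ensure_valid_file` searches manim's configured image directories, so where the file must live depends on how `MEDIA_DIR` is set up. Passing an absolute path sidesteps the lookup entirely; a sketch with a hypothetical path:

```python
from manimlib.imports import *

class ShowSVG(Scene):
    def construct(self):
        # An absolute file_name bypasses the image-directory search that
        # raises "No file matching ... in image directory". The path below
        # is hypothetical; substitute the real location of the file.
        svg = SVGMobject(
            file_name="/home/jason/media/designs/svg_images/Suv.svg"
        )
        self.play(ShowCreation(svg))
```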
[17]}}, "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3\n+\n0", "info_type": "Code"}, "loctype": {"code": [null, "manimlib/mobject/types/vectorized_mobject.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "3b1b", "repo_name": "manim", "base_commit": "fc153bb49a529e8cbb02dd1514f06387cbf0ee6e", "iss_html_url": "https://github.com/3b1b/manim/issues/1206", "iss_label": "", "title": "Manim can't find my png file", "body": "I'm new to coding and am trying to learn manim, which I'm using on my macbook pro. I'm trying to create a scene where manim draws a png file I saved. I saved the png file as \"shirt.png\" in my manim folder. I then ran the following code:\r\n\r\n\r\n```\r\nfrom manimlib.imports import *\r\n\r\nclass OutFit(Scene):\r\n\tdef construct(self):\r\n\t\t\r\n\t\tshirt = ImageMobject(\"shirt\")\r\n\t\t\r\n\t\tself.play(Write(shirt))\r\n```\r\nI've looked up several ways of how to get manim to do images and some solutions, but since I'm pretty new at this I don't always understand the answers I've found from other people's issues or if it applies to mine. I keep getting this error response:\r\n\r\nraise IOError(\"File {} not Found\".format(file_name))\r\nOSError: File shirt not Found\r\n\r\nAny help is much appreciated. \r\n\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "fc153bb49a529e8cbb02dd1514f06387cbf0ee6e", "files": [{"path": "manimlib/animation/fading.py", "Loc": {"('FadeIn', None, 34)": {"mod": []}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["manimlib/animation/fading.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "3b1b", "repo_name": "manim", "base_commit": "64c960041b5b9dcb0aac50019268a3bdf69d9563", "iss_html_url": "https://github.com/3b1b/manim/issues/608", "iss_label": "", "title": "What is VMobject exactly?", "body": "Can anyone explain what is the purpose of `VMobject` and how it differs from `Mobject`?\r\n\r\nI am trying to make some `old_projects` work. For example, I had to change `PMobject` to inherit from `VMobject` instead of `Mobject` in order to fix `NumberLineScene`. I do not know if it is correct thing to do or how will it affect the other scripts because I am unable to find the fundamental differences between the two objects. The wiki does not explain a lot, so please tell some detailed information.\r\n\r\nI dug commit histories and saw \r\n\r\n> \"Starting to vectorize all things\"\r\n\r\n kind of commit messages when the `VMobject` class is added to the engine. 
What does it mean \"Vectorize\" in this context?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "64c960041b5b9dcb0aac50019268a3bdf69d9563", "files": [{"path": "manimlib/mobject/types/vectorized_mobject.py", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["manimlib/mobject/types/vectorized_mobject.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "All-Hands-AI", "repo_name": "OpenHands", "base_commit": "a2779fe2f6c9ab29508676f21242b1c6b88e2f67", "iss_html_url": "https://github.com/All-Hands-AI/OpenHands/issues/5229", "iss_label": "documentation\nenhancement\nfix-me", "title": "[Documentation]: Micro-agents", "body": "**What problem or use case are you trying to solve?**\r\n\r\nCurrently in the `openhands/agenthub/codeact_agent` directory, we have an implementation of micro agents, but this is not documented.\r\n\r\nTo do so, we can:\r\n1. read the implementation of codeact agent\r\n2. read an example microagent in `openhands/agenthub/codeact_agent/micro/github.md`\r\n3. add documentation to `openhands/agenthub/codeact_agent/README.md`\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "a2779fe2f6c9ab29508676f21242b1c6b88e2f67", "files": [{"path": "microagents/README.md", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["microagents/README.md"], "test": [], "config": [], "asset": []}}, {"organization": "All-Hands-AI", "repo_name": "OpenHands", "base_commit": "08a2dfb01af1aec6743f5e4c23507d63980726c0", "iss_html_url": "https://github.com/All-Hands-AI/OpenHands/issues/635", "iss_label": "bug", "title": "Ollama support issue.", "body": "<!-- You MUST fill out this template. We will close issues that don't include enough information to reproduce -->\r\n#### Describe the bug\r\n\r\nWhen trying to configure OpenDevin to run with Ollama there are requests that are being sent to the ollama server like this:\r\n\r\n\r\n![image](https://github.com/OpenDevin/OpenDevin/assets/76570167/1931e068-0341-429b-8c4e-0dd2da36f54c)\r\n\r\n\r\nThe post request should look like this:\r\n`\"POST /chat/completions HTTP/1.1\"`\r\n\r\n<!-- a short description of the problem -->\r\n\r\n#### Setup and configuration\r\n**Current version**:\r\n<!-- run `git log -n 1` to see this -->\r\n```bash\r\ncommit 5c640c99cafb3c718dad60f377f3a725a8bab1de (HEAD -> local-llm-flag, origin/main, origin/HEAD, main)\r\n```\r\n\r\n<!-- tell us everything about your environment -->\r\n**My config.toml and environment vars** (be sure to redact API keys):\r\n```toml\r\nWORKSPACE_DIR=\"./workspace\"\r\nLLM_BASE_URL=\"http://localhost:8000\"\r\nLLM_MODEL=\"ollama/starcoder2:15b\"\r\nLLM_EMBEDDING_MODEL=\"ollama/starcoder2:15b\"\r\n```\r\n\r\n**My model and agent** (you can see these settings in the UI):\r\n* Model: ollama/starcoder2\r\n* Agent: MonologueAgent\r\n\r\n**Commands I ran to install and run OpenDevin**:\r\n```\r\ngit clone ...\r\nmake build\r\nmake start-backend\r\nmake start-frontend\r\n```\r\n\r\n**Steps to Reproduce**:\r\n1. In `opendevin/llm/llm.py` in `__init__` replace `self.model = model if model else DEFAULT_MODEL_NAME` with `self.model_name = DEFAULT_MODEL_NAME`\r\n2. 
Run your local model on litellm `litellm --model ollama/starcoder2:15b --port 8000`\r\n3. Run `make build` then `make start-backend` and `make start-frontend`\r\n4. Ask devin to do anything ex 'make a hello world script in python'\r\n5. Observe 404 errors spammed in litellm server log\r\n\r\n**Logs, error messages, and screenshots**:\r\nThis is a log from the backend server running from `make start-backend` steps 0-99 all look the same.\r\n```\r\n==============\r\nSTEP 99\r\n\r\nPLAN:\r\nplease make a simple flask app that says hello world.\r\nTraceback (most recent call last):\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 1436, in function_with_retries\r\n response = original_function(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 386, in _completion\r\n raise e\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 334, in _completion\r\n deployment = self.get_available_deployment(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 2313, in get_available_deployment\r\n raise ValueError(f\"No healthy deployment available, passed model={model}\")\r\nValueError: No healthy deployment available, passed model=ollama/starcoder2:15b\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/quimbo/OpenDevin/agenthub/monologue_agent/utils/monologue.py\", line 31, in condense\r\n resp = llm.completion(messages=messages)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 328, in completion\r\n raise e\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 325, in completion\r\n response = self.function_with_fallbacks(**kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 1419, in function_with_fallbacks\r\n raise original_exception\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 1344, in function_with_fallbacks\r\n response = self.function_with_retries(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 1496, in function_with_retries\r\n raise e\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 1462, in function_with_retries\r\n response = original_function(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 386, in _completion\r\n raise e\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 334, in _completion\r\n deployment = self.get_available_deployment(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File 
\"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 2313, in get_available_deployment\r\n raise ValueError(f\"No healthy deployment available, passed model={model}\")\r\nValueError: No healthy deployment available, passed model=ollama/starcoder2:15b\r\n\r\nERROR:\r\nError condensing thoughts: No healthy deployment available, passed model=ollama/starcoder2:15b\r\nTraceback (most recent call last):\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 1436, in function_with_retries\r\n response = original_function(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 386, in _completion\r\n raise e\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 334, in _completion\r\n deployment = self.get_available_deployment(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 2313, in get_available_deployment\r\n raise ValueError(f\"No healthy deployment available, passed model={model}\")\r\nValueError: No healthy deployment available, passed model=ollama/starcoder2:15b\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/quimbo/OpenDevin/agenthub/monologue_agent/utils/monologue.py\", line 31, in condense\r\n resp = llm.completion(messages=messages)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 328, in completion\r\n raise e\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 325, in completion\r\n response = self.function_with_fallbacks(**kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 1419, in function_with_fallbacks\r\n raise original_exception\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 1344, in function_with_fallbacks\r\n response = self.function_with_retries(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 1496, in function_with_retries\r\n raise e\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 1462, in function_with_retries\r\n response = original_function(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 386, in _completion\r\n raise e\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 334, in _completion\r\n deployment = self.get_available_deployment(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 2313, in get_available_deployment\r\n raise ValueError(f\"No healthy deployment available, 
passed model={model}\")\r\nValueError: No healthy deployment available, passed model=ollama/starcoder2:15b\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/quimbo/OpenDevin/opendevin/controller/agent_controller.py\", line 112, in step\r\n action = self.agent.step(self.state)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/quimbo/OpenDevin/agenthub/monologue_agent/agent.py\", line 153, in step\r\n self._add_event(prev_action.to_dict())\r\n File \"/home/quimbo/OpenDevin/agenthub/monologue_agent/agent.py\", line 96, in _add_event\r\n self.monologue.condense(self.llm)\r\n File \"/home/quimbo/OpenDevin/agenthub/monologue_agent/utils/monologue.py\", line 36, in condense\r\n raise RuntimeError(f\"Error condensing thoughts: {e}\")\r\nRuntimeError: Error condensing thoughts: No healthy deployment available, passed model=ollama/starcoder2:15b\r\n\r\nOBSERVATION:\r\nError condensing thoughts: No healthy deployment available, passed model=ollama/starcoder2:15b\r\nExited before finishing\r\n```\r\n\r\n#### Additional Context\r\n\r\nLitellm for local models is expecting api calls in the following format:\r\n\r\n![image](https://github.com/OpenDevin/OpenDevin/assets/76570167/67b10c26-a9e6-44a1-a79e-908fc7d3749f)\r\n\r\nFrom: `http://localhost:8000/#/`\r\n\r\nI know that the problem is whatever is managing the api calls is set to call `/api/generate/` because this is the convention, but for local server that is not supported. I do not know where to look to fix this, any ideas?\r\n\r\nThe server responds when I test it like this:\r\n```\r\ndef query_local_llm(prompt, limit=TOKEN_LIMIT):\r\n # Replace with your actual server address and port\r\n url = \"http://0.0.0.0:8000/chat/completions\"\r\n payload = {\r\n \"model\": \"ollama/mistral\",\r\n \"messages\" : [{\"content\": prompt, \"role\": \"user\"}],\r\n \"max_tokens\": limit\r\n }\r\n response = requests.post(url, json=payload)\r\n```\r\n![image](https://github.com/OpenDevin/OpenDevin/assets/76570167/b9bae877-5bd4-4864-b672-9678bb9a294e)\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "08a2dfb01af1aec6743f5e4c23507d63980726c0", "files": [{"path": "opendevin/llm/LOCAL_LLM_GUIDE.md", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": ["opendevin/llm/LOCAL_LLM_GUIDE.md"], "test": [], "config": [], "asset": []}}, {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "d636e5baa8a077e2869bfe3b76525efec42392ec", "iss_html_url": "https://github.com/scrapy/scrapy/issues/2276", "iss_label": "", "title": "can LinkExtractor extract scrapy.link with node info", "body": "the html is like below, i want to extract the link `/example/category/pg{page}/`, but the `scrapy.link` does not contains the node info(`currentPage` and `totalPage`), how can i extract the link with the node info \n\n``` html\n<div class=\"page-box\">\n <div page-url=\"/example/category/pg{page}/\"\n totalPage=\"35\"\n currentPage=\"1\" \n </div>\n</div>\n```\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "d636e5baa8a077e2869bfe3b76525efec42392ec", "files": [{"path": "scrapy/http/response/text.py", "Loc": {"('TextResponse', 'css', 117)": {"mod": []}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": 
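Editor's note on the Ollama record above: the reporter's own probe already shows the litellm proxy answering on `/chat/completions`, so a quick check of that route isolates the fault to whichever component is still calling `/api/generate`. A cleaned-up version of that probe, with host, port, and model taken from the report:

```python
import requests

# If this returns 200, the litellm proxy is healthy and the 404s in its
# log come from the client side still using the /api/generate convention.
resp = requests.post(
    "http://localhost:8000/chat/completions",
    json={
        "model": "ollama/starcoder2:15b",
        "messages": [{"role": "user", "content": "ping"}],
        "max_tokens": 8,
    },
    timeout=60,
)
print(resp.status_code)
print(resp.json())
```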
{"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["scrapy/http/response/text.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "892467cb8a40c54840284a08d0f98ab1b3af7bc4", "iss_html_url": "https://github.com/scrapy/scrapy/issues/4565", "iss_label": "", "title": "AttributeError: module 'resource' has no attribute 'getrusage'", "body": "version : Scrapy 2.1.0\r\n\r\n```\r\n2020-05-11 20:05:28 [scrapy.core.engine] INFO: Spider opened\r\n2020-05-11 20:05:28 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)\r\n2020-05-11 20:05:28 [dy] INFO: Spider opened: dy\r\n2020-05-11 20:05:28 [scrapy.utils.signal] ERROR: Error caught on signal handler: <bound method MemoryUsage.engine_started of <scrapy.extensions.memusage.MemoryUsage object at 0x0000000004D3A358>>\r\nTraceback (most recent call last):\r\n File \"D:\\microsoft\\python37\\lib\\site-packages\\scrapy\\utils\\defer.py\", line 161, in maybeDeferred_coro\r\n result = f(*args, **kw)\r\n File \"D:\\microsoft\\python37\\lib\\site-packages\\pydispatch\\robustapply.py\", line 55, in robustApply\r\n return receiver(*arguments, **named)\r\n File \"D:\\microsoft\\python37\\lib\\site-packages\\scrapy\\extensions\\memusage.py\", line 55, in engine_started\r\n self.crawler.stats.set_value('memusage/startup', self.get_virtual_size())\r\n File \"D:\\microsoft\\python37\\lib\\site-packages\\scrapy\\extensions\\memusage.py\", line 48, in get_virtual_size\r\n size = self.resource.getrusage(self.resource.RUSAGE_SELF).ru_maxrss\r\nAttributeError: module 'resource' has no attribute 'getrusage'\r\n```\r\n\r\n```\r\n2020-05-11 20:05:43 [scrapy.core.engine] INFO: Closing spider (finished)\r\n2020-05-11 20:05:43 [scrapy.statscollectors] INFO: Dumping Scrapy stats:\r\n{'downloader/request_bytes': 6751,\r\n 'downloader/request_count': 14,\r\n 'downloader/request_method_count/GET': 14,\r\n 'downloader/response_bytes': 12380415,\r\n 'downloader/response_count': 14,\r\n 'downloader/response_status_count/200': 10,\r\n 'downloader/response_status_count/302': 4,\r\n 'elapsed_time_seconds': 14.631021,\r\n 'finish_reason': 'finished',\r\n 'finish_time': datetime.datetime(2020, 5, 11, 12, 5, 43, 378200),\r\n 'item_scraped_count': 65,\r\n 'log_count/DEBUG': 85,\r\n 'log_count/ERROR': 1,\r\n 'log_count/INFO': 9,\r\n 'request_depth_max': 1,\r\n 'response_received_count': 10,\r\n 'scheduler/dequeued': 6,\r\n 'scheduler/dequeued/memory': 6,\r\n 'scheduler/enqueued': 6,\r\n 'scheduler/enqueued/memory': 6,\r\n 'start_time': datetime.datetime(2020, 5, 11, 12, 5, 28, 747179)}\r\n2020-05-11 20:05:43 [scrapy.core.engine] INFO: Spider closed (finished)\r\n2020-05-11 20:05:43 [scrapy.utils.signal] ERROR: Error caught on signal handler: <bound method MemoryUsage.engine_stopped of <scrapy.extensions.memusage.MemoryUsage object at 0x0000000004D3A358>>\r\nTraceback (most recent call last):\r\n File \"D:\\microsoft\\python37\\lib\\site-packages\\scrapy\\utils\\defer.py\", line 161, in maybeDeferred_coro\r\n result = f(*args, **kw)\r\n File \"D:\\microsoft\\python37\\lib\\site-packages\\pydispatch\\robustapply.py\", line 55, in robustApply\r\n return receiver(*arguments, **named)\r\n File \"D:\\microsoft\\python37\\lib\\site-packages\\scrapy\\extensions\\memusage.py\", line 70, in engine_stopped\r\n for tsk in self.tasks:\r\nAttributeError: 'MemoryUsage' object has no attribute 'tasks'\r\n```\r\n\r\n(edited 
for text formatting)", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "892467cb8a40c54840284a08d0f98ab1b3af7bc4", "files": [{"path": "scrapy/commands/settings.py", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["scrapy/commands/settings.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "psf", "repo_name": "requests", "base_commit": "27b55a74d7b9bd2f8c60fd0ee342bcbbf40e0a66", "iss_html_url": "https://github.com/psf/requests/issues/775", "iss_label": "", "title": "Content marked as consumed in 0.13.6", "body": "Content is immediately marked as consumed in 0.13.6, causing calls to e.g. response.iter_content() to throw an error.\n\nTest code (tested with python 2.6):\n\n```\nimport requests\n\nr = requests.get('http://docs.python-requests.org/')\nif r._content_consumed:\n print 'consumed'\nelse:\n print 'not consumed'\n```\n\nIn 0.13.5 this prints:\nnot consumed\n\nIn 0.13.6 this prints:\nconsumed\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "27b55a74d7b9bd2f8c60fd0ee342bcbbf40e0a66", "files": [{"path": "requests/models.py", "Loc": {"('Request', '__init__', 47)": {"mod": [62]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["requests/models.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "psf", "repo_name": "requests", "base_commit": "2de907ad778de270911acaffe93883f0e2729a4a", "iss_html_url": "https://github.com/psf/requests/issues/4602", "iss_label": "", "title": "Chunk-encoded request doesn't recognize iter_content generator", "body": "Passing a generator created by iter_content() as request data raises \"TypeError: sendall() argument 1 must be string or buffer, not generator\".\r\n\r\n## Expected Result\r\n\r\nThe POST request successfully delives the content from the GET request.\r\n\r\n## Actual Result\r\n\r\nA TypeError is raised:\r\n```\r\nTraceback (most recent call last):\r\n File \"..\\test.py\", line 7, in <module>\r\n PostForward(\"http://myhost/img/foo.png\", \"http://myotherhost/convert\")\r\n File \"..\\test.py\", line 6, in PostForward\r\n return requests.post(url=dst, data=data, headers={'Content-Length': length})\r\n File \"C:\\Python27\\lib\\site-packages\\requests\\api.py\", line 112, in post\r\n return request('post', url, data=data, json=json, **kwargs)\r\n File \"C:\\Python27\\lib\\site-packages\\requests\\api.py\", line 58, in request\r\n return session.request(method=method, url=url, **kwargs)\r\n File \"C:\\Python27\\lib\\site-packages\\requests\\sessions.py\", line 508, in request\r\n resp = self.send(prep, **send_kwargs)\r\n File \"C:\\Python27\\lib\\site-packages\\requests\\sessions.py\", line 618, in send\r\n r = adapter.send(request, **kwargs)\r\n File \"C:\\Python27\\lib\\site-packages\\requests\\adapters.py\", line 440, in send\r\n timeout=timeout\r\n File \"C:\\Python27\\lib\\site-packages\\urllib3\\connectionpool.py\", line 601, in urlopen\r\n chunked=chunked)\r\n File \"C:\\Python27\\lib\\site-packages\\urllib3\\connectionpool.py\", line 357, in _make_request\r\n conn.request(method, url, **httplib_request_kw)\r\n File \"C:\\Python27\\lib\\httplib.py\", line 1042, in request\r\n 
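Editor's note on the `getrusage` record above: the standard-library `resource` module is Unix-only, and on Windows a same-named third-party package can shadow the import while lacking `getrusage`, which matches both tracebacks. Scrapy's documented `MEMUSAGE_ENABLED` setting turns the extension off cleanly; a settings.py sketch:

```python
import sys

# The MemoryUsage extension depends on resource.getrusage, which does
# not exist on Windows, so disable it there.
if sys.platform == "win32":
    MEMUSAGE_ENABLED = False
```

Running `pip uninstall resource` in the affected environment is also worth trying, in case a stray PyPI package named `resource` is shadowing the stdlib module.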
self._send_request(method, url, body, headers)\r\n File \"C:\\Python27\\lib\\httplib.py\", line 1082, in _send_request\r\n self.endheaders(body)\r\n File \"C:\\Python27\\lib\\httplib.py\", line 1038, in endheaders\r\n self._send_output(message_body)\r\n File \"C:\\Python27\\lib\\httplib.py\", line 886, in _send_output\r\n self.send(message_body)\r\n File \"C:\\Python27\\lib\\httplib.py\", line 858, in send\r\n self.sock.sendall(data)\r\n File \"C:\\Python27\\lib\\socket.py\", line 228, in meth\r\n return getattr(self._sock,name)(*args)\r\nTypeError: sendall() argument 1 must be string or buffer, not generator\r\n```\r\n\r\n## Reproduction Steps\r\n\r\n```python\r\nimport requests\r\ndef PostForward(src, dst):\r\n\twith requests.get(url=src, stream=True) as srcResponse:\r\n\t\tlength = srcResponse.headers['Content-Length']\r\n\t\tdata = srcResponse.iter_content(1024)\r\n\t\treturn requests.post(url=dst, data=data, headers={'Content-Length': length})\r\nPostForward(\"http://myhost/img/foo.png\", \"http://myotherhost/convert\")\r\n```\r\n\r\n## System Information\r\n\r\n $ python -m requests.help\r\n\r\n```\r\n{\r\n \"chardet\": {\r\n \"version\": \"3.0.4\"\r\n },\r\n \"cryptography\": {\r\n \"version\": \"\"\r\n },\r\n \"idna\": {\r\n \"version\": \"2.6\"\r\n },\r\n \"implementation\": {\r\n \"name\": \"CPython\",\r\n \"version\": \"2.7.14\"\r\n },\r\n \"platform\": {\r\n \"release\": \"10\",\r\n \"system\": \"Windows\"\r\n },\r\n \"pyOpenSSL\": {\r\n \"openssl_version\": \"\",\r\n \"version\": null\r\n },\r\n \"requests\": {\r\n \"version\": \"2.18.4\"\r\n },\r\n \"system_ssl\": {\r\n \"version\": \"100020bf\"\r\n },\r\n \"urllib3\": {\r\n \"version\": \"1.22\"\r\n },\r\n \"using_pyopenssl\": false\r\n}\r\n```", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"pro": "requests"}, {"pro": "toolbelt", "path": ["requests_toolbelt/streaming_iterator.py"]}], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "2", "info_type": "Code"}, "loctype": {"code": ["requests_toolbelt/streaming_iterator.py"], "doc": [], "test": [], "config": [], "asset": ["requests"]}}, {"organization": "psf", "repo_name": "requests", "base_commit": "f17ef753d2c1f4db0d7f5aec51261da1db20d611", "iss_html_url": "https://github.com/psf/requests/issues/3031", "iss_label": "Needs Info\nQuestion/Not a bug", "title": "[WinError 10048] Only one usage of each socket address ...", "body": "I notice that despite using requests.Session() - I still seem to be creating new connections/sockets which eventually exhaust (TIME_WAIT) and I get the following error:\n\n> [WinError 10048] Only one usage of each socket address (protocol/network address/port) is normally permitted',))\n\n```\ns = requests.Session()\ndata = zip(url_routes, cycle(s))\ncalc_routes = pool.map(processRequest, data)\n\n```\n\nI posted a bit more [here](http://stackoverflow.com/questions/35793908/python-multiprocessing-associate-a-process-with-a-session), however not sure how to address this\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [{"Loc": [8], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "psf", "repo_name": "requests", "base_commit": "6f659a41794045292b836859f1281d33eeed8260", 
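Editor's note on the chunk-encoded request record above: the issue's linked fix location is `requests_toolbelt/streaming_iterator.py`, whose `StreamingIterator` wraps a generator together with its known size so the body is streamed rather than handed to `sendall()` as a raw generator object. A sketch of the reporter's function rewritten that way:

```python
import requests
from requests_toolbelt.streaming_iterator import StreamingIterator

def post_forward(src, dst):
    # Wrap the chunk generator with its known length so requests can
    # stream the POST body instead of passing the generator to sendall().
    with requests.get(url=src, stream=True) as src_resp:
        length = int(src_resp.headers["Content-Length"])
        body = StreamingIterator(length, src_resp.iter_content(1024))
        return requests.post(url=dst, data=body)
```

Alternatively, per requests' own documentation, passing the generator as `data` without a user-supplied `Content-Length` header makes requests fall back to chunked transfer encoding.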
"iss_html_url": "https://github.com/psf/requests/issues/3740", "iss_label": "", "title": "File download weirdness", "body": "I noticed this issue building conda recipes. Conda uses requests to download files from the internet.\r\n\r\nThe file that is being fetched is: https://dakota.sandia.gov/sites/default/files/distributions/public/dakota-6.5-public.src.tar.gz\r\n(link found here: https://dakota.sandia.gov/download.html)\r\n\r\nDownloading with curl -O\r\nfilesize: 78MB\r\nmd5: 02c46e904d40bba6b308065db34c1ad7\r\n\r\nDownloading with urllib2 (from the standard library):\r\nfilesize: 78MB\r\nmd5: 02c46e904d40bba6b308065db34c1ad7\r\n\r\nDownloading with requests-2.12.1 (supplied with conda)\r\nfilesize: 248MB\r\nmd5: 41e4268140d850756812510512d8eee8\r\ntar -tf doesn't indicate any corruption.\r\n\r\nI'm not sure what is different with this particular URL, but the other files I tried with requests worked. I don't know where the extra 170MB is coming from?\r\n\r\ncode used to download files:\r\n```python\r\ndef download_file(url, fn):\r\n r = requests.get(url, stream=True)\r\n with open(fn, 'wb') as f:\r\n for chunk in r.iter_content(chunk_size=1024): \r\n if chunk:\r\n f.write(chunk)\r\n\r\ndef download_urllib2(url, fn):\r\n f = urllib2.urlopen(url)\r\n with open(fn, 'wb') as fh:\r\n for x in iter(lambda: f.read(1024), b''):\r\n fh.write(x)\r\n```", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "6f659a41794045292b836859f1281d33eeed8260", "files": [{"path": "docs/user/quickstart.rst", "Loc": {"(None, None, 166)": {"mod": [166]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["docs/user/quickstart.rst"], "test": [], "config": [], "asset": []}}, {"organization": "psf", "repo_name": "requests", "base_commit": "62176a1ca7207db37273365b4691ed599203b828", "iss_html_url": "https://github.com/psf/requests/issues/3849", "iss_label": "", "title": "Received response with content-encoding: gzip, but failed to decode it", "body": "```python\r\nimport requests\r\n\r\nrequests.get('http://gett.bike/')\r\n```\r\nThis code raises the following exception:\r\n```python\r\nContentDecodingError: ('Received response with content-encoding: gzip, but failed to decode it.',\r\nerror('Error -3 while decompressing data: incorrect data check',))\r\n```\r\nArch linux x64\r\nrequests==2.13.0\r\npython=3.6.0", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "62176a1ca7207db37273365b4691ed599203b828", "files": [{"path": "src/requests/api.py", "Loc": {"(None, 'request', 14)": {"mod": [24]}}, "status": "modified"}, {"Loc": {"(None, None, None)": {"mod": [4]}}, "path": null}]}, "own_code_loc": [{"Loc": {"(None, None, None)": {"mod": [4]}}, "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null, "src/requests/api.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "psf", "repo_name": "requests", "base_commit": "057722af23edf3f69bf7bdfed7c6c32cbe1ce2e7", "iss_html_url": "https://github.com/psf/requests/issues/3015", "iss_label": "", "title": "Ability to set timeout after response", "body": "For devs who use this great library, it would be very beneficial to be able to set the timeout AFTER initial connection. 
There are a few scenarios where this is useful but one of the main patterns/use cases is this:\n\n```\n\nimport requests\nimport socket\n\n# May or may not subclass threading.Thread\nclass Getter(object):\n def __init__(self):\n self.request = requests.get(url, stream=True)\n\n def run(self):\n with open(path, 'r+b') as file:\n\n bytes_consumed = 0\n while True:\n try:\n\n chunk = self.request.raw.read(size)\n if not chunk:\n break\n chunk_length = len(chunk)\n\n file.write(chunk)\n bytes_consumed += chunk_length\n\n except socket.timeout:\n # handle incomplete download by using range header next time, etc.\n```\n\nHandling incomplete downloads due to connection loss is common and especially important when downloading large or many files (or both). As you can see, this can be achieved in a fairly straightforward way. The issue is there is really no good way to write tests for this. Each method would involve OS specific code which would also be a no-go for CI services.\n\nWhat would be an option is the ability to set the timeout after establishing a connection. This way in a test you could do \"r.timeout = (None, 0.00001)\" and during reading it would simulate a timeout.\n\nTo my knowledge this is no way currently to inject a new Timeout class retroactively. Is this correct?\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [{"Loc": [20], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "psf", "repo_name": "requests", "base_commit": "1285f576ae0a848de27af10d917c19b60940d1fa", "iss_html_url": "https://github.com/psf/requests/issues/3774", "iss_label": "", "title": "bad handshake error with ssl3", "body": "I have an inhouse IIS server with ssl3 but an expired certificate, so I used requests without certificate verification and it was working fine with requests 2.11.1. But after I upgrade requests to 2.12.0, there was an error occured. 
\r\n\r\nthe code is:\r\n...\r\nrequests.get('https://10.192.8.89:8080/yps_report', verify=False)\r\n...\r\n\r\nerror message:\r\nTraceback (most recent call last):\r\n File \"c:\\python35\\lib\\site-packages\\requests\\packages\\urllib3\\contrib\\pyopenssl.py\", line 417, in wrap_socket\r\n cnx.do_handshake()\r\n File \"c:\\python35\\lib\\site-packages\\OpenSSL\\SSL.py\", line 1426, in do_handshake\r\n self._raise_ssl_error(self._ssl, result)\r\n File \"c:\\python35\\lib\\site-packages\\OpenSSL\\SSL.py\", line 1167, in _raise_ssl_error\r\n raise SysCallError(-1, \"Unexpected EOF\")\r\nOpenSSL.SSL.SysCallError: (-1, 'Unexpected EOF')\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"c:\\python35\\lib\\site-packages\\requests\\packages\\urllib3\\connectionpool.py\", line 594, in urlopen\r\n chunked=chunked)\r\n File \"c:\\python35\\lib\\site-packages\\requests\\packages\\urllib3\\connectionpool.py\", line 350, in _make_request\r\n self._validate_conn(conn)\r\n File \"c:\\python35\\lib\\site-packages\\requests\\packages\\urllib3\\connectionpool.py\", line 835, in _validate_conn\r\n conn.connect()\r\n File \"c:\\python35\\lib\\site-packages\\requests\\packages\\urllib3\\connection.py\", line 323, in connect\r\n ssl_context=context)\r\n File \"c:\\python35\\lib\\site-packages\\requests\\packages\\urllib3\\util\\ssl_.py\", line 324, in ssl_wrap_socket\r\n return context.wrap_socket(sock, server_hostname=server_hostname)\r\n File \"c:\\python35\\lib\\site-packages\\requests\\packages\\urllib3\\contrib\\pyopenssl.py\", line 424, in wrap_socket\r\n raise ssl.SSLError('bad handshake: %r' % e)\r\nssl.SSLError: (\"bad handshake: SysCallError(-1, 'Unexpected EOF')\",)\r\n...\r\n\r\nI tried to downgrade requests to 2.11.1 and the error was gone. I have no idea how to fix this.\r\nfrom requests.adapters import HTTPAdapter\nfrom requests.packages.urllib3.util.ssl_ import create_urllib3_context\n\n# This is the 2.11 Requests cipher string.\nCIPHERS = (\n 'ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+HIGH:'\n 'DH+HIGH:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+HIGH:RSA+3DES:!aNULL:'\n '!eNULL:!MD5'\n)\n\nclass DESAdapter(HTTPAdapter):\n def init_poolmanager(self, *args, **kwargs):\n context = create_urllib3_context(ciphers=CIPHERS)\n kwargs['ssl_context'] = context\n return super(DESAdapter, self).init_poolmanager(*args, **kwargs)\n\ns = requests.Session()\ns.mount('https://10.192.8.89', DESAdapter())", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [{"Loc": [41], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3\nNeed to put the user's code from one of the user's comments below into it", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ansible", "repo_name": "ansible", "base_commit": "a6d4c3ff7cf43c24be6622102cee834fc5096496", "iss_html_url": "https://github.com/ansible/ansible/issues/78759", "iss_label": "module\nsupport:core\nbug\naffects_2.9", "title": "\"Invalid data passed to 'loop', it requires a list, got this instead: <built-in method values of dict object at 0x7f63b782bf80>.", "body": "### Summary\r\n\r\nWhen trying to pass a variable called i.e.
sysctl.values to loop, I will get the above error.\r\n\r\n\r\n### Issue Type\r\n\r\nBug Report\r\n\r\n### Component Name\r\n\r\ndebug (only used for debugging)\r\n\r\n### Ansible Version\r\n\r\n```console\r\n$ ansible --version\r\nansible 2.9.27\r\n config file = /home/rf/.ansible.cfg\r\n configured module search path = ['/home/rf/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']\r\n ansible python module location = /usr/lib/python3.10/site-packages/ansible\r\n executable location = /usr/bin/ansible\r\n python version = 3.10.6 (main, Aug 2 2022, 00:00:00) [GCC 11.3.1 20220421 (Red Hat 11.3.1-2)]\r\n```\r\n\r\n\r\n### Configuration\r\n\r\n```console\r\n# if using a version older than ansible-core 2.12 you should omit the '-t all'\r\n$ ansible-config dump --only-changed -t all\r\n\r\n[I] </m/d/playground>-2-> ansible-config dump --only-changed\r\nANSIBLE_PIPELINING(/home/rf/.ansible.cfg) = True\r\nANSIBLE_SSH_ARGS(/home/rf/.ansible.cfg) = -o ControlMaster=auto -o ControlPersist=60s\r\nDEFAULT_FORKS(/home/rf/.ansible.cfg) = 50\r\nDEFAULT_HOST_LIST(/home/rf/.ansible.cfg) = ['/home/rf/hosts']\r\nINVENTORY_CACHE_ENABLED(/home/rf/.ansible.cfg) = True\r\n```\r\n\r\n\r\n### OS / Environment\r\n\r\nFedora 36\r\n\r\n### Steps to Reproduce\r\n\r\n<!--- Paste example playbooks or commands between quotes below -->\r\n```yaml (paste below)\r\n- name: Test\r\n hosts: localhost\r\n gather_facts: True\r\n tasks:\r\n - debug:\r\n msg: \"{{ item }}\"\r\n loop: \"{{ sysctl2 }}\"\r\n - debug:\r\n msg: \"{{ item }}\"\r\n loop: \"{{ sysctl.values }}\"\r\n vars:\r\n sysctl:\r\n values:\r\n - { name: \"net.ipv4.ip_forward\", value: \"1\" }\r\n sysctl2:\r\n - { name: \"net.ipv4.ip_forward\", value: \"1\" }\r\n```\r\n\r\n\r\n\r\n\r\n### Expected Results\r\n\r\nOutput of debug using sysctl.values\r\n\r\n### Actual Results\r\n\r\n```console\r\nPLAY [Test] ********************************************************************************************************************************************************************************************\r\n\r\nTASK [Gathering Facts] *********************************************************************************************************************************************************************************\r\nok: [localhost]\r\n\r\nTASK [debug] *******************************************************************************************************************************************************************************************\r\nok: [localhost] => (item={'name': 'net.ipv4.ip_forward', 'value': '1'}) => {\r\n \"msg\": {\r\n \"name\": \"net.ipv4.ip_forward\",\r\n \"value\": \"1\"\r\n }\r\n}\r\n\r\nTASK [debug] *******************************************************************************************************************************************************************************************\r\nfatal: [localhost]: FAILED! => {\"msg\": \"Invalid data passed to 'loop', it requires a list, got this instead: <built-in method values of dict object at 0x7f63b782bf80>. 
Hint: If you passed a list/dict of just one element, try adding wantlist=True to your lookup invocation or use q/query instead of lookup.\"}\r\n\r\nPLAY RECAP *********************************************************************************************************************************************************************************************\r\nlocalhost : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0\r\n```\r\n```\r\n\r\n\r\n### Code of Conduct\r\n\r\n- [X] I agree to follow the Ansible Code of Conduct", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [{"Loc": [59], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ansible", "repo_name": "ansible", "base_commit": "bcf9cd1e2a01d8e111a28db157ebc255a5592dca", "iss_html_url": "https://github.com/ansible/ansible/issues/20085", "iss_label": "cloud\naffects_2.1\nmodule\ndocker\nbug", "title": "docker_container task fail on exit code", "body": "Unless i'm missing something i expect that if I were to do something like the following the task would fail? But it does not \ud83d\ude1f \r\n\r\n```yaml\r\n tasks:\r\n docker_container:\r\n name: \"exit-test\"\r\n image: \"ubuntu:latest\"\r\n command: \"bash -c 'exit 123'\"\r\n```\r\n\r\n##### ISSUE TYPE\r\n - Bug Report\r\n\r\n##### COMPONENT NAME\r\ndocker_container\r\n\r\n##### ANSIBLE VERSION\r\n```\r\n2.1.1.0\r\n```\r\n\r\n##### OS / ENVIRONMENT\r\nN/A\r\n\r\n##### STEPS TO REPRODUCE\r\n```yaml\r\n tasks:\r\n docker_container:\r\n name: \"exit-test\"\r\n image: \"ubuntu:latest\"\r\n command: \"bash -c 'exit 123'\"\r\n```\r\n##### EXPECTED RESULTS\r\nShould fail the task\r\n\r\n##### ACTUAL RESULTS\r\nTask is ok.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "ansible", "pro": "ansible-modules-core", "path": ["cloud/docker/docker_container.py"]}], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "2", "info_type": "Code"}, "loctype": {"code": ["cloud/docker/docker_container.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ansible", "repo_name": "ansible", "base_commit": "d5324c11a0c389d2ede8375e2024cb37b9eb8ce5", "iss_html_url": "https://github.com/ansible/ansible/issues/19352", "iss_label": "affects_2.0\nmodule\nsupport:core\nbug\nfiles", "title": "Template update convert \\n to actual new line", "body": "##### ISSUE TYPE\r\n\r\n Bug Report\r\n\r\n##### COMPONENT NAME\r\n\r\ntemplate\r\n\r\n##### ANSIBLE VERSION\r\n\r\n2.0 and higher\r\nCONFIGURATION\r\n```\r\n[ssh_connection]\r\ncontrol_path = %(directory)s/%%C\r\n```\r\n##### OS / ENVIRONMENT\r\n\r\nMac OS X 10.11.6\r\nCentos 6.x, 7.x\r\nSUMMARY\r\n\r\nIn the input .j2 file, we substitute a variable with an environment variable that has a line/string that contains a grok expression containing `(?m)\\n` . The output generated by the template module in versions 2.0 and later, treats the \\n as actual line break. Where as versions up to 1.9.6 retains the literal `(?m)\\n` without replacing the \\n with an actual line break. We see the line break after we upgraded the Ansible version to 2.x.\r\n\r\nAny way we can work around this issue? 
Thank you for your help.\r\n##### STEPS TO REPRODUCE\r\n\r\nOur execution flow is probably not the nicest - we want to reengineer it soon. Basic steps:\r\n\r\n Run a shell script with ansible-playbook command that pass in an env variable with `(?m)\\n` literal.\r\n Playbook calls a main yaml file and assigns shell environment var to a included task yaml file.\r\n The task yaml file invokes the template module.\r\n\r\nIn the snippet below I stripped out other lines/vars for clarity.\r\n\r\nmain shell\r\n```\r\nset GROK_PATTERN_GENERAL_ERROR_PG=\"%{TIMESTAMP_ISO8601} ERROR \\[%{USER:handlerName}\\] %{USER:className}%{GREEDYDATA:errorline1}((?m)\\n%{USER:logerror}%{GREEDYDATA})\"\r\n```\r\n```\r\nansible-playbook -i ../common/host.inventory \\\r\n -${VERBOSE} \\\r\n t.yml \\\r\n ${CHECK_ONLY} \\\r\n --extra-vars \"hosts='${HOST}'\r\n xlogstash_grok_general_error='${GROK_PATTERN_GENERAL_ERROR_PG}'\r\n \"\r\n```\r\nt.yml\r\n```\r\n---\r\n- hosts: 127.0.0.1\r\n connection: local\r\n\r\n tasks:\r\n - include_vars: ../common/defaults/main.yml\r\n - name: generate logstash kafka logscan filter config file\r\n include: tasks/t.yml\r\n vars:\r\n logstash_grok_general_error: \"{{xlogstash_grok_general_error}}\"\r\n```\r\ntasks/t.yml\r\n```\r\n---\r\n - name: generate logstash kafka logscan filter config file\r\n template: src=../common/templates/my.conf.j2\r\n dest=\"./500-filter.conf\"\r\n```\r\nmy.conf.j2\r\n```\r\n grok {\r\n break_on_match => \"true\"\r\n match => [\r\n \"message\", \"{{logstash_grok_general_error}}\"\r\n ]\r\n }\r\n```\r\nNote the `(?m)\\n` are still on the same line.\r\n##### EXPECTED RESULTS\r\n```\r\n grok {\r\n break_on_match => \"true\"\r\n match => [\r\n \"message\", \"%{TIMESTAMP_ISO8601} ERROR \\[%{USER:handlerName}\\] %{USER:className}%{GREEDYDATA:errorline1}((?m)\\n%{USER:logerror}%{GREEDYDATA})\"\r\n ]\r\n }\r\n```\r\n##### ACTUAL RESULTS\r\n\r\nNote `(?m)\\n` now has the `\\n` as actual line break.\r\n```\r\n grok {\r\n break_on_match => \"true\"\r\n match => [\r\n \"message\", \"%{TIMESTAMP_ISO8601} ERROR \\[%{USER:handlerName}\\] %{USER:className}%{GREEDYDATA:errorline1}((?m)\r\n%{USER:logerror}%{GREEDYDATA})\"\r\n ]\r\n }\r\n```", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "d5324c11a0c389d2ede8375e2024cb37b9eb8ce5", "files": [{"path": "lib/ansible/template/__init__.py", "Loc": {}}, {"path": "t.yml", "Loc": {"(None, None, None)": {"mod": [60]}}}]}, "own_code_loc": [{"path": "t.yml", "Loc": {"(None, None, None)": {"mod": [60]}}}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "3\n+\n0", "info_type": "Code"}, "loctype": {"code": ["lib/ansible/template/__init__.py"], "doc": [], "test": [], "config": ["t.yml"], "asset": []}}, {"organization": "ansible", "repo_name": "ansible", "base_commit": "a29fcfa9952ff40e389a5e93c880bc2a23e3f2e7", "iss_html_url": "https://github.com/ansible/ansible/issues/73922", "iss_label": "python3\nmodule\nsupport:core\nbug\naffects_2.10", "title": "cron: Remove/delete an environment variable", "body": "### Summary\r\n\r\nWith `env=yes`, `cron` add environment variable (with the `name` & `value`) parameters.\r\nI though that having `env` + `state=absent` would remove said variable, but that's not the case (the cron file is actually removed).\r\nAs such there is no way to remove a variable and the more obvious way to attempt to do it results in a surprising result.\r\n\r\n### Issue Type\r\n\r\nBug Report\r\n\r\n### Component 
Name\r\n\r\nansible.builtin.cron\r\n\r\n### Ansible Version\r\n\r\n```console\r\n$ ansible --version\r\nansible 2.10.5\r\n config file = /home/user/.ansible.cfg\r\n configured module search path = ['/usr/share/ansible']\r\n ansible python module location = /home/user/.local/lib/python3.8/site-packages/ansible\r\n executable location = /home/user/.local/bin/ansible\r\n python version = 3.8.5 (default, Jan 27 2021, 15:41:15) [GCC 9.3.0]\r\n\r\n```\r\n\r\n\r\n### Configuration\r\n\r\n```console (paste below)\r\n$ ansible-config dump --only-changed\r\n\r\n```\r\n\r\n\r\n### OS / Environment\r\n\r\nUbuntu 20.04\r\n\r\n### Steps to Reproduce\r\n\r\n```yaml\r\n cron:\r\n cron_file: foobar\r\n user: root\r\n env: yes\r\n name: \"VAR\"\r\n value: \"False\"\r\n state: absent\r\n```\r\n\r\n\r\n### Expected Results\r\n\r\nThe \"VAR\" variable is removed from /etc/cron.d/foobar\r\n\r\n### Actual Results\r\n\r\n/etc/cron.d/foobar is removed.\r\nThere is no way to remove the \"VAR\" variable.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "a29fcfa9952ff40e389a5e93c880bc2a23e3f2e7", "files": [{"path": "lib/ansible/modules/cron.py", "Loc": {"(None, None, None)": {"mod": [15]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "4", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": ["lib/ansible/modules/cron.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ansible", "repo_name": "ansible", "base_commit": "7490044bbe28029afa9e3099d86eae9fda5f88b7", "iss_html_url": "https://github.com/ansible/ansible/issues/11351", "iss_label": "affects_2.0\naffects_2.3\nc:executor/playbook_executor\nsupport:core\nfeature\nP3", "title": "enable do/until with async tasks", "body": "##### ISSUE TYPE\nFeature Idea\n\n##### COMPONENT NAME\ncore\n\n##### ANSIBLE VERSION\n2.0\n\n##### CONFIGURATION\n\n\n##### OS / ENVIRONMENT\n\n\n##### SUMMARY\nWhen a task is marked as async, there is no way to loop until a condition is met.\nWith poll:0 and async_status you can poll for async task to complete but you cannot repeat the original async task itself until a condition is met.\n\n```\ncat /tmp/async-test.yml \n\n---\n# Run through the test of an async command\n\n- hosts: all\n tasks:\n - name: \"Check an async command\"\n command: /bin/sleep 3\n async: 5\n poll: 1\n register: command_result\n until: command_result.failed\n retries: 5\n delay: 10\n```\n\n```\n$ansible-playbook -i localhost, /tmp/async-test.yml \n ____________\n< PLAY [all] >\n ------------\n \\ ^__^\n \\ (oo)\\_______\n (__)\\ )\\/\\\n ||----w |\n || ||\n\n\n _________________\n< GATHERING FACTS >\n -----------------\n \\ ^__^\n \\ (oo)\\_______\n (__)\\ )\\/\\\n ||----w |\n || ||\n\n\nok: [localhost]\n ______________________________\n< TASK: Check an async command >\n ------------------------------\n \\ ^__^\n \\ (oo)\\_______\n (__)\\ )\\/\\\n ||----w |\n || ||\n\n\nfatal: [localhost] => error while evaluating conditional: command_result.failed: {% if command_result.failed %} True {% else %} False {% endif %}\n\nFATAL: all hosts have already failed -- aborting\n ____________\n< PLAY RECAP >\n ------------\n \\ ^__^\n \\ (oo)\\_______\n (__)\\ )\\/\\\n ||----w |\n || ||\n\n\n to retry, use: --limit @/opt/ashishkh/async-test.retry\n\nlocalhost : ok=1 changed=0 unreachable=2 failed=0 \n```\n\n\n##### STEPS TO REPRODUCE\n\n\n##### EXPECTED RESULTS\n\n\n##### ACTUAL RESULTS\n\n\n", "code": null, 
"pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [{"path": "/tmp/async-test.yml", "Loc": [33]}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "1", "info_type": "Config"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["/tmp/async-test.yml"], "asset": []}}, {"organization": "ansible", "repo_name": "ansible", "base_commit": "833970483100bfe89123a5718606234115921aec", "iss_html_url": "https://github.com/ansible/ansible/issues/67993", "iss_label": "cloud\naws\nopenstack\nmodule\nsupport:community\naffects_2.5\nbug\ntraceback\nsystem", "title": "Stickiness type 'lb_cookie' is not supported for target groups with the TCP protocol(unable to disable stickiness not supported in NLB)", "body": "##### SUMMARY\r\nWe are using Ansible 2.5 to deploy AWS resources in our environment. From March 02, 2019 our deployment is failing with the below error.\r\n\r\nERROR:\r\n=====\r\nTASK [immutable_server : target group for analytics-tst-plebos loadbalancer] ***\r\nAn exception occurred during task execution. To see the full traceback, use -vvv. The error was: InvalidConfigurationRequestException:\r\nAn error occurred (InvalidConfigurationRequest) when calling the ModifyTargetGroupAttributes operation: \r\nStickiness type 'lb_cookie' is not supported for target groups with the TCP protocol\r\n17:21:08 fatal: [localhost]: FAILED! => {\"changed\": false, \"error\": {\"code\": \"InvalidConfigurationRequest\", \"message\": \"Stickiness type 'lb_cookie'\r\nis not supported for target groups with the TCP protocol\", \"type\": \"Sender\"}, \"msg\": \"An error occurred (InvalidConfigurationRequest) \r\nwhen calling the ModifyTargetGroupAttributes operation: Stickiness type 'lb_cookie' is not supported for target groups with the TCP protocol\", \r\n\"response_metadata\": {\"http_headers\": {\"connection\": \"close\", \"content-length\": \"359\", \"content-type\": \"text/xml\", \"date\": \"Tue, 03 Mar 2020 11:51:08 GMT\", \r\n\"x-amzn-requestid\": \"23b0ca87-e0fb-4b84-b93b-ae5b1363df53\"}, \"http_status_code\": 400, \"request_id\": \"23b0ca87-e0fb-4b84-b93b-ae5b1363df53\", \"retry_attempts\": 0}}\r\n\r\n##### ISSUE TYPE\r\n- Bug Report - Unable to disable stickiness not supported in NLB\r\n\r\n##### COMPONENT NAME\r\n- name: \"target group for {{ server_name }} loadbalancer\"\r\n elb_target_group:\r\n state: present\r\n name: \"{{ server_name }}-elb\"\r\n protocol: tcp\r\n port: 80\r\n target_type: instance\r\n deregistration_delay_timeout: 35\r\n modify_targets: False\r\n vpc_id: \"{{ vpc_out.vpcs.0.id }}\"\r\n health_check_protocol: \"{{ load_balancer_ping_protocol | default('http') }}\"\r\n health_check_port: \"{{ load_balancer_ping_port | default('80') }}\"\r\n health_check_path: \"{{ load_balancer_ping_path | default('/elb/ping')}}\"\r\n health_check_interval: 30\r\n unhealthy_threshold_count: 2\r\n healthy_threshold_count: 2\r\n stickiness_enabled: False\r\n tags: \"{{ aws.tags_as_dict }}\"\r\n register: target_group_out\r\n\r\n##### ANSIBLE VERSION\r\n```paste below\r\nAnsible version = 2.5.0\r\n```\r\n\r\n##### CONFIGURATION\r\n<!--- Paste verbatim output from \"ansible-config dump --only-changed\" between quotes -->\r\n```paste below\r\n- name: \"target group for {{ server_name }} loadbalancer\"\r\n elb_target_group:\r\n state: present\r\n name: \"{{ server_name }}-elb\"\r\n protocol: tcp\r\n port: 80\r\n target_type: instance\r\n deregistration_delay_timeout: 35\r\n modify_targets: False\r\n 
vpc_id: \"{{ vpc_out.vpcs.0.id }}\"\r\n health_check_protocol: \"{{ load_balancer_ping_protocol | default('http') }}\"\r\n health_check_port: \"{{ load_balancer_ping_port | default('80') }}\"\r\n health_check_path: \"{{ load_balancer_ping_path | default('/elb/ping')}}\"\r\n health_check_interval: 30\r\n unhealthy_threshold_count: 2\r\n healthy_threshold_count: 2\r\n stickiness_enabled: False\r\n tags: \"{{ aws.tags_as_dict }}\"\r\n register: target_group_out\r\n\r\n```\r\n\r\n##### OS / ENVIRONMENT\r\nUbuntu 18.04 LTS / AWS environment\r\n\r\n\r\n##### STEPS TO REPRODUCE\r\nKindly use the below playbook to deploy loadbalancer using Ansible on AWS cloud.\r\n\r\n<!--- Paste example playbooks or commands between quotes below -->\r\n```yaml\r\n- name: \"target group for {{ server_name }} loadbalancer\"\r\n elb_target_group:\r\n state: present\r\n name: \"{{ server_name }}-elb\"\r\n protocol: tcp\r\n port: 80\r\n target_type: instance\r\n deregistration_delay_timeout: 35\r\n modify_targets: False\r\n vpc_id: \"{{ vpc_out.vpcs.0.id }}\"\r\n health_check_protocol: \"{{ load_balancer_ping_protocol | default('http') }}\"\r\n health_check_port: \"{{ load_balancer_ping_port | default('80') }}\"\r\n health_check_path: \"{{ load_balancer_ping_path | default('/elb/ping')}}\"\r\n health_check_interval: 30\r\n unhealthy_threshold_count: 2\r\n healthy_threshold_count: 2\r\n stickiness_enabled: False\r\n tags: \"{{ aws.tags_as_dict }}\"\r\n register: target_group_out\r\n```\r\n\r\n<!--- HINT: You can paste gist.github.com links for larger files -->\r\n\r\n##### EXPECTED RESULTS\r\nAn AWS Network loadbalancer will be created.\r\n\r\n\r\n##### ACTUAL RESULTS\r\nThe deployment fails with below error.\r\n\r\n<!--- Paste verbatim command output between quotes -->\r\n```paste below\r\n TASK [immutable_server : target group for analytics-tst-plebos loadbalancer] ***\r\n17:21:08 An exception occurred during task execution. To see the full traceback, use -vvv. The error was: InvalidConfigurationRequestException:\r\nAn error occurred (InvalidConfigurationRequest) when calling the ModifyTargetGroupAttributes operation: \r\nStickiness type 'lb_cookie' is not supported for target groups with the TCP protocol\r\n17:21:08 fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"error\": {\"code\": \"InvalidConfigurationRequest\", \"message\": \"Stickiness type 'lb_cookie'\r\nis not supported for target groups with the TCP protocol\", \"type\": \"Sender\"}, \"msg\": \"An error occurred (InvalidConfigurationRequest) \r\nwhen calling the ModifyTargetGroupAttributes operation: Stickiness type 'lb_cookie' is not supported for target groups with the TCP protocol\", \r\n\"response_metadata\": {\"http_headers\": {\"connection\": \"close\", \"content-length\": \"359\", \"content-type\": \"text/xml\", \"date\": \"Tue, 03 Mar 2020 11:51:08 GMT\", \r\n\"x-amzn-requestid\": \"23b0ca87-e0fb-4b84-b93b-ae5b1363df53\"}, \"http_status_code\": 400, \"request_id\": \"23b0ca87-e0fb-4b84-b93b-ae5b1363df53\", \"retry_attempts\": 0}}\r\n\r\n```\r\n\r\n##### References\r\nI can see a similar issue occurred for terraform users as well.\r\n\r\nhttps://github.com/terraform-providers/terraform-provider-aws/issues/10494\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [{"Loc": [20], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "6f718cee740e7cd423edd1136db78c5be49fa7c0", "iss_html_url": "https://github.com/ultralytics/yolov5/issues/2467", "iss_label": "question\nStale", "title": "Problems with weights", "body": "## \u2754Question\r\nHello, I have just run trainy.py script with my data and faced a problem - you wrote that weights are saved in runs directory, but in my case I have not found them. Everything is fine with hyp.yaml and opt.yaml but folder \"weights\" is empty. \r\nDo you have any guesses about this issue? \r\n\r\n## Additional context\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "6f718cee740e7cd423edd1136db78c5be49fa7c0", "files": [{"path": "train.py", "Loc": {"(None, None, None)": {"mod": [470, 454]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2\nweights\u627e\u4e0d\u89c1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["train.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "06831aa9e905e0fa703958f6b3f3db443cf477f3", "iss_html_url": "https://github.com/ultralytics/yolov5/issues/9079", "iss_label": "", "title": "Does adjusting the number of classes of a pretrained model work?", "body": "### Search before asking\r\n\r\n- [x] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions. *\r\n\r\n### Question\r\n\r\nHi everyone,\r\n\r\nI'm a bit confused about how to properly load a pretrained model with an adjusted number of classes for training with a custom dataset.\r\n\r\nOn the [Load YOLOv5 from PyTorch Hub \u2b50](https://github.com/ultralytics/yolov5/issues/36) page you've explained that one can adjust the number of classes in the pretrained model by using the following command. 
`model = torch.hub.load('ultralytics/yolov5', 'yolov5s', classes=10)`\r\n\r\n<img width=\"999\" alt=\"Bildschirmfoto 2022-08-22 um 08 13 15\" src=\"https://user-images.githubusercontent.com/5917496/185851461-b177aa78-2b56-46a1-9c43-081d2a746938.png\">\r\n\r\nWhen I do so, I can see that a model.yaml file is overwritten, but I do not know where this file is stored. \r\n\r\nNow, what actually confuses me about the number of classes, is that when I try to use this pretrained model in detection, without any further training. I see an error, that the model was trained with nc=80 and my data is incompatible with nc=13:\r\n\r\n`AssertionError: ['yolov5s6.pt'] (80 classes) trained on different --data than what you passed (13 classes). Pass correct combination of --weights and --data that are trained together.`\r\n\r\nI know that I can not expect any proper predictions since the last layers are initialized with random weights, but I was expecting that the model is compatible with the 13 classes dataset.\r\n\r\nIs this behavior to be expected or am I doing something wrong here? \r\nDo I need to find and use the model.yaml file and is the only thing changed in there 'nc=13'?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "06831aa9e905e0fa703958f6b3f3db443cf477f3", "files": [{"path": "train.py", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["train.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "ee8988b8a2ed07af1b7c8807d39aad35369f0e28", "iss_html_url": "https://github.com/ultralytics/yolov5/issues/8", "iss_label": "Stale", "title": "training actually can not work", "body": "After trained on several epochs, I found the mAP is still very low. 
Does the training really works?\r\n\r\n```\r\n Epoch gpu_mem GIoU obj cls total targets img_size\r\n 14/299 6.4G 0.02273 0.002925 0.0003764 0.02603 11 640: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 6960/6960 [54:20<00:00, 2.13it/s]\r\n Class Images Targets P R mAP@.5 mAP@.5:.95: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 6960/6960 [13:37<00:00, 8.51it/s]\r\n all 5.57e+04 1.74e+05 0.000332 0.00039 2.4e-06 8.59e-07\r\n\r\n Epoch gpu_mem GIoU obj cls total targets img_size\r\n 15/299 6.4G 0.02232 0.002874 0.000371 0.02556 7 640: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 6960/6960 [54:36<00:00, 2.12it/s]\r\n Class Images Targets P R mAP@.5 mAP@.5:.95: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 6960/6960 [14:23<00:00, 8.06it/s]\r\n all 5.57e+04 1.74e+05 0.000342 0.000401 2.44e-06 8.66e-07\r\n\r\n```", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": 
"ee8988b8a2ed07af1b7c8807d39aad35369f0e28", "files": [{"path": "models/yolov5s.yaml", "Loc": {"(None, None, 2)": {"mod": [2]}}, "status": "modified"}, {"path": "README.md", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "3", "loc_way": "comment", "loc_scope": "", "info_type": "Code"}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": ["models/yolov5s.yaml"], "asset": []}}, {"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "901243c7806be07b31073440cf721e73532a0734", "iss_html_url": "https://github.com/ultralytics/yolov5/issues/894", "iss_label": "question", "title": "training stuck when loading dataset", "body": "## \u2754Question\r\nI follow the instructions to run coco128, \r\n```\r\npython train.py --img 640 --batch 16 --epochs 5 --data ./data/coco128.yaml --cfg ./models/yolov5s.yaml --weights '',\r\n```\r\nthe ouput is \r\n```\r\nImage sizes 640 train, 640 test\r\nUsing 8 dataloader workers\r\nStarting training for 5 epochs...\r\n\r\n Epoch gpu_mem GIoU obj cls total targets img_size\r\n 0%| | 0/8 [00:00<?, ?it/s\r\n```\r\nthen it is stuck, I found that it is stucking at loading the dataset, \r\nin https://github.com/ultralytics/yolov5/blob/master/train.py#L244, \r\n```\r\nfor i, (imgs, targets, paths, _) in pbar:\r\n```\r\nit just stops here, could you help me ?\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "901243c7806be07b31073440cf721e73532a0734", "files": [{"path": "train.py", "Loc": {"(None, None, None)": {"mod": [388]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["train.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "63060910a68bfde238872d629ab88e2e7bc736e8", "iss_html_url": "https://github.com/ultralytics/yolov5/issues/3735", "iss_label": "question\nStale", "title": "Results interpretation", "body": "Hello,\r\n\r\nAnother question to do with results interpretation. I am not very sure how to interpret the results.txt file that gets generated after training is over. 
Also, is there any way to extract the number of false positives, true positives, false negatives, as well as to see the total mean average accuracy and loss (like with yolov4)?\r\n\r\nFurther, after training is done, can the best weights obtained from training be used to test on unseen data (more specifically, multiple images)?\r\n\r\nThanks in advance again!", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "63060910a68bfde238872d629ab88e2e7bc736e8", "files": [{"path": "README.md", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []}}, {"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "dc54ed5763720ced4f6784552c47534af5413d45", "iss_html_url": "https://github.com/ultralytics/yolov5/issues/6062", "iss_label": "question\nStale", "title": "How to add some private information into .pt file?", "body": "### Search before asking\n\n- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.\n\n\n### Question\n\nyolov5 is a great algorithm, but I'm having some problems. Specifically, I want to add some private information to the .pt file, can this be done?\n\n### Additional\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "dc54ed5763720ced4f6784552c47534af5413d45", "files": [{"path": "train.py", "Loc": {"(None, 'train', 58)": {"mod": [377]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["train.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "79af1144c270ac7169553d450b9170f9c60f92e4", "iss_html_url": "https://github.com/ultralytics/yolov5/issues/4517", "iss_label": "question\nStale", "title": "what is moasic and what is its default and how to delete it", "body": "what is the meaning of moasic\r\n\r\nwhere I can find its default parameter\r\n\r\nhow to stop moasic and stop augmentation in general\r\n\r\nI use only this line is it augment data by default or not? 
how to stop augmentation if exist \r\n```\r\n!python train.py --img 640 --batch 16 --epochs 400 --data /mydrive/data.yaml \\\r\n --weights /mydrive/yolov5s.pt --cache --project /mydrive/train/\r\n```", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "79af1144c270ac7169553d450b9170f9c60f92e4", "files": [{"path": "data/hyps/hyp.scratch.yaml", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Configuration file"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["data/hyps/hyp.scratch.yaml"], "asset": []}}, {"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "0d8a1842373e55f8f639adede0c3d378f1ffbea5", "iss_html_url": "https://github.com/ultralytics/yolov5/issues/4717", "iss_label": "bug", "title": "[onnx export.py error] Unsupported ONNX opset version", "body": "`ONNX: starting export with onnx 1.10.1...`\r\n`ONNX: export failure: Unsupported ONNX opset version: 13`\r\n\r\nI'm using\r\nyolov5-5.0, pytorch1.7.0+cu101 and python3.7.9.\r\n\r\nHow to solve it?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "0d8a1842373e55f8f639adede0c3d378f1ffbea5", "files": [{"path": "export.py", "Loc": {"(None, 'parse_opt', 166)": {"mod": [179]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["export.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "886f1c03d839575afecb059accf74296fad395b6", "iss_html_url": "https://github.com/ultralytics/yolov5/issues/2432", "iss_label": "question", "title": "Experiments on GhostNet", "body": "## \u2754Question\r\nI am just wondering about the performance when using GhostNet in experimental.py. Could you please share this experiment?\r\n\r\n## Additional context\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "886f1c03d839575afecb059accf74296fad395b6", "files": [{"path": "Models/yolov5l.yaml", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Configuration"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["Models/yolov5l.yaml"], "asset": []}}, {"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "2026d4c5eb4e3e48b5295106db85c844000d95d1", "iss_html_url": "https://github.com/ultralytics/yolov5/issues/1498", "iss_label": "question\nStale", "title": "calculate fps on local system", "body": "## \u2754Question\r\nI have been using the code to do detection from webcam.
How can I know what is the speed of detection (fps) in my local system?\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "2026d4c5eb4e3e48b5295106db85c844000d95d1", "files": [{"path": "README.md", "Loc": {"(None, None, 61)": {"mod": [61]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code\nDoc"}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []}}, {"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "14797370646d25e226f0093a5982d5cd54ba729a", "iss_html_url": "https://github.com/ultralytics/yolov5/issues/2797", "iss_label": "question", "title": "large scale dataset use --cache-images flag", "body": "## \u2754Question\r\nhello ~ , i have dataset with a million images about 450GB and i want to use --cache-images accelerate training\uff08i have 128GB RAM\uff09\uff0ccan i split the whole dataset into many sub dataset and training them one by one\uff08like resume training\uff09 \uff1f\r\n\r\n## Additional context\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "14797370646d25e226f0093a5982d5cd54ba729a", "files": [{"path": "train.py", "Loc": {"(None, None, None)": {"mod": [466]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["train.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "f5335f22bbd6037124d60edb3c2d1934d7673e23", "iss_html_url": "https://github.com/ultralytics/yolov5/issues/8907", "iss_label": "question\nStale", "title": "I am making UI by QT for Yolov5 training. Where is making the result image (results.png) after training? ", "body": "### Search before asking\n\n- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.\n\n\n### Question\n\nI am making UI by QT for Yolov5 training. Where is making the result image (results.png) after training? \r\n\r\nI would like to draw the graph for (train/box_loss), (metrics/precision), and (metrics/recall) per each an epoch every time an epoch of the train is finished.\r\n\r\n Where is making the result image (results.png) after training? 
\r\n\r\nThank you for your help.\n\n### Additional\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "f5335f22bbd6037124d60edb3c2d1934d7673e23", "files": [{"path": "utils/plots.py", "Loc": {"(None, 'plot_results', 418)": {"mod": []}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["utils/plots.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "0ab303b04499b6b912d8212a4fa10fe3fcb78efa", "iss_html_url": "https://github.com/ultralytics/yolov5/issues/8708", "iss_label": "question\nStale", "title": "Significance of --half?", "body": "### Search before asking\n\n- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.\n\n\n### Question\n\nCan you please let me know the significance of --half during training process....\n\n### Additional\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "0ab303b04499b6b912d8212a4fa10fe3fcb78efa", "files": [{"path": "val.py", "Loc": {"(None, 'parse_opt', 330)": {"mod": [351]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["val.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "b74929c910f9cd99d2ece587e57bce1ae000d3ba", "iss_html_url": "https://github.com/ultralytics/yolov5/issues/4252", "iss_label": "question", "title": "Training speed and memory", "body": "I noticed your instructions about training,\r\nRun commands below to reproduce results on COCO dataset (dataset auto-downloads on first use). Training times for YOLOv5s/m/l/x are 2/4/6/8 days on a single V100 (multi-GPU times faster). Use the largest --batch-size your GPU allows (batch sizes shown for 16 GB devices).\r\nI want to train from scratch on the coco dataset.(A100 x1).The code was just downloaded.\r\n\r\nThe following is the situation during my training.The specific parameters can be seen in the screenshot.\r\npython train.py --cfg models/yolov5s.yaml --data data/coco.yaml --device 0 --batch-size 64 -> 16min/epoch\r\npython train.py --cfg models/yolov5s.yaml --data data/coco.yaml --device 0 --batch-size 128 ->16min/epoch\r\npython train.py --cfg models/yolov5s.yaml --data data/coco.yaml --device 0 --batch-size 192 ->20min/epoch\r\npython train.py --cfg models/yolov5s.yaml --data data/coco.yaml --device 0 --batch-size 192 --workers 16->16min/epoch\r\n![sendpix7](https://user-images.githubusercontent.com/39581901/127759471-a110c68f-d1d4-4580-afd2-ae8c8a17ef4a.jpg)\r\nMy question\r\n1. Why I increased the batch size but the time required for training did not decrease\r\n2. The relationship between workers and batch size, because I noticed that you seem to set it to a maximum of 8 in the code (why it is 8),\r\n3. When epoch=0 and 1, the GPU memory has changed, about x1.5? 
What may be the reason for this,", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "b74929c910f9cd99d2ece587e57bce1ae000d3ba", "files": [{"path": "train.py", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["train.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "404749a33cc29d119f54b2ce35bf3b33a847a487", "iss_html_url": "https://github.com/ultralytics/yolov5/issues/2186", "iss_label": "question", "title": "Can we return objectness score and class score?", "body": "## \u2754Question\r\nI am wondering if it is possible to return confidence scores for objectness and classification separately for each predicted box during inference? I might be conceptually off base here, but I am interested in understanding if the model is unsure if the box itself is correct or if the class it is assigning to the box is correct. My understanding is the `conf` that is returned now is a combo of the two? ", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "404749a33cc29d119f54b2ce35bf3b33a847a487", "files": [{"path": "detect.py", "Loc": {"(None, 'detect', 18)": {"mod": [103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113]}}, "status": "modified"}, {"path": "utils/general.py", "Loc": {"(None, 'non_max_suppression', 340)": {"mod": []}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["utils/general.py", "detect.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "dabad5793a638cba1e5a2bbb878c9b87fe1a14a0", "iss_html_url": "https://github.com/ultralytics/yolov5/issues/3942", "iss_label": "enhancement\nStale", "title": "For online cutting training and detection can be improve", "body": "## \ud83d\ude80 Feature\r\n\r\nFor big image training, usually people thinking about to cut the images, but yolov5 can only resize the image to small size. Such as VisDrone dataset, the smallest image can have 960*540 size, if resize to 640*640, size would be 640*360, but the target in dataset mostly are small object, resize the image make the target become more smaller, but if use bigger resolution, the cuda memory would exceed.\r\n\r\nSo I thought online cutting training and detection would be a good feature for yolov5 to improve, although cutting image would also increase the train time, but it would be a great idea for people who don't have large computing power GPU, also I think cutting image would be effective for small object detection. 
Although it's not a new idea in detection, it would be a useful way for people to their own detector.\r\n\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "dabad5793a638cba1e5a2bbb878c9b87fe1a14a0", "files": [{"path": "utils/augmentations.py", "Loc": {"('Albumentations', '__init__', 16)": {"mod": [22]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["utils/augmentations.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "c8c5ef36c9a19c7843993ee8d51aebb685467eca", "iss_html_url": "https://github.com/ultralytics/yolov5/issues/1238", "iss_label": "question", "title": "img-weights", "body": "## \u2754Question\r\nparser.add_argument('--img-weights', action='store_true', help='use weighted image selection for training')\r\nin order to make --iimg-weights work, what else I need to do? \r\ndataset = LoadImagesAndLabels(path, imgsz, batch_size,\r\n augment=augment, # augment images\r\n hyp=hyp, # augmentation hyperparameters\r\n rect=rect, # rectangular training\r\n cache_images=cache,\r\n single_cls=opt.single_cls,\r\n stride=int(stride),\r\n pad=pad),\r\n should I add an extra param image_weights=True??\r\n\r\n \r\n## Additional context\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "c8c5ef36c9a19c7843993ee8d51aebb685467eca", "files": [{"path": "train.py", "Loc": {"(None, None, None)": {"mod": [397]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["train.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "9cd89b75cca8bb165a3b19c9b8356f7b3bb22b31", "iss_html_url": "https://github.com/ultralytics/yolov5/issues/7072", "iss_label": "question", "title": "why can't I reproduce the mAP provided by README.md\uff08v6.1\uff09\uff1f", "body": "### Search before asking\n\n- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.\n\n\n### Question\n\nI used the method recommended by README.md(v6.1) to reproduce the mAP, but I failed. \r\n'python train.py --data coco.yaml --cfg yolov5s.yaml --weights ' ' --hyp hyp.scratch-low.yaml --img 640 --batch-size 64 --epochs 300' .\r\nAll is default value,then I got the best mAP\uff08yolov5s\uff09 is 37.057%(the best mAP verified at the end of each epoch, 5000 images), it still has a gap of 0.4% mAP(37.4%). 
\r\nSimilarly, I reproduced the mAP\uff08yolov5n\uff09\uff0c27.586%----28.0%\uff0cNever get published results.\r\nMy GPU is GTX NVIDIA RTX A4000\uff0816116MiB\uff09, and I think it may be enough.\r\n\r\nIs this a normal error caused by equipment\uff08GPU) differences, or are there other reasons\uff1f\n\n### Additional\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "9cd89b75cca8bb165a3b19c9b8356f7b3bb22b31", "files": [{"path": "data/scripts/get_coco.sh", "Loc": {"(None, None, 13)": {"mod": [13]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code\nDoc"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["data/scripts/get_coco.sh"]}}, {"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "079b36d72ba2ef298f7ae4dc283d8c7975eb02f6", "iss_html_url": "https://github.com/ultralytics/yolov5/issues/6540", "iss_label": "question", "title": "Is YOLOv5 able to detect a specific number of classes according to the project's need, like just 2 or 3 classes?", "body": "### Search before asking\n\n- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.\n\n\n### Question\n\nHi, I'm using YOLOv5 in my project and I have a question. If I use \"--classes \" it could detect one type of class, but is there anyway that I can detect more than one type, like 2 or 3 different types? I've already tried \"-- classes 0 1\" or \"-- classes [0] [1]\" but without success. Thanks for the help!\n\n### Additional\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "079b36d72ba2ef298f7ae4dc283d8c7975eb02f6", "files": [{"path": "detect.py", "Loc": {"(None, 'parse_opt', 216)": {"mod": [231]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["detect.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "e96c74b5a1c4a27934c5d8ad52cde778af248ed8", "iss_html_url": "https://github.com/ultralytics/yolov5/issues/4357", "iss_label": "question\nStale", "title": "Average Precision for each class", "body": "## Is there any way to see the average precision for each class?\r\n\r\nI have run my model for 1,000 epochs and I have a bunch of metrics (which are AMAZING by the way, thanks so making it so easy to see them!) and I have mAP, but I was wondering if there was a way to see the AP for each class? Like a table or something. \r\n\r\nIn addition, is it possible to see the precision-recall graphs for each class? I can see something in the images tab on wandb, but as I have 80 classes, it looks very messy. 
", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "e96c74b5a1c4a27934c5d8ad52cde778af248ed8", "files": [{"path": "val.py", "Loc": {"(None, 'parse_opt', 293)": {"mod": [305]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["val.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "96e36a7c913e2433446ff410a4cf60041010a524", "iss_html_url": "https://github.com/ultralytics/yolov5/issues/4152", "iss_label": "question", "title": "Format of data for testing trained model", "body": "In what format do I need to feed the validation dataset to the val.py file? Should images and markup be in the same folder or in different ones? In what format should the coordinates of the bounding boxes be in - yolo or something else?\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "96e36a7c913e2433446ff410a4cf60041010a524", "files": [{"path": "README.md", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []}}, {"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "eaf5ec4467795344e7d9601515b017fd8c46e44b", "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/439", "iss_label": "", "title": " decoding error in preprocessing synthesizer", "body": "I get the following error while running `synthesizer_preprocess_audio.py`.\r\n\r\n```\r\nArguments:\r\n datasets_root: /home/amin/voice_cloning/libri_100\r\n out_dir: /home/amin/voice_cloning/libri_100/SV2TTS/synthesizer\r\n n_processes: None\r\n skip_existing: True\r\n hparams: \r\n\r\nUsing data from:\r\n /home/amin/voice_cloning/libri_100/LibriSpeech/train-clean-100\r\nLibriSpeech: 0%| | 0/502 [00:00<?, ?speakers/s]\r\nmultiprocessing.pool.RemoteTraceback: \r\n\"\"\"\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.6/multiprocessing/pool.py\", line 119, in worker\r\n result = (True, func(*args, **kwds))\r\n File \"/home/amin/voice_cloning/Real-Time-Voice-Cloning-master/synthesizer/preprocess.py\", line 62, in preprocess_speaker\r\n alignments = [line.rstrip().split(\" \") for line in alignments_file]\r\n File \"/home/amin/voice_cloning/Real-Time-Voice-Cloning-master/synthesizer/preprocess.py\", line 62, in <listcomp>\r\n alignments = [line.rstrip().split(\" \") for line in alignments_file]\r\n File \"/usr/lib/python3.6/codecs.py\", line 321, in decode\r\n (result, consumed) = self._buffer_decode(data, self.errors, final)\r\nUnicodeDecodeError: 'utf-8' codec can't decode byte 0xa2 in position 37: invalid start byte\r\n\"\"\"\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"synthesizer_preprocess_audio.py\", line 52, in <module>\r\n preprocess_librispeech(**vars(args)) \r\n File \"/home/amin/voice_cloning/Real-Time-Voice-Cloning-master/synthesizer/preprocess.py\", line 36, in preprocess_librispeech\r\n for speaker_metadata in tqdm(job, \"LibriSpeech\", len(speaker_dirs), unit=\"speakers\"):\r\n File \"/home/amin/.local/lib/python3.6/site-packages/tqdm/std.py\", line 1130, in 
__iter__\r\n for obj in iterable:\r\n File \"/usr/lib/python3.6/multiprocessing/pool.py\", line 735, in next\r\n raise value\r\nUnicodeDecodeError: 'utf-8' codec can't decode byte 0xa2 in position 37: invalid start byte\r\n```\r\n\r\nCan anyone help? It would save a lot of time for me.\r\nThanks.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "eaf5ec4467795344e7d9601515b017fd8c46e44b", "files": [{"path": "synthesizer/preprocess.py", "Loc": {"(None, 'preprocess_speaker', 54)": {"mod": [60]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["synthesizer/preprocess.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "5425557efe30863267f805851f918124191e0be0", "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/629", "iss_label": "", "title": "Error in macOS when trying to launch the toolbox", "body": "Traceback (most recent call last):\r\n File \"/Users/luke/Documents/Real-Time-Voice-Cloning-master/demo_toolbox.py\", line 2, in <module>\r\n from toolbox import Toolbox\r\n File \"/Users/luke/Documents/Real-Time-Voice-Cloning-master/toolbox/__init__.py\", line 1, in <module>\r\n from toolbox.ui import UI\r\n File \"/Users/luke/Documents/Real-Time-Voice-Cloning-master/toolbox/ui.py\", line 6, in <module>\r\n from encoder.inference import plot_embedding_as_heatmap\r\nModuleNotFoundError: No module named 'encoder.inference'", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "5425557efe30863267f805851f918124191e0be0", "files": [{"path": "encoder/inference.py", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["encoder/inference.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "98d0ca4d4d140a4bb6bc7d54c84b1915a79041d5", "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1156", "iss_label": "", "title": "missing SV2TTS/", "body": "Hey, I'm trying to finetune the pretrained model but it looks like I am missing the SV2TTS/ directory which contains train.txt, etc.\r\nI have a saved_models/ directory which has three *.pt files for the three components of this TTS architecture.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "98d0ca4d4d140a4bb6bc7d54c84b1915a79041d5", "files": [{"path": "synthesizer_preprocess_audio.py", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "4", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["synthesizer_preprocess_audio.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "e32cf8f4ddb63d9a7603eeb31f1855b54926aee6", "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/549", "iss_label": "", "title": "Import Error", "body": "Hey, I am trying to run this code, and every time I run demo_toolbox.py I get the error \"failed to load qt binding\". I tried reinstalling matplotlib and also tried
installing PyQt5.\r\n\r\nNeed help!\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "e32cf8f4ddb63d9a7603eeb31f1855b54926aee6", "files": [{"path": "toolbox/ui.py", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["toolbox/ui.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "8e6499b10d5a074bdfe8ee6db8eec60e1060ccc1", "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/117", "iss_label": "", "title": "ModuleNotFoundError: No module named 'tensorflow.contrib.seq2seq'", "body": "When running demo_cli.py\r\n\r\nPython = 3.7.4\r\nTensorFlow = 2.0 RC\r\nCUDA = 10.1\r\ncuDNN = Installed for right CUDA version\r\nWindows = 10", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "8e6499b10d5a074bdfe8ee6db8eec60e1060ccc1", "files": [{"path": "requirements.txt", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}}, {"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "c5c2261c97afe6ec5db1ef389eba1257f6cce8a2", "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/275", "iss_label": "", "title": "Speaker verification implementation", "body": "I need just the speaker verification part, which is the implementation of the [GENERALIZED END-TO-END LOSS FOR SPEAKER VERIFICATION](https://arxiv.org/pdf/1710.10467.pdf) paper; how can I proceed to get it, please?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "c5c2261c97afe6ec5db1ef389eba1257f6cce8a2", "files": [{"path": "encoder/", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5\nasks where a feature is implemented", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["encoder/"]}}, {"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "7432046efc23cabf176f9fdc8d2fd67020059478", "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/855", "iss_label": "", "title": "Output audio spectrum - low frequencies", "body": "Hi, I'm trying to train a new model in Polish, but after 476k steps the output sound is very \"robotic\". I was trying to find out why that happened and noticed (based on my output and @blue-fish samples: https://blue-fish.github.io/experiments/RTVC-FT-1.html) that the spectrum of this model doesn't include high frequencies compared to Google's. Both are in logarithmic scale.
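A plausible reading of the missing high frequencies (hedged; the hparams dump quoted later in this collection shows sample_rate: 16000 and fmax: 7600 for synthesizer/hparams.py): at a 16 kHz sample rate nothing above the 8 kHz Nyquist frequency can exist in the output, so wider-band samples like Google's will always look richer at the top of a spectrogram. A tiny sketch of the bound:

```python
# Sketch: the mel range is capped by Nyquist, so a 16 kHz synthesizer
# cannot reproduce the high band visible in wider-band (e.g. 24 kHz) audio.
sample_rate = 16000  # from synthesizer/hparams.py
fmax = 7600          # top mel band edge, also from hparams
nyquist = sample_rate / 2
print(f"Nyquist = {nyquist:.0f} Hz; fmax {fmax} Hz is already near the ceiling.")
```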
\r\n\r\nOur output: \r\n<img width=\"610\" alt=\"Zrzut ekranu 2021-10-2 o 20 29 59\" src=\"https://user-images.githubusercontent.com/6368894/135728051-397ec675-d2ac-4e5a-af89-a8e0fcef8ff7.png\">\r\n \r\nGoogle: (take a note its logarithmic scale)\r\n<img width=\"610\" alt=\"Zrzut ekranu 2021-10-2 o 20 30 30\" src=\"https://user-images.githubusercontent.com/6368894/135728056-5a7b83dd-f228-4a4f-9dae-44ce86d1e2b1.png\">\r\n\r\nDo you have any idea how to improve this?\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "7432046efc23cabf176f9fdc8d2fd67020059478", "files": [{"path": "synthesizer/hparams.py", "Loc": {"(None, None, None)": {"mod": [77]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["synthesizer/hparams.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "98d0ca4d4d140a4bb6bc7d54c84b1915a79041d5", "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1122", "iss_label": "", "title": "Requirements.txt failed to install with obscure issue with installing audioread", "body": "I ran into a few issues along the way that I was able to solve, namely errors like this:\r\n\r\n WARNING: Failed to write executable - trying to use .deleteme logic\r\n ERROR: Could not install packages due to an OSError: [WinError 2] The system cannot find the file specified: \r\n 'C:\\\\Python310\\\\Scripts\\\\f2py.exe' -> 'C:\\\\Python310\\\\Scripts\\\\f2py.exe.deleteme'\r\n\r\nI fixed these by adding `--user` to the pip command.\r\n\r\nI also had to change requirements.txt to a newer version of numpy (1.22.1) to prevent it from failing to install due to older versions not being compatible with the version of Python I already have installed (3.10.6)\r\n\r\nBut now I'm stuck on this one:\r\n\r\n Requirement already satisfied: jsonpointer>=1.9 in c:\\users\\michael\\appdata\\roaming\\python\\python310\\site-packages (from jsonpatch->visdom==0.1.8.9->-r R:\\requirements.txt (line 15)) (2.3)\r\n Using legacy 'setup.py install' for umap-learn, since package 'wheel' is not installed.\r\n Using legacy 'setup.py install' for visdom, since package 'wheel' is not installed.\r\n Using legacy 'setup.py install' for audioread, since package 'wheel' is not installed.\r\n Using legacy 'setup.py install' for pynndescent, since package 'wheel' is not installed.\r\n Installing collected packages: audioread, visdom, SoundFile, sounddevice, scikit-learn, resampy, pooch, matplotlib, pynndescent, librosa, umap-learn\r\n Running setup.py install for audioread ... error\r\n error: subprocess-exited-with-error\r\n \r\n \u00d7 Running setup.py install for audioread did not run successfully.\r\n \u2502 exit code: 1\r\n \u2570\u2500> [40 lines of output]\r\n C:\\Users\\michael\\AppData\\Local\\Temp\\pip-install-nat_itg2\\audioread_fa5fbfcd88364fc89c7b2a9e454b5549\\setup.py:17: DeprecationWarning: the imp module is deprecated in favour of importlib and slated for removal in Python 3.12; see the module's documentation for alternative uses\r\n import imp\r\n running install\r\n C:\\Users\\michael\\AppData\\Roaming\\Python\\Python310\\site-packages\\setuptools\\command\\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. 
Use build and pip and other standards-based tools.\r\n warnings.warn(\r\n Traceback (most recent call last):\r\n File \"C:\\Users\\michael\\AppData\\Roaming\\Python\\Python310\\site-packages\\setuptools\\_distutils\\util.py\", line 258, in subst_vars\r\n return _subst_compat(s).format_map(lookup)\r\n KeyError: 'py_version_nodot_plat'\r\n \r\n During handling of the above exception, another exception occurred:\r\n \r\n Traceback (most recent call last):\r\n File \"<string>\", line 2, in <module>\r\n File \"<pip-setuptools-caller>\", line 34, in <module>\r\n File \"C:\\Users\\michael\\AppData\\Local\\Temp\\pip-install-nat_itg2\\audioread_fa5fbfcd88364fc89c7b2a9e454b5549\\setup.py\", line 27, in <module>\r\n setup(name='audioread',\r\n File \"C:\\Users\\michael\\AppData\\Roaming\\Python\\Python310\\site-packages\\setuptools\\__init__.py\", line 153, in setup\r\n return distutils.core.setup(**attrs)\r\n File \"C:\\Users\\michael\\AppData\\Roaming\\Python\\Python310\\site-packages\\setuptools\\_distutils\\core.py\", line 148, in setup\r\n return run_commands(dist)\r\n File \"C:\\Users\\michael\\AppData\\Roaming\\Python\\Python310\\site-packages\\setuptools\\_distutils\\core.py\", line 163, in run_commands\r\n dist.run_commands()\r\n File \"C:\\Users\\michael\\AppData\\Roaming\\Python\\Python310\\site-packages\\setuptools\\_distutils\\dist.py\", line 967, in run_commands\r\n self.run_command(cmd)\r\n File \"C:\\Users\\michael\\AppData\\Roaming\\Python\\Python310\\site-packages\\setuptools\\_distutils\\dist.py\", line 985, in run_command\r\n cmd_obj.ensure_finalized()\r\n File \"C:\\Users\\michael\\AppData\\Roaming\\Python\\Python310\\site-packages\\setuptools\\_distutils\\cmd.py\", line 107, in ensure_finalized\r\n self.finalize_options()\r\n File \"C:\\Users\\michael\\AppData\\Roaming\\Python\\Python310\\site-packages\\setuptools\\command\\install.py\", line 45, in finalize_options\r\n orig.install.finalize_options(self)\r\n File \"C:\\Users\\michael\\AppData\\Roaming\\Python\\Python310\\site-packages\\setuptools\\_distutils\\command\\install.py\", line 381, in finalize_options\r\n self.expand_dirs()\r\n File \"C:\\Users\\michael\\AppData\\Roaming\\Python\\Python310\\site-packages\\setuptools\\_distutils\\command\\install.py\", line 563, in expand_dirs\r\n self._expand_attrs(['install_purelib', 'install_platlib',\r\n File \"C:\\Users\\michael\\AppData\\Roaming\\Python\\Python310\\site-packages\\setuptools\\_distutils\\command\\install.py\", line 553, in _expand_attrs\r\n val = subst_vars(val, self.config_vars)\r\n File \"C:\\Users\\michael\\AppData\\Roaming\\Python\\Python310\\site-packages\\setuptools\\_distutils\\util.py\", line 260, in subst_vars\r\n raise ValueError(f\"invalid variable {var}\")\r\n ValueError: invalid variable 'py_version_nodot_plat'\r\n [end of output]\r\n \r\n note: This error originates from a subprocess, and is likely not a problem with pip.\r\n error: legacy-install-failure\r\n \r\n \u00d7 Encountered error while trying to install package.\r\n \u2570\u2500> audioread\r\n \r\n note: This is an issue with the package mentioned above, not pip.\r\n hint: See above for output from the failure.\r\n\r\nI'm not sure if the issue is due to \"setup.py install\" being deprecated; if that's the case I have no idea what the fix is because I think this is being required somewhere else - maybe another package needs a newer version? 
But I have no idea which one.\r\n\r\nI also thought maybe it could be that wheel wasn't installed, `since package 'wheel' is not installed.` but when I try to install it, it says it's already installed:\r\n\r\n C:\\> pip install wheel --user\r\n\r\n Requirement already satisfied: wheel in c:\\python310\\lib\\site-packages (0.37.1)\r\n\r\nThere's also the invalid variable error, but I have no idea what this is talking about.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "98d0ca4d4d140a4bb6bc7d54c84b1915a79041d5", "files": [{"path": "requirements.txt", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc\ndependency declaration"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}}, {"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "95adc699c1deb637f485e85a5107d40da0ad94fc", "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/717", "iss_label": "", "title": "I can't use Dataset/Speaker/Utterance", "body": "I can't use the upper section in the software. When loading, it shows:\r\nWarning: you did not pass a root directory for datasets as argument.\r\nHow can I fix this?\r\nThank you\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "95adc699c1deb637f485e85a5107d40da0ad94fc", "files": [{"path": "demo_toolbox.py", "Loc": {"(None, None, None)": {"mod": [15]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2\nwarning", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["demo_toolbox.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "039f7e5402e6d9da7fad5022dae038cdfb507b39", "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/13", "iss_label": "", "title": "problem with utils.argutils in python 3.6", "body": "Hi, under Win 10 64-bit with Python 3.6, it fails to import print_args because it can't find argutils.\r\nI think I have a relative import error but can't solve it.\r\n\r\nBtw, nice job on what I heard in the YouTube demo.\r\nIf I manually try to import the utils from the root dir, it seems to load another utils file. \r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "039f7e5402e6d9da7fad5022dae038cdfb507b39", "files": [{"path": "synthesizer/__init__.py", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["synthesizer/__init__.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "7432046efc23cabf176f9fdc8d2fd67020059478", "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/884", "iss_label": "", "title": "Using a different speaker encoder", "body": "Hello, I really appreciate the work on display here. I was just wondering if I could use a different speaker encoder.
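On the "did not pass a root directory" warning above (demo_toolbox.py is the file flagged in that entry): the toolbox only populates the Dataset/Speaker/Utterance browser when it is told where the datasets live. A hedged sketch of the pattern, with a hypothetical path and without claiming the repo's exact argument list:

```python
# Sketch (hypothetical path): the toolbox needs a datasets root to populate
# the Dataset/Speaker/Utterance browser; the CLI shape is roughly
#   python demo_toolbox.py -d /path/to/datasets
from pathlib import Path

datasets_root = Path("/path/to/datasets")  # should contain e.g. LibriSpeech/
if not datasets_root.exists():
    print("Warning: you did not pass a root directory for datasets as argument.")
```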
If someone used a different encoder, could you explain the difficulties of replacing the encoder and how the results were different from the speaker encoder already in use?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "7432046efc23cabf176f9fdc8d2fd67020059478", "files": [{"path": "toolbox/__init__.py", "Loc": {"('Toolbox', 'add_real_utterance', 182)": {"mod": [191]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["toolbox/__init__.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "a32962bb7b4827660646ac6dabf62309aea08a91", "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/488", "iss_label": "", "title": "preprocessing VoxCele2 is not working", "body": "While running encoder_preprocess on voxceleb2 dataset, I'm getting the following warning and nothing else happens. Not sure why?\r\n\r\n\r\n```\r\nraw: Preprocessing data for 5994 speakers.\r\nraw: 0%| | 0/5994 [00:00<?, ?speakers/s]\r\n/home/amin/.local/lib/python3.6/site-packages/librosa/core/audio.py:161: UserWarning: PySoundFile failed. Trying audioread instead.\r\n warnings.warn('PySoundFile failed. Trying audioread instead.')\r\n/home/amin/.local/lib/python3.6/site-packages/librosa/core/audio.py:161: UserWarning: PySoundFile failed. Trying audioread instead.\r\n warnings.warn('PySoundFile failed. Trying audioread instead.')\r\n```", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "a32962bb7b4827660646ac6dabf62309aea08a91", "files": [{"path": "encoder/preprocess.py", "Loc": {"(None, 'preprocess_voxceleb2', 164)": {"mod": []}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["encoder/preprocess.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "0713f860a3dd41afb56e83cff84dbdf589d5e11a", "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1065", "iss_label": "", "title": "vocoder_dataset.py ValueError", "body": "I am trying to use the Librispeech dataset to train the vocoder. \r\nAnd I got a ValueError while training. \r\n```numpy.random._bounded_integers._rand_int32 ValueError: low >= high```\r\n\r\nIt occurs in line 61 of vocoder_dataset.py, \r\n```mel_offsets = [np.random.randint(0, offset) for offset in max_offsets]```\r\nSo I assume there is something wrong with the value of offset? e.g. 
offset=0 so np.random.randint could not generate a number [0, 0)?\r\nDid anyone encountered this problem too?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "0713f860a3dd41afb56e83cff84dbdf589d5e11a", "files": [{"path": "synthesizer/hparams.py", "Loc": {"(None, None, None)": {"mod": [88]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["synthesizer/hparams.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "5425557efe30863267f805851f918124191e0be0", "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/651", "iss_label": "", "title": "Resource exhausted: OOM when allocating tensor with shape[36,512,1,702] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc", "body": "hello. \r\nPlease help me, I do not know how to solve my problem problem. \r\nI run and completed without errors \r\n`python synthesizer_preprocess_audio.py <datasets_root>`\r\n`python synthesizer_preprocess_embeds.py <datasets_root>/SV2TTS/synthesizer`\r\nbut after typing `python synthesizer_train.py my_run <datasets_root>/SV2TTS/synthesizer` \r\nshows me a long error\r\n\r\n\r\n```\r\nArguments:\r\n name: my_run\r\n synthesizer_root: C:\\Users\\matve\\Documents\\Tacotron\\datasets\\SV2TTS\\synthesizer\r\n models_dir: synthesizer/saved_models/\r\n mode: synthesis\r\n GTA: True\r\n restore: True\r\n summary_interval: 2500\r\n embedding_interval: 10000\r\n checkpoint_interval: 2000\r\n eval_interval: 100000\r\n tacotron_train_steps: 2000000\r\n tf_log_level: 1\r\n slack_url: None\r\n hparams: \r\n\r\nCheckpoint path: synthesizer/saved_models/logs-my_run\\taco_pretrained\\tacotron_model.ckpt\r\nLoading training data from: C:\\Users\\matve\\Documents\\Tacotron\\datasets\\SV2TTS\\synthesizer\\train.txt\r\nUsing model: Tacotron\r\nHyperparameters:\r\n allow_clipping_in_normalization: True\r\n attention_dim: 128\r\n attention_filters: 32\r\n attention_kernel: (31,)\r\n cbhg_conv_channels: 128\r\n cbhg_highway_units: 128\r\n cbhg_highwaynet_layers: 4\r\n cbhg_kernels: 8\r\n cbhg_pool_size: 2\r\n cbhg_projection: 256\r\n cbhg_projection_kernel_size: 3\r\n cbhg_rnn_units: 128\r\n cleaners: english_cleaners\r\n clip_for_wavenet: True\r\n clip_mels_length: True\r\n cross_entropy_pos_weight: 20\r\n cumulative_weights: True\r\n decoder_layers: 2\r\n decoder_lstm_units: 1024\r\n embedding_dim: 512\r\n enc_conv_channels: 512\r\n enc_conv_kernel_size: (5,)\r\n enc_conv_num_layers: 3\r\n encoder_lstm_units: 256\r\n fmax: 7600\r\n fmin: 55\r\n frame_shift_ms: None\r\n griffin_lim_iters: 60\r\n hop_size: 200\r\n mask_decoder: False\r\n mask_encoder: True\r\n max_abs_value: 4.0\r\n max_iters: 2000\r\n max_mel_frames: 900\r\n min_level_db: -100\r\n n_fft: 800\r\n natural_eval: False\r\n normalize_for_wavenet: True\r\n num_mels: 80\r\n outputs_per_step: 2\r\n postnet_channels: 512\r\n postnet_kernel_size: (5,)\r\n postnet_num_layers: 5\r\n power: 1.5\r\n predict_linear: False\r\n preemphasis: 0.97\r\n preemphasize: True\r\n prenet_layers: [256, 256]\r\n ref_level_db: 20\r\n rescale: True\r\n rescaling_max: 0.9\r\n sample_rate: 16000\r\n signal_normalization: True\r\n silence_min_duration_split: 0.4\r\n silence_threshold: 2\r\n smoothing: False\r\n speaker_embedding_size: 256\r\n 
split_on_cpu: True\r\n stop_at_any: True\r\n symmetric_mels: True\r\n tacotron_adam_beta1: 0.9\r\n tacotron_adam_beta2: 0.999\r\n tacotron_adam_epsilon: 1e-06\r\n tacotron_batch_size: 36\r\n tacotron_clip_gradients: True\r\n tacotron_data_random_state: 1234\r\n tacotron_decay_learning_rate: True\r\n tacotron_decay_rate: 0.5\r\n tacotron_decay_steps: 50000\r\n tacotron_dropout_rate: 0.5\r\n tacotron_final_learning_rate: 1e-05\r\n tacotron_gpu_start_idx: 0\r\n tacotron_initial_learning_rate: 0.001\r\n tacotron_num_gpus: 1\r\n tacotron_random_seed: 5339\r\n tacotron_reg_weight: 1e-07\r\n tacotron_scale_regularization: False\r\n tacotron_start_decay: 50000\r\n tacotron_swap_with_cpu: False\r\n tacotron_synthesis_batch_size: 128\r\n tacotron_teacher_forcing_decay_alpha: 0.0\r\n tacotron_teacher_forcing_decay_steps: 280000\r\n tacotron_teacher_forcing_final_ratio: 0.0\r\n tacotron_teacher_forcing_init_ratio: 1.0\r\n tacotron_teacher_forcing_mode: constant\r\n tacotron_teacher_forcing_ratio: 1.0\r\n tacotron_teacher_forcing_start_decay: 10000\r\n tacotron_test_batches: None\r\n tacotron_test_size: 0.05\r\n tacotron_zoneout_rate: 0.1\r\n train_with_GTA: False\r\n trim_fft_size: 512\r\n trim_hop_size: 128\r\n trim_top_db: 23\r\n use_lws: False\r\n utterance_min_duration: 1.6\r\n win_size: 800\r\nLoaded metadata for 290550 examples (366.70 hours)\r\ninitialisation done /gpu:0\r\nInitialized Tacotron model. Dimensions (? = dynamic shape): \r\n Train mode: True\r\n Eval mode: False\r\n GTA mode: False\r\n Synthesis mode: False\r\n Input: (?, ?)\r\n device: 0\r\n embedding: (?, ?, 512)\r\n enc conv out: (?, ?, 512)\r\n encoder out (cond): (?, ?, 768)\r\n decoder out: (?, ?, 80)\r\n residual out: (?, ?, 512)\r\n projected residual out: (?, ?, 80)\r\n mel out: (?, ?, 80)\r\n <stop_token> out: (?, ?)\r\n Tacotron Parameters 28.439 Million.\r\ninitialisation done /gpu:0\r\nInitialized Tacotron model. Dimensions (? 
= dynamic shape): \r\n Train mode: False\r\n Eval mode: True\r\n GTA mode: False\r\n Synthesis mode: False\r\n Input: (?, ?)\r\n device: 0\r\n embedding: (?, ?, 512)\r\n enc conv out: (?, ?, 512)\r\n encoder out (cond): (?, ?, 768)\r\n decoder out: (?, ?, 80)\r\n residual out: (?, ?, 512)\r\n projected residual out: (?, ?, 80)\r\n mel out: (?, ?, 80)\r\n <stop_token> out: (?, ?)\r\n Tacotron Parameters 28.439 Million.\r\nTacotron training set to a maximum of 2000000 steps\r\nLoading checkpoint synthesizer/saved_models/logs-my_run\\taco_pretrained\\tacotron_model.ckpt-0\r\n\r\nGenerated 64 train batches of size 36 in 3.626 sec\r\nStep 1 [5.798 sec/step, loss=14.85899, avg_loss=14.85899]\r\nStep 1 [5.798 sec/step, loss=14.85899, avg_loss=14.85899]\r\n\r\nSaving Model Character Embeddings visualization..\r\nTacotron Character embeddings have been updated on tensorboard!\r\nStep 2 [3.362 sec/step, loss=11.10468, avg_loss=12.98183]\r\nStep 2 [3.362 sec/step, loss=11.10468, avg_loss=12.98183]\r\n\r\nGenerated 403 test batches of size 36 in 15.574 sec\r\nExiting due to exception: 2 root error(s) found.\r\n (0) Resource exhausted: OOM when allocating tensor with shape[36,512,1,702] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc\r\n\t [[node Tacotron_model/inference/postnet_convolutions/conv_layer_1_postnet_convolutions/conv1d/conv1d (defined at e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\framework\\ops.py:1748) ]]\r\nHint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.\r\n\r\n\t [[Tacotron_model/clip_by_global_norm/mul_30/_479]]\r\nHint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.\r\n\r\n (1) Resource exhausted: OOM when allocating tensor with shape[36,512,1,702] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc\r\n\t [[node Tacotron_model/inference/postnet_convolutions/conv_layer_1_postnet_convolutions/conv1d/conv1d (defined at e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\framework\\ops.py:1748) ]]\r\nHint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.\r\n\r\n0 successful operations.\r\n0 derived errors ignored.\r\n\r\nOriginal stack trace for 'Tacotron_model/inference/postnet_convolutions/conv_layer_1_postnet_convolutions/conv1d/conv1d':\r\n File \"synthesizer_train.py\", line 55, in <module>\r\n tacotron_train(args, log_dir, hparams)\r\n File \"C:\\Users\\matve\\Documents\\Tacotron\\Real-Time-Voice-Cloning\\synthesizer\\train.py\", line 392, in tacotron_train\r\n return train(log_dir, args, hparams)\r\n File \"C:\\Users\\matve\\Documents\\Tacotron\\Real-Time-Voice-Cloning\\synthesizer\\train.py\", line 148, in train\r\n model, stats = model_train_mode(args, feeder, hparams, global_step)\r\n File \"C:\\Users\\matve\\Documents\\Tacotron\\Real-Time-Voice-Cloning\\synthesizer\\train.py\", line 91, in model_train_mode\r\n is_training=True, split_infos=feeder.split_infos)\r\n File \"C:\\Users\\matve\\Documents\\Tacotron\\Real-Time-Voice-Cloning\\synthesizer\\models\\tacotron.py\", line 230, in initialize\r\n residual = postnet(decoder_output)\r\n File \"C:\\Users\\matve\\Documents\\Tacotron\\Real-Time-Voice-Cloning\\synthesizer\\models\\modules.py\", line 406, in 
__call__\r\n \"conv_layer_{}_\".format(i + 1) + self.scope)\r\n File \"C:\\Users\\matve\\Documents\\Tacotron\\Real-Time-Voice-Cloning\\synthesizer\\models\\modules.py\", line 420, in conv1d\r\n padding=\"same\")\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\util\\deprecation.py\", line 324, in new_func\r\n return func(*args, **kwargs)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\layers\\convolutional.py\", line 218, in conv1d\r\n return layer.apply(inputs)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\util\\deprecation.py\", line 324, in new_func\r\n return func(*args, **kwargs)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\keras\\engine\\base_layer.py\", line 1700, in apply\r\n return self.__call__(inputs, *args, **kwargs)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\layers\\base.py\", line 548, in __call__\r\n outputs = super(Layer, self).__call__(inputs, *args, **kwargs)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\keras\\engine\\base_layer.py\", line 854, in __call__\r\n outputs = call_fn(cast_inputs, *args, **kwargs)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\autograph\\impl\\api.py\", line 234, in wrapper\r\n return converted_call(f, options, args, kwargs)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\autograph\\impl\\api.py\", line 439, in converted_call\r\n return _call_unconverted(f, args, kwargs, options)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\autograph\\impl\\api.py\", line 330, in _call_unconverted\r\n return f(*args, **kwargs)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\keras\\layers\\convolutional.py\", line 387, in call\r\n return super(Conv1D, self).call(inputs)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\keras\\layers\\convolutional.py\", line 197, in call\r\n outputs = self._convolution_op(inputs, self.kernel)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\ops\\nn_ops.py\", line 1134, in __call__\r\n return self.conv_op(inp, filter)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\ops\\nn_ops.py\", line 639, in __call__\r\n return self.call(inp, filter)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\ops\\nn_ops.py\", line 238, in __call__\r\n name=self.name)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\ops\\nn_ops.py\", line 227, in _conv1d\r\n name=name)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\util\\deprecation.py\", line 574, in new_func\r\n return func(*args, **kwargs)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\util\\deprecation.py\", line 574, in new_func\r\n return func(*args, **kwargs)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\ops\\nn_ops.py\", line 1681, in conv1d\r\n name=name)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\ops\\gen_nn_ops.py\", line 1071, in conv2d\r\n data_format=data_format, dilations=dilations, name=name)\r\n File 
\"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\framework\\op_def_library.py\", line 794, in _apply_op_helper\r\n op_def=op_def)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\util\\deprecation.py\", line 507, in new_func\r\n return func(*args, **kwargs)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\framework\\ops.py\", line 3357, in create_op\r\n attrs, op_def, compute_device)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\framework\\ops.py\", line 3426, in _create_op_internal\r\n op_def=op_def)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\framework\\ops.py\", line 1748, in __init__\r\n self._traceback = tf_stack.extract_stack()\r\n\r\n2021-02-05 20:02:33.232435: W tensorflow/core/kernels/queue_base.cc:277] _1_datafeeder/eval_queue: Skipping cancelled enqueue attempt with queue not closed\r\n2021-02-05 20:02:33.232577: W tensorflow/core/kernels/queue_base.cc:277] _0_datafeeder/input_queue: Skipping cancelled enqueue attempt with queue not closed\r\n\r\n```\r\nI think it can't use the memory of my GTX 1660 super .Tell the noob what to do\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "5425557efe30863267f805851f918124191e0be0", "files": [{"path": "synthesizer/hparams.py", "Loc": {"(None, None, None)": {"mod": [243]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["synthesizer/hparams.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "77c0bd169d8158ed1cdb180cda73c24d3cacd778", "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1263", "iss_label": "", "title": "Python 3.10.12 is not supported ", "body": "When I ran python3.10 -m pip install numpy==1.20.3 on linux mint, I got an error while I was trying to install it. But it was totally fine when I used python3.8\r\n![12](https://github.com/CorentinJ/Real-Time-Voice-Cloning/assets/100217654/99071c68-bf38-4ffe-b789-9d292ed539a5)\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "77c0bd169d8158ed1cdb180cda73c24d3cacd778", "files": [{"path": "requirements.txt", "Loc": {"(None, None, None)": {"mod": [4]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc\n\u4f9d\u8d56\u58f0\u660e"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}}, {"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "c5c2261c97afe6ec5db1ef389eba1257f6cce8a2", "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/250", "iss_label": "", "title": "[Errno 2] No such file or directory: 'encoder/_sources.txt'", "body": "I have this problem, but I can't understand what does this file contain? 
There is no _sources.txt in this repo.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "c5c2261c97afe6ec5db1ef389eba1257f6cce8a2", "files": [{"path": "encoder_preprocess.py", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["encoder_preprocess.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "5e400d474043044ba0e3e907a74b4baccb16ee7c", "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/425", "iss_label": "", "title": "Tensorflow.contrib file missing what to do", "body": "", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "5e400d474043044ba0e3e907a74b4baccb16ee7c", "files": [{"path": "README.md", "Loc": {"(None, None, 35)": {"mod": [35]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0\nand\n2\nthe guidance here is in the doc\nthe cause is the version of a dependent library", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []}}, {"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "9553eaa1748cf94814be322ec7b096d2d6bc7f28", "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/419", "iss_label": "", "title": "Getting an exception when browsing for files", "body": "For some reason, importing mp3 files is not working. Anyone got an idea on why this might be the case?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "9553eaa1748cf94814be322ec7b096d2d6bc7f28", "files": [{"path": "README.md", "Loc": {"(None, None, 40)": {"mod": [40]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []}}, {"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "c5c2261c97afe6ec5db1ef389eba1257f6cce8a2", "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/221", "iss_label": "", "title": "A couple inquiries about the colab version", "body": "So I have a setup using a copy of the colaboratory version, but I want to be able to generate a few sentences at a time without having to generate per sentence.\r\n\r\nI understand that commas and periods don't work, but in the demonstration video it was mentioned that line breaks are a way to get around this for now... however, that's done in the toolbox application. How would it be done in code?\r\n\r\nI've tried \\n but I assume that's only for print-related arguments... but I'm fairly new to Python so excuse my ignorance.\r\n\r\nOn top of this, how could I improve the voice in colab? In regards to training, it's mentioned that a decent session requires around 500 GB or more... 
since I don't exactly have that in colab, is there another way to go about doing this?\r\n\r\nI've tried the code with the input being longer than 10 seconds, but apparently if the input is more than 10 seconds or so the voice seems more jittery than it would be if it were capped at 10 seconds. I absolutely applaud this repo but I just really need to understand it a bit better... Thanks in advance.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "c5c2261c97afe6ec5db1ef389eba1257f6cce8a2", "files": [{"path": "toolbox/__init__.py", "Loc": {"('Toolbox', 'synthesize', 158)": {"mod": [170, 171, 172, 173, 174, 175]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["toolbox/__init__.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "c5c2261c97afe6ec5db1ef389eba1257f6cce8a2", "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/225", "iss_label": "", "title": "Not code-savvy but want to experiment with code", "body": "I have Python Spyder downloaded, but I do not know much about coding, or how to get to the stage where I can add audio and synthesize it. What would you recommend?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "c5c2261c97afe6ec5db1ef389eba1257f6cce8a2", "files": [{"path": "requirements.txt", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}}, {"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "070a3c187f87136ebe92aa72766f8343772d414e", "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/378", "iss_label": "", "title": "I can't install NVIDIA CUDA", "body": "I can't install NVIDIA CUDA even though I followed everything that [this guide](https://poorlydocumented.com/2019/11/installing-corentinjs-real-time-voice-cloning-project-on-windows-10-from-scratch/l) told me to do. I have also tried searching for this problem on the internet, but nothing I found solves my problem. I have also provided the image of the error [here](https://imgur.com/a/fYkiBYQ).\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "070a3c187f87136ebe92aa72766f8343772d414e", "files": [{"path": "demo_cli.py", "Loc": {"(None, None, None)": {"mod": [34]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["demo_cli.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "9553eaa1748cf94814be322ec7b096d2d6bc7f28", "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/420", "iss_label": "", "title": "New Audio Issue: Assertion Failed", "body": "This was working fine yesterday, and no big changes were made.
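Back on the colab line-break question two entries above (its locator points at Toolbox.synthesize): the toolbox simply splits the prompt on real newline characters and hands the pieces to the synthesizer as a batch, so plain code can do the same. A minimal hedged sketch (model loading omitted; the commented call names the repo's Synthesizer API, but verify it against your checkout):

```python
# Sketch: replicate the toolbox's line-break trick in plain code.
# In Python source, "\n" inside a normal string literal IS a real newline,
# so a multi-line prompt can be split exactly as the toolbox does it.
import numpy as np

text = "First sentence to synthesize.\nSecond sentence.\nThird one."
texts = text.split("\n")                 # one entry per line break
embed = np.zeros(256, dtype=np.float32)  # placeholder speaker embedding
embeds = [embed] * len(texts)            # reuse one embedding for each line
# specs = synthesizer.synthesize_spectrograms(texts, embeds)  # repo API call
print(texts)
```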
\r\nHowever, today, starting up the demo toolbox showed:\r\nAssertion failed!\r\n\r\nProgram: C:\\Users\\paul1\\AppData\\Local\\Programs\\Python\\Python37\\python.exe\r\nFile: src/hostapi/wdmks/pa_win_wdmks.c, Line 1061\r\n\r\nExpression: FALSE\r\n\r\nI have tried reinstalling Visual Studio as well, but to no avail. Any thoughts on this would be deeply appreciated.\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"pro": "sounddevice"}], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "2", "info_type": "library"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["sounddevice"]}}, {"organization": "AUTOMATIC1111", "repo_name": "stable-diffusion-webui", "base_commit": "39827a3998afa3ea612e7cc9a475085765d4d509", "iss_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/5134", "iss_label": "asking-for-help-with-local-system-issues", "title": "[Bug]: No checkpoints found. Can't run without a checkpoint.", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues and checked the recent builds/commits\n\n### What happened?\n\nDuring the installation (Windows), an error occurs:\r\n```\r\nvenv \"G:\\Dev\\stable-diffusion-webui\\venv\\Scripts\\Python.exe\"\r\nPython 3.10.8 (tags/v3.10.8:aaaf517, Oct 11 2022, 16:50:30) [MSC v.1933 64 bit (AMD64)]\r\nCommit hash: 9e78d2c419732711e984c4478af15ece121d64fd\r\nInstalling requirements for Web UI\r\nLaunching Web UI with arguments:\r\nNo module 'xformers'. Proceeding without it.\r\nNo checkpoints found. When searching for checkpoints, looked at:\r\n - file G:\\Dev\\stable-diffusion-webui\\model.ckpt\r\n - directory G:\\Dev\\stable-diffusion-webui\\models\\Stable-diffusion\r\nCan't run without a checkpoint. Find and place a .ckpt file into any of those locations.
The program will exit.\r\n```\n\n### Steps to reproduce the problem\n\nLaunch webui-user.bat\n\n### What should have happened?\n\nInstallation complete\n\n### Commit where the problem happens\n\n9e78d2c419732711e984c4478af15ece121d64fd\n\n### What platforms do you use to access UI ?\n\nWindows\n\n### What browsers do you use to access the UI ?\n\nGoogle Chrome\n\n### Command Line Arguments\n\n_No response_\n\n### Additional information, context and logs\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "39827a3998afa3ea612e7cc9a475085765d4d509", "files": [{"path": "modules/sd_models.py", "Loc": {"(None, 'load_model', 230)": {"mod": []}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Config"}, "loctype": {"code": ["modules/sd_models.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "AUTOMATIC1111", "repo_name": "stable-diffusion-webui", "base_commit": "fab73f2e7d388ca99cdb3d5de7f36c0b9a1a3b1c", "iss_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/11458", "iss_label": "bug-report", "title": "[Bug]: ModuleNotFoundError: No module named 'pytorch_lightning.utilities.distributed'", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues and checked the recent builds/commits\n\n### What happened?\n\nLaunching Web UI with arguments: --share --disable-safe-unpickle --no-half-vae --xformers --enable-insecure-extension --gradio-queue\r\n2023-06-27 13:53:22.297173: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\r\nTo enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\r\n2023-06-27 13:53:23.287285: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT\r\n\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 Traceback (most recent call last) \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\r\n\u2502 /content/microsoftexcel/launch.py:38 in <module> \u2502\r\n\u2502 \u2502\r\n\u2502 35 \u2502\r\n\u2502 36 \u2502\r\n\u2502 37 if __name__ == \"__main__\": \u2502\r\n\u2502 \u2771 38 \u2502 main() \u2502\r\n\u2502 39 \u2502\r\n\u2502 \u2502\r\n\u2502 /content/microsoftexcel/launch.py:34 in main \u2502\r\n\u2502 \u2502\r\n\u2502 31 \u2502 if args.test_server: \u2502\r\n\u2502 32 \u2502 \u2502 configure_for_tests() \u2502\r\n\u2502 33 \u2502 \u2502\r\n\u2502 \u2771 34 \u2502 start() \u2502\r\n\u2502 35 \u2502\r\n\u2502 36 \u2502\r\n\u2502 37 if __name__ == \"__main__\": \u2502\r\n\u2502 \u2502\r\n\u2502 /content/microsoftexcel/modules/launch_utils.py:340 in start \u2502\r\n\u2502 \u2502\r\n\u2502 337 \u2502\r\n\u2502 338 def start(): \u2502\r\n\u2502 339 \u2502 print(f\"Launching {'API server' if '--nowebui' in sys.argv else 'W \u2502\r\n\u2502 \u2771 340 \u2502 import webui \u2502\r\n\u2502 341 \u2502 if '--nowebui' in sys.argv: \u2502\r\n\u2502 342 \u2502 \u2502 webui.api_only() \u2502\r\n\u2502 343 \u2502 else: \u2502\r\n\u2502 \u2502\r\n\u2502 /content/microsoftexcel/webui.py:42 in <module> \u2502\r\n\u2502 \u2502\r\n\u2502 39 
startup_timer.record(\"import ldm\") \u2502\r\n\u2502 40 \u2502\r\n\u2502 41 from modules import extra_networks \u2502\r\n\u2502 \u2771 42 from modules.call_queue import wrap_gradio_gpu_call, wrap_queued_call, \u2502\r\n\u2502 43 \u2502\r\n\u2502 44 # Truncate version number of nightly/local build of PyTorch to not cau \u2502\r\n\u2502 45 if \".dev\" in torch.__version__ or \"+git\" in torch.__version__: \u2502\r\n\u2502 \u2502\r\n\u2502 /content/microsoftexcel/modules/call_queue.py:5 in <module> \u2502\r\n\u2502 \u2502\r\n\u2502 2 import threading \u2502\r\n\u2502 3 import time \u2502\r\n\u2502 4 \u2502\r\n\u2502 \u2771 5 from modules import shared, progress, errors \u2502\r\n\u2502 6 \u2502\r\n\u2502 7 queue_lock = threading.Lock() \u2502\r\n\u2502 8 \u2502\r\n\u2502 \u2502\r\n\u2502 /content/microsoftexcel/modules/shared.py:18 in <module> \u2502\r\n\u2502 \u2502\r\n\u2502 15 import modules.devices as devices \u2502\r\n\u2502 16 from modules import localization, script_loading, errors, ui_component \u2502\r\n\u2502 17 from modules.paths_internal import models_path, script_path, data_path \u2502\r\n\u2502 \u2771 18 from ldm.models.diffusion.ddpm import LatentDiffusion \u2502\r\n\u2502 19 from typing import Optional \u2502\r\n\u2502 20 \u2502\r\n\u2502 21 demo = None \u2502\r\n\u2502 \u2502\r\n\u2502 /content/microsoftexcel/repositories/stable-diffusion-stability-ai/ldm/model \u2502\r\n\u2502 s/diffusion/ddpm.py:20 in <module> \u2502\r\n\u2502 \u2502\r\n\u2502 17 import itertools \u2502\r\n\u2502 18 from tqdm import tqdm \u2502\r\n\u2502 19 from torchvision.utils import make_grid \u2502\r\n\u2502 \u2771 20 from pytorch_lightning.utilities.distributed import rank_zero_only \u2502\r\n\u2502 21 from omegaconf import ListConfig \u2502\r\n\u2502 22 \u2502\r\n\u2502 23 from ldm.util import log_txt_as_img, exists, default, ismap, isimage, \u2502\r\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\r\nModuleNotFoundError: No module named 'pytorch_lightning.utilities.distributed'\n\n### Steps to reproduce the problem\n\n1. on colab\r\n2. try to use the new 1.4.0 release\r\n3. 
error\n\n### What should have happened?\n\nno error\n\n### Version or Commit where the problem happens\n\n1.4.0\n\n### What Python version are you running on ?\n\nNone\n\n### What platforms do you use to access the UI ?\n\nOther/Cloud\n\n### What device are you running WebUI on?\n\n_No response_\n\n### Cross attention optimization\n\nAutomatic\n\n### What browsers do you use to access the UI ?\n\nGoogle Chrome\n\n### Command Line Arguments\n\n```Shell\n!COMMANDLINE_ARGS=\"--share --disable-safe-unpickle --no-half-vae --xformers --enable-insecure-extension --gradio-queue\" REQS_FILE=\"requirements.txt\" python launch.py\n```\n\n\n### List of extensions\n\nsd-webui-tunnels\r\ncontrolnet\r\nopenpose-editor\r\nposex\r\na1111-sd-webui-tagcomplete\r\nsupermerger\r\nultimate-upscale-for-automatic1111\r\na111 locon extension\r\nimages browser\r\n\n\n### Console logs\n\n```Shell\n**truncated on colab**\r\n\r\n[curl download progress, archive extraction listings, and extension clone logs omitted]\r\n/content/microsoftexcel\r\nfatal: not a git repository (or any of the parent directories): .git\r\nfatal: not a git repository (or any of the parent directories): .git\r\nPython 3.10.12 (main, Jun 7 2023, 12:45:35) [GCC 9.4.0]\r\nVersion: ## 1.4.0\r\nCommit hash: <none>\r\nInstalling gfpgan\r\nInstalling clip\r\nInstalling open_clip\r\nInstalling xformers\r\nCloning Stable Diffusion into /content/microsoftexcel/repositories/stable-diffusion-stability-ai...\r\nCloning K-diffusion into /content/microsoftexcel/repositories/k-diffusion...\r\nCloning CodeFormer into /content/microsoftexcel/repositories/CodeFormer...\r\nCloning BLIP into /content/microsoftexcel/repositories/BLIP...\r\nInstalling requirements for CodeFormer\r\nInstalling requirements\r\nInstalling sd-webui-controlnet requirement: mediapipe\r\nInstalling sd-webui-controlnet requirement: svglib\r\nInstalling sd-webui-controlnet requirement: fvcore\r\n\r\nInstalling pycloudflared\r\n\r\nInstalling diffusers\r\n\r\nLaunching Web UI with arguments: --share --disable-safe-unpickle --no-half-vae --xformers --enable-insecure-extension --gradio-queue\r\n2023-06-27 13:53:22.297173: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\r\nTo enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\r\n2023-06-27 13:53:23.287285: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT\r\n\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 Traceback (most recent call last) 
\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\r\n\u2502 /content/microsoftexcel/launch.py:38 in <module> \u2502\r\n\u2502 \u2502\r\n\u2502 35 \u2502\r\n\u2502 36 \u2502\r\n\u2502 37 if __name__ == \"__main__\": \u2502\r\n\u2502 \u2771 38 \u2502 main() \u2502\r\n\u2502 39 \u2502\r\n\u2502 \u2502\r\n\u2502 /content/microsoftexcel/launch.py:34 in main \u2502\r\n\u2502 \u2502\r\n\u2502 31 \u2502 if args.test_server: \u2502\r\n\u2502 32 \u2502 \u2502 configure_for_tests() \u2502\r\n\u2502 33 \u2502 \u2502\r\n\u2502 \u2771 34 \u2502 start() \u2502\r\n\u2502 35 \u2502\r\n\u2502 36 \u2502\r\n\u2502 37 if __name__ == \"__main__\": \u2502\r\n\u2502 \u2502\r\n\u2502 /content/microsoftexcel/modules/launch_utils.py:340 in start \u2502\r\n\u2502 \u2502\r\n\u2502 337 \u2502\r\n\u2502 338 def start(): \u2502\r\n\u2502 339 \u2502 print(f\"Launching {'API server' if '--nowebui' in sys.argv else 'W \u2502\r\n\u2502 \u2771 340 \u2502 import webui \u2502\r\n\u2502 341 \u2502 if '--nowebui' in sys.argv: \u2502\r\n\u2502 342 \u2502 \u2502 webui.api_only() \u2502\r\n\u2502 343 \u2502 else: \u2502\r\n\u2502 \u2502\r\n\u2502 /content/microsoftexcel/webui.py:42 in <module> \u2502\r\n\u2502 \u2502\r\n\u2502 39 startup_timer.record(\"import ldm\") \u2502\r\n\u2502 40 \u2502\r\n\u2502 41 from modules import extra_networks \u2502\r\n\u2502 \u2771 42 from modules.call_queue import wrap_gradio_gpu_call, wrap_queued_call, \u2502\r\n\u2502 43 \u2502\r\n\u2502 44 # Truncate version number of nightly/local build of PyTorch to not cau \u2502\r\n\u2502 45 if \".dev\" in torch.__version__ or \"+git\" in torch.__version__: \u2502\r\n\u2502 \u2502\r\n\u2502 /content/microsoftexcel/modules/call_queue.py:5 in <module> \u2502\r\n\u2502 \u2502\r\n\u2502 2 import threading \u2502\r\n\u2502 3 import time \u2502\r\n\u2502 4 \u2502\r\n\u2502 \u2771 5 from modules import shared, progress, errors \u2502\r\n\u2502 6 \u2502\r\n\u2502 7 queue_lock = threading.Lock() \u2502\r\n\u2502 8 \u2502\r\n\u2502 \u2502\r\n\u2502 /content/microsoftexcel/modules/shared.py:18 in <module> \u2502\r\n\u2502 \u2502\r\n\u2502 15 import modules.devices as devices \u2502\r\n\u2502 16 from modules import localization, script_loading, errors, ui_component \u2502\r\n\u2502 17 from modules.paths_internal import models_path, script_path, data_path \u2502\r\n\u2502 \u2771 18 from ldm.models.diffusion.ddpm import LatentDiffusion \u2502\r\n\u2502 19 from typing import Optional \u2502\r\n\u2502 20 \u2502\r\n\u2502 21 demo = None \u2502\r\n\u2502 \u2502\r\n\u2502 /content/microsoftexcel/repositories/stable-diffusion-stability-ai/ldm/model \u2502\r\n\u2502 s/diffusion/ddpm.py:20 in <module> \u2502\r\n\u2502 \u2502\r\n\u2502 17 import itertools \u2502\r\n\u2502 18 from tqdm import tqdm \u2502\r\n\u2502 19 from torchvision.utils import make_grid \u2502\r\n\u2502 \u2771 20 from pytorch_lightning.utilities.distributed import rank_zero_only \u2502\r\n\u2502 21 from omegaconf import ListConfig \u2502\r\n\u2502 22 \u2502\r\n\u2502 23 from ldm.util import log_txt_as_img, exists, default, ismap, isimage, 
\u2502\r\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\r\nModuleNotFoundError: No module named 'pytorch_lightning.utilities.distributed'\n```\n\n\n### Additional information\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "fab73f2e7d388ca99cdb3d5de7f36c0b9a1a3b1c", "files": [{"path": "extensions-builtin/LDSR/sd_hijack_ddpm_v1.py", "Loc": {"(None, None, None)": {"mod": [17]}}, "status": "modified"}, {"path": "modules/models/diffusion/ddpm_edit.py", "Loc": {"(None, None, None)": {"mod": [22]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["modules/models/diffusion/ddpm_edit.py", "extensions-builtin/LDSR/sd_hijack_ddpm_v1.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "AUTOMATIC1111", "repo_name": "stable-diffusion-webui", "base_commit": "ef4c94e1cfe66299227aa95a28c2380d21cb1600", "iss_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/3902", "iss_label": "", "title": "[Feature Request]: ", "body": "Finer control of CFG Scale? now it goes by 0.5 steps. I'm trying to replicate work i did on other app which have CFG scale control by 0.1. i cannot get the same result, of course. \r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [], "ass_file_loc": ["ui-config.json"], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "5", "loc_way": "comment", "loc_scope": "1", "info_type": "Config"}, "loctype": {"code": ["ui-config.json"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "AUTOMATIC1111", "repo_name": "stable-diffusion-webui", "base_commit": "bf30673f5132c8f28357b31224c54331e788d3e7", "iss_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/3301", "iss_label": "bug-report", "title": "Expected all tensors to be on the same device", "body": "RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! 
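The `ModuleNotFoundError` in the traceback above comes from pytorch_lightning 2.x, which removed the `pytorch_lightning.utilities.distributed` module; the record's file_loc points at the two affected import sites (`extensions-builtin/LDSR/sd_hijack_ddpm_v1.py:17`, `modules/models/diffusion/ddpm_edit.py:22`). A minimal sketch of the kind of compatibility shim such an import site needs, assuming pytorch_lightning ≥ 1.9 exposes `rank_zero_only` under `utilities.rank_zero`:

```python
# Compatibility shim: pytorch_lightning 2.x removed utilities.distributed;
# rank_zero_only now lives in pytorch_lightning.utilities.rank_zero.
try:
    from pytorch_lightning.utilities.distributed import rank_zero_only  # PL < 2.0
except ImportError:
    from pytorch_lightning.utilities.rank_zero import rank_zero_only    # PL >= 1.9 / 2.x
```

Pinning `pytorch-lightning<2` in the environment is the other common workaround when the repository files cannot be patched.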
(when checking argument for argument index in method wrapper__index_select)\r\n\r\nhow to pick the CUDA:0 \uff1f", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "bf30673f5132c8f28357b31224c54331e788d3e7", "files": [{"path": "requirements.txt", "Loc": {"(None, None, 17)": {"mod": [17]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc\n\u4f9d\u8d56\u58f0\u660e"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}}, {"organization": "AUTOMATIC1111", "repo_name": "stable-diffusion-webui", "base_commit": "39919c40dd18f5a14ae21403efea1b0f819756c7", "iss_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/2190", "iss_label": "bug-report", "title": "How to use .ckpt model on repo", "body": "Hello everyone!\r\n\r\nI was able to train a custom model using Dreambooth and I now have a custom ckpt trained on myself. Where do I put this file to be able to use it in this repo?\r\n\r\nI dropped in into models but not sure what to do next?\r\n\r\nAppreciate any help", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "39919c40dd18f5a14ae21403efea1b0f819756c7", "files": [{"path": "models/Stable-diffusion", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["models/Stable-diffusion"]}}, {"organization": "AUTOMATIC1111", "repo_name": "stable-diffusion-webui", "base_commit": "556c36b9607e3f4eacdddc85f8e7a78b29476ea7", "iss_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/1614", "iss_label": "enhancement", "title": "Feature request: GPU temperature control ", "body": "**Is your feature request related to a problem? 
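For the `Expected all tensors to be on the same device` report above (whose record points at a `requirements.txt` change), the generic PyTorch remedy is to move every tensor involved in the failing op onto one device. A self-contained sketch reproducing and fixing the `wrapper__index_select` case with an embedding lookup; the variable names are illustrative, not taken from the webui code:

```python
import torch

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# index_select-backed ops (e.g. embedding lookups) raise this RuntimeError when
# the weight lives on cuda:0 but the index tensor is still on cpu.
embedding = torch.nn.Embedding(10, 4).to(device)
indices = torch.tensor([1, 2, 3])        # created on cpu by default
output = embedding(indices.to(device))   # moving the indices resolves the mismatch
print(output.shape)                      # torch.Size([3, 4])
```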
Please describe.**\r\nI don't like 85 degrees (Celsius) on my GPU, especially if it lasts more than 30 minutes or even 1 hour\r\n\r\n**Describe the solution you'd like**\r\nIf temp on a GPU is more than {maxTemp} and it lasts {accumulateTempTime} it will pause processing for {cooldownTime} or until it cools to {minTemp}, so my GPU won't end up with exploding\r\n\r\n**Describe alternatives you've considered**\r\nNot pausing, but lowering the activity to a few tens of seconds per step.\r\n\r\n**Additional context**\r\nNot lowering it in hard core, but smartly lowering activity (using sth similar to PID), so the temp will stay at {desiredTemp}\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "w-e-w", "pro": "stable-diffusion-webui-GPU-temperature-protection"}], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "comment", "loc_scope": "2", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["stable-diffusion-webui-GPU-temperature-protection"]}}, {"organization": "python", "repo_name": "cpython", "base_commit": "c40b7afee28fb928fdc3f07a9a7e9d4ef17347ba", "iss_html_url": "https://github.com/python/cpython/issues/39472", "iss_label": "docs", "title": "Wrong reference for specific minidom methods", "body": "BPO | [832251](https://bugs.python.org/issue832251)\n--- | :---\nNosy | @freddrake\n\n<sup>*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup>\n\n<details><summary>Show more details</summary><p>\n\nGitHub fields:\n```python\nassignee = 'https://github.com/freddrake'\nclosed_at = <Date 2004-04-01.04:19:08.000>\ncreated_at = <Date 2003-10-29.09:39:39.000>\nlabels = ['docs']\ntitle = 'Wrong reference for specific minidom methods'\nupdated_at = <Date 2004-04-01.04:19:08.000>\nuser = 'https://bugs.python.org/nerby'\n```\n\nbugs.python.org fields:\n```python\nactivity = <Date 2004-04-01.04:19:08.000>\nactor = 'fdrake'\nassignee = 'fdrake'\nclosed = True\nclosed_date = None\ncloser = None\ncomponents = ['Documentation']\ncreation = <Date 2003-10-29.09:39:39.000>\ncreator = 'nerby'\ndependencies = []\nfiles = []\nhgrepos = []\nissue_num = 832251\nkeywords = []\nmessage_count = 3.0\nmessages = ['18799', '18800', '18801']\nnosy_count = 2.0\nnosy_names = ['fdrake', 'nerby']\npr_nums = []\npriority = 'high'\nresolution = 'fixed'\nstage = None\nstatus = 'closed'\nsuperseder = None\ntype = None\nurl = 'https://bugs.python.org/issue832251'\nversions = ['Python 2.3']\n```\n\n</p></details>\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "c40b7afee28fb928fdc3f07a9a7e9d4ef17347ba", "files": [{"path": "Doc/lib/xmldomminidom.tex", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2\ndoc\u95ee\u9898", "iss_reason": "2\ndoc\u9519\u8bef\uff0c\u4e0d\u662fbug", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["Doc/lib/xmldomminidom.tex"]}}, {"organization": "python", "repo_name": "cpython", "base_commit": "5a65c2d43607a5033d7171445848cde21f07d81d", "iss_html_url": "https://github.com/python/cpython/issues/32681", "iss_label": "interpreter-core", "title": ".pyc writing/reading race condition (PR#189)", "body": "BPO | [210610](https://bugs.python.org/issue210610)\n--- | :---\nNosy | @gvanrossum\n\n<sup>*Note: these values 
reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup>\n\n<details><summary>Show more details</summary><p>\n\nGitHub fields:\n```python\nassignee = 'https://github.com/gvanrossum'\nclosed_at = <Date 2000-09-20.20:33:21.000>\ncreated_at = <Date 2000-07-31.21:05:42.000>\nlabels = ['interpreter-core']\ntitle = '.pyc writing/reading race condition (PR#189)'\nupdated_at = <Date 2000-09-20.20:33:21.000>\nuser = 'https://bugs.python.org/anonymous'\n```\n\nbugs.python.org fields:\n```python\nactivity = <Date 2000-09-20.20:33:21.000>\nactor = 'gvanrossum'\nassignee = 'gvanrossum'\nclosed = True\nclosed_date = None\ncloser = None\ncomponents = ['Interpreter Core']\ncreation = <Date 2000-07-31.21:05:42.000>\ncreator = 'anonymous'\ndependencies = []\nfiles = []\nhgrepos = []\nissue_num = 210610\nkeywords = []\nmessage_count = 4.0\nmessages = ['66', '67', '68', '69']\nnosy_count = 2.0\nnosy_names = ['gvanrossum', 'jhylton']\npr_nums = []\npriority = 'low'\nresolution = 'fixed'\nstage = None\nstatus = 'closed'\nsuperseder = None\ntype = None\nurl = 'https://bugs.python.org/issue210610'\nversions = []\n```\n\n</p></details>\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "5a65c2d43607a5033d7171445848cde21f07d81d", "files": [{"path": "Doc/library/os.rst", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": ["fcntl.h"], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "3", "loc_way": "comment", "loc_scope": "2", "info_type": "Code"}, "loctype": {"code": ["fcntl.h"], "doc": ["Doc/library/os.rst"], "test": [], "config": [], "asset": []}}, {"organization": "python", "repo_name": "cpython", "base_commit": "adf03c3544084359d89e7a0bc2a5aa0561f1a0f2", "iss_html_url": "https://github.com/python/cpython/issues/68620", "iss_label": "stdlib\nrelease-blocker", "title": "Upgrade windows builds to use OpenSSL 1.0.2c", "body": "BPO | [24432](https://bugs.python.org/issue24432)\n--- | :---\nNosy | @pfmoore, @pitrou, @larryhastings, @giampaolo, @tiran, @tjguk, @benjaminp, @ned-deily, @alex, @bitdancer, @zware, @zooba, @dstufft\n\n<sup>*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup>\n\n<details><summary>Show more details</summary><p>\n\nGitHub fields:\n```python\nassignee = 'https://github.com/zooba'\nclosed_at = <Date 2015-07-03.22:28:01.834>\ncreated_at = <Date 2015-06-11.15:05:25.361>\nlabels = ['library', 'release-blocker']\ntitle = 'Upgrade windows builds to use OpenSSL 1.0.2c'\nupdated_at = <Date 2015-07-04.06:47:41.096>\nuser = 'https://github.com/alex'\n```\n\nbugs.python.org fields:\n```python\nactivity = <Date 2015-07-04.06:47:41.096>\nactor = 'python-dev'\nassignee = 'steve.dower'\nclosed = True\nclosed_date = <Date 2015-07-03.22:28:01.834>\ncloser = 'steve.dower'\ncomponents = ['Library (Lib)']\ncreation = <Date 2015-06-11.15:05:25.361>\ncreator = 'alex'\ndependencies = []\nfiles = []\nhgrepos = []\nissue_num = 24432\nkeywords = ['security_issue']\nmessage_count = 29.0\nmessages = ['245173', '245178', '245283', '246116', '246133', '246136', '246143', '246172', '246182', '246185', '246189', '246190', '246195', '246205', '246209', '246210', '246211', '246212', '246213', '246214', '246215', '246216', '246221', '246222', '246224', '246225', '246227', '246228', '246240']\nnosy_count = 15.0\nnosy_names = ['paul.moore', 'janssen', 'pitrou', 'larry', 'giampaolo.rodola', 'christian.heimes', 'tim.golden', 'benjamin.peterson', 
'ned.deily', 'alex', 'r.david.murray', 'python-dev', 'zach.ware', 'steve.dower', 'dstufft']\npr_nums = []\npriority = 'release blocker'\nresolution = 'fixed'\nstage = 'resolved'\nstatus = 'closed'\nsuperseder = None\ntype = None\nurl = 'https://bugs.python.org/issue24432'\nversions = ['Python 2.7', 'Python 3.4', 'Python 3.5', 'Python 3.6']\n```\n\n</p></details>\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "adf03c3544084359d89e7a0bc2a5aa0561f1a0f2", "files": [{"path": "PCbuild/get_externals.bat", "Loc": {"(None, None, 57)": {"mod": [57]}}, "status": "modified"}, {"path": "PCbuild/python.props", "Loc": {"(None, None, 37)": {"mod": [37]}}, "status": "modified"}, {"path": "PCbuild/readme.txt", "Loc": {"(None, None, 200)": {"mod": [200]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": ["PCbuild/readme.txt"], "test": [], "config": ["PCbuild/get_externals.bat", "PCbuild/python.props"], "asset": []}}, {"organization": "python", "repo_name": "cpython", "base_commit": "5198a5c7aa77367765ae03542b561845094ca30d", "iss_html_url": "https://github.com/python/cpython/issues/48435", "iss_label": "type-bug\nstdlib\ntopic-regex", "title": "re module treats raw strings as normal strings", "body": "BPO | [4185](https://bugs.python.org/issue4185)\n--- | :---\nNosy | @gvanrossum, @loewis, @akuchling, @birkenfeld, @ezio-melotti\nFiles | <li>[raw-strings-with-re.txt](https://bugs.python.org/file11868/raw-strings-with-re.txt \"Uploaded as text/plain at 2008-10-23.03:55:27 by @ezio-melotti\"): Interactive Python session with more examples</li>\n\n<sup>*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup>\n\n<details><summary>Show more details</summary><p>\n\nGitHub fields:\n```python\nassignee = 'https://github.com/akuchling'\nclosed_at = <Date 2009-01-01.12:00:35.699>\ncreated_at = <Date 2008-10-23.03:55:28.615>\nlabels = ['expert-regex', 'type-bug', 'library']\ntitle = 're module treats raw strings as normal strings'\nupdated_at = <Date 2009-01-01.12:00:35.697>\nuser = 'https://github.com/ezio-melotti'\n```\n\nbugs.python.org fields:\n```python\nactivity = <Date 2009-01-01.12:00:35.697>\nactor = 'georg.brandl'\nassignee = 'akuchling'\nclosed = True\nclosed_date = <Date 2009-01-01.12:00:35.699>\ncloser = 'georg.brandl'\ncomponents = ['Library (Lib)', 'Regular Expressions']\ncreation = <Date 2008-10-23.03:55:28.615>\ncreator = 'ezio.melotti'\ndependencies = []\nfiles = ['11868']\nhgrepos = []\nissue_num = 4185\nkeywords = []\nmessage_count = 8.0\nmessages = ['75133', '75134', '75135', '75760', '77502', '77562', '77575', '78699']\nnosy_count = 5.0\nnosy_names = ['gvanrossum', 'loewis', 'akuchling', 'georg.brandl', 'ezio.melotti']\npr_nums = []\npriority = 'normal'\nresolution = 'fixed'\nstage = None\nstatus = 'closed'\nsuperseder = None\ntype = 'behavior'\nurl = 'https://bugs.python.org/issue4185'\nversions = ['Python 2.6', 'Python 2.5', 'Python 2.4']\n```\n\n</p></details>\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "5198a5c7aa77367765ae03542b561845094ca30d", "files": [{"path": "Doc/library/re.rst", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2\nor\n3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", 
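The `re` raw-string report above reduces to the fact that raw strings change only how Python parses the literal, never how `re` interprets the resulting pattern; a short illustration:

```python
import re

# Raw strings only affect Python's escape processing, not re's:
# '\\d' and r'\d' produce the same two characters, so re sees the same pattern.
assert re.search('\\d', 'a1').group() == re.search(r'\d', 'a1').group() == '1'

# The difference matters when an escape is meaningful to Python itself:
# '\b' is a backspace character (0x08), while r'\b' is re's word-boundary token.
assert re.search(r'\bword\b', 'a word here') is not None
assert re.search('\bword\b', 'a word here') is None
```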
"info_type": "Doc"}, "loctype": {"code": [], "doc": ["Doc/library/re.rst"], "test": [], "config": [], "asset": []}}, {"organization": "THUDM", "repo_name": "ChatGLM-6B", "base_commit": "ab6bcb4968bef335175c0b01972657961b2b1250", "iss_html_url": "https://github.com/THUDM/ChatGLM-6B/issues/565", "iss_label": "", "title": "[BUG/Help] <title>\u4f7f\u7528ptuning\u5fae\u8c03\u65f6\u62a5\u9519RuntimeError: Internal: src/sentencepiece_processor.cc(1101) [model_proto->ParseFromArray(serialized.data(), serialized.size())] ", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Current Behavior\n\nTraceback (most recent call last):\r\n File \"main.py\", line 429, in <module>\r\n main()\r\n File \"main.py\", line 112, in main\r\n tokenizer = AutoTokenizer.from_pretrained(model_args.model_name_or_path, trust_remote_code=True)\r\n File \"/root/miniconda3/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py\", line 679, in from_pretrained\r\n return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)\r\n File \"/root/miniconda3/lib/python3.8/site-packages/transformers/tokenization_utils_base.py\", line 1804, in from_pretrained\r\n return cls._from_pretrained(\r\n File \"/root/miniconda3/lib/python3.8/site-packages/transformers/tokenization_utils_base.py\", line 1958, in _from_pretrained\r\n tokenizer = cls(*init_inputs, **init_kwargs)\r\n File \"/root/.cache/huggingface/modules/transformers_modules/chatglm-6b/tokenization_chatglm.py\", line 205, in __init__\r\n self.sp_tokenizer = SPTokenizer(vocab_file, num_image_tokens=num_image_tokens)\r\n File \"/root/.cache/huggingface/modules/transformers_modules/chatglm-6b/tokenization_chatglm.py\", line 61, in __init__\r\n self.text_tokenizer = TextTokenizer(vocab_file)\r\n File \"/root/.cache/huggingface/modules/transformers_modules/chatglm-6b/tokenization_chatglm.py\", line 22, in __init__\r\n self.sp.Load(model_path)\r\n File \"/root/miniconda3/lib/python3.8/site-packages/sentencepiece/__init__.py\", line 905, in Load\r\n return self.LoadFromFile(model_file)\r\n File \"/root/miniconda3/lib/python3.8/site-packages/sentencepiece/__init__.py\", line 310, in LoadFromFile\r\n return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)\r\nRuntimeError: Internal: src/sentencepiece_processor.cc(1101) [model_proto->ParseFromArray(serialized.data(), serialized.size())] \n\n### Expected Behavior\n\n_No response_\n\n### Steps To Reproduce\n\n\u4f7f\u7528ptuning\u5fae\u8c03\u65f6\u62a5\u9519\uff0c\u5df2\u7ecf\u662f\u6700\u65b0\u7248\u7684\u6a21\u578b\u6587\u4ef6\u4e86\n\n### Environment\n\n```markdown\nPyTorch 1.11.0\r\nPython 3.8(ubuntu20.04)\r\nCuda 11.3\n```\n\n\n### Anything else?\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [], "ass_file_loc": ["ice_text.model"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "2", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["ice_text.model"]}}, {"organization": "THUDM", "repo_name": "ChatGLM-6B", "base_commit": "801b1bb57690f0a99943f0a80c839b9ee120f3a7", "iss_html_url": "https://github.com/THUDM/ChatGLM-6B/issues/388", "iss_label": "", "title": "\u4e3a\u4ec0\u4e48\u4e0d\u80fd\u7528\u5171\u4eabGPU\u5185\u5b58\u5462[Feature] <title>", "body": "### Is your feature request related to a problem? 
Please describe.\n\n\u4e3a\u4ec0\u4e48\u4e0d\u80fd\u7528\u5171\u4eabGPU\u5185\u5b58\u5462\r\n\u4e13\u75286G\u90fd\u6ee1\u4e86\u4f46\u662f\u5171\u4eabGPU\u5185\u5b58\u4e00\u70b9\u90fd\u6ca1\u52a8\r\n\n\n### Solutions\n\nemm\n\n### Additional context\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "Jittor", "pro": "JittorLLMs"}], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "2", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["JittorLLMs"]}}, {"organization": "THUDM", "repo_name": "ChatGLM-6B", "base_commit": "afe08a19ccadc8b238c218b245bb4c1c62598588", "iss_html_url": "https://github.com/THUDM/ChatGLM-6B/issues/770", "iss_label": "", "title": "RuntimeError: Internal: src/sentencepiece_processor.cc(1101) [model_proto->ParseFromArray(serialized.data(), serialized.size())] ", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Current Behavior\n\n\u8fd0\u884cpython cli_demo.py\u62a5\u9519\r\n\r\nroot@4uot40mdrplpv-0:/yx/ChatGLM-6B# python mycli_demo.py\r\nExplicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.\r\nTraceback (most recent call last):\r\n File \"/yx/ChatGLM-6B/mycli_demo.py\", line 6, in <module>\r\n tokenizer = AutoTokenizer.from_pretrained(\"/yx/ChatGLM-6B/THUDM/chatglm-6b\", trust_remote_code=True)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/lib/python3.11/site-packages/transformers/models/auto/tokenization_auto.py\", line 679, in from_pretrained\r\n return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/lib/python3.11/site-packages/transformers/tokenization_utils_base.py\", line 1804, in from_pretrained\r\n return cls._from_pretrained(\r\n ^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/lib/python3.11/site-packages/transformers/tokenization_utils_base.py\", line 1958, in _from_pretrained\r\n tokenizer = cls(*init_inputs, **init_kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/root/.cache/huggingface/modules/transformers_modules/chatglm-6b/tokenization_chatglm.py\", line 205, in __init__\r\n self.sp_tokenizer = SPTokenizer(vocab_file, num_image_tokens=num_image_tokens)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/root/.cache/huggingface/modules/transformers_modules/chatglm-6b/tokenization_chatglm.py\", line 61, in __init__\r\n self.text_tokenizer = TextTokenizer(vocab_file)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/root/.cache/huggingface/modules/transformers_modules/chatglm-6b/tokenization_chatglm.py\", line 22, in __init__\r\n self.sp.Load(model_path)\r\n File \"/usr/local/lib/python3.11/site-packages/sentencepiece/__init__.py\", line 905, in Load\r\n return self.LoadFromFile(model_file)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/lib/python3.11/site-packages/sentencepiece/__init__.py\", line 310, in LoadFromFile\r\n return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nRuntimeError: Internal: src/sentencepiece_processor.cc(1101) [model_proto->ParseFromArray(serialized.data(), serialized.size())] 
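Both sentencepiece failures above (`src/sentencepiece_processor.cc(1101)` failing in `ParseFromArray`) typically mean the `ice_text.model` file on disk is truncated or is a Git-LFS pointer rather than the real tokenizer model. A small sanity check, with a hypothetical local checkout path:

```python
import os
from sentencepiece import SentencePieceProcessor

model_path = "THUDM/chatglm-6b/ice_text.model"  # hypothetical local path

# A Git-LFS pointer file is only ~130 bytes; the real model is several MB.
print(os.path.getsize(model_path), "bytes")

sp = SentencePieceProcessor()
sp.Load(model_path)  # raises the same Internal error if the file is corrupt
```

Re-downloading the model file (e.g. with `git lfs pull` inside the checkout) is the usual remedy.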
\r\n\r\n\u6211\u662f\u5728docker\u4e2d\u8fd0\u884c\u7684, \u9ebb\u70e6\u770b\u770b\u662f\u600e\u4e48\u56de\u4e8b, \u8c22\u8c22\n\n### Expected Behavior\n\n_No response_\n\n### Steps To Reproduce\n\nhelp\n\n### Environment\n\n```markdown\n- OS:Red Hat 4.8.5-44\r\n- Python:3.11\r\n- Transformers:4.27.1\r\n- PyTorch:2.0\r\n- CUDA Support (`python -c \"import torch; print(torch.cuda.is_available())\"`) :False\n```\n\n\n### Anything else?\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [], "ass_file_loc": ["ice_text.model"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "2", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["ice_text.model"]}}, {"organization": "THUDM", "repo_name": "ChatGLM-6B", "base_commit": "d11eb5213e3c17225b47bb806a120dd45a80b126", "iss_html_url": "https://github.com/THUDM/ChatGLM-6B/issues/63", "iss_label": "", "title": "How to fix error like this: torch.cuda.OutOfMemoryError: CUDA out of memory ?", "body": "OS: ubuntu 20.04\r\nThe error message said we need to change value of max_split_size_mb, but I search source code and cannot find any file contains max_split_size_mb, would you please provide some guidance about how to fix?\r\n```\r\nLoading checkpoint shards: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 8/8 [00:16<00:00, 2.09s/it]\r\nTraceback (most recent call last):\r\n File \"/home/zhangclb/sandbox/ai_llm/ChatGLM-6B/cli_demo.py\", line 6, in <module>\r\n model = AutoModel.from_pretrained(\"THUDM/chatglm-6b\", trust_remote_code=True).half().cuda()\r\n File \"/home/zhangclb/.local/env39/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 749, in cuda\r\n return self._apply(lambda t: t.cuda(device))\r\n File \"/home/zhangclb/.local/env39/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 641, in _apply\r\n module._apply(fn)\r\n File \"/home/zhangclb/.local/env39/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 641, in _apply\r\n module._apply(fn)\r\n File \"/home/zhangclb/.local/env39/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 641, in _apply\r\n module._apply(fn)\r\n [Previous line repeated 2 more times]\r\n File \"/home/zhangclb/.local/env39/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 664, in _apply\r\n param_applied = fn(param)\r\n File \"/home/zhangclb/.local/env39/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 749, in <lambda>\r\n return 
self._apply(lambda t: t.cuda(device))\r\ntorch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 128.00 MiB (GPU 0; 1.83 GiB total capacity; 1.27 GiB already allocated; 57.19 MiB free; 1.28 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF\r\n\r\n```", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "d11eb5213e3c17225b47bb806a120dd45a80b126", "files": [{"path": "cli_demo.py", "Loc": {"(None, None, None)": {"mod": [6]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["cli_demo.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "THUDM", "repo_name": "ChatGLM-6B", "base_commit": "a9fc0184446fba7f4f27addf519fea0b371df83a", "iss_html_url": "https://github.com/THUDM/ChatGLM-6B/issues/417", "iss_label": "", "title": "[Help] <title> Oracle Linux 7.9 \u8fd0\u884cint4\u6a21\u578b\u51fa\u9519\uff0cAttributeError: 'NoneType' object has no attribute 'int4WeightExtractionFloat'", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Current Behavior\n\n/x/home/chatglm_env/lib/python3.7/site-packages/requests/__init__.py:104: RequestsDependencyWarning: urllib3 (1.26.14) or chardet (5.1.0)/charset_normalizer (2.0.12) doesn't match a supported version!\r\n RequestsDependencyWarning)\r\nExplicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.\r\n/x/home/chatglm_env/lib/python3.7/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: libc10_cuda.so: cannot open shared object file: No such file or directory\r\n warn(f\"Failed to load image Python extension: {e}\")\r\nExplicitly passing a `revision` is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.\r\nExplicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.\r\nNo compiled kernel found.\r\nCompiling kernels : /x/home/.cache/huggingface/modules/transformers_modules/local/quantization_kernels_parallel.c\r\nCompiling gcc -O3 -fPIC -pthread -fopenmp -std=c99 /x/home/.cache/huggingface/modules/transformers_modules/local/quantization_kernels_parallel.c -shared -o /x/home/.cache/huggingface/modules/transformers_modules/local/quantization_kernels_parallel.so\r\nsh: gcc: command not found\r\nCompile failed, using default cpu kernel code.\r\nCompiling gcc -O3 -fPIC -std=c99 /x/home/.cache/huggingface/modules/transformers_modules/local/quantization_kernels.c -shared -o /x/home/.cache/huggingface/modules/transformers_modules/local/quantization_kernels.so\r\nKernels compiled : /x/home/.cache/huggingface/modules/transformers_modules/local/quantization_kernels.so\r\nCannot load cpu kernel, don't use quantized model on cpu.\r\nUsing quantization cache\r\nApplying quantization to glm layers\r\nTraceback (most recent call last):\r\n File \"chatglm-int4-demo.py\", line 8, in <module>\r\n response, history = model.chat(tokenizer, '\u4f60\u597d', history=[])\r\n File 
\"/x/home/chatglm_env/lib/python3.7/site-packages/torch/autograd/grad_mode.py\", line 27, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/x/home/.cache/huggingface/modules/transformers_modules/local/modeling_chatglm.py\", line 1137, in chat\r\n outputs = self.generate(**input_ids, **gen_kwargs)\r\n File \"/x/home/chatglm_env/lib/python3.7/site-packages/torch/autograd/grad_mode.py\", line 27, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/x/home/chatglm_env/lib/python3.7/site-packages/transformers/generation/utils.py\", line 1447, in generate\r\n **model_kwargs,\r\n File \"/x/home/chatglm_env/lib/python3.7/site-packages/transformers/generation/utils.py\", line 2447, in sample\r\n output_hidden_states=output_hidden_states,\r\n File \"/x/home/chatglm_env/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 1194, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/x/home/.cache/huggingface/modules/transformers_modules/local/modeling_chatglm.py\", line 1051, in forward\r\n return_dict=return_dict,\r\n File \"/x/home/chatglm_env/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 1194, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/x/home/.cache/huggingface/modules/transformers_modules/local/modeling_chatglm.py\", line 887, in forward\r\n output_attentions=output_attentions\r\n File \"/x/home/chatglm_env/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 1194, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/x/home/.cache/huggingface/modules/transformers_modules/local/modeling_chatglm.py\", line 588, in forward\r\n output_attentions=output_attentions\r\n File \"/x/home/chatglm_env/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 1194, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/x/home/.cache/huggingface/modules/transformers_modules/local/modeling_chatglm.py\", line 406, in forward\r\n mixed_raw_layer = self.query_key_value(hidden_states)\r\n File \"/x/home/chatglm_env/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 1194, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/x/home/.cache/huggingface/modules/transformers_modules/local/quantization.py\", line 334, in forward\r\n output = W8A16LinearCPU.apply(input, self.weight, self.weight_scale, self.weight_bit_width, self.quantization_cache)\r\n File \"/x/home/.cache/huggingface/modules/transformers_modules/local/quantization.py\", line 74, in forward\r\n weight = extract_weight_to_float(quant_w, scale_w, weight_bit_width, quantization_cache=quantization_cache)\r\n File \"/x/home/.cache/huggingface/modules/transformers_modules/local/quantization.py\", line 256, in extract_weight_to_float\r\n func = cpu_kernels.int4WeightExtractionFloat\r\nAttributeError: 'NoneType' object has no attribute 'int4WeightExtractionFloat'\r\n\n\n### Expected Behavior\n\n_No response_\n\n### Steps To Reproduce\n\nfrom transformers import AutoTokenizer, AutoModel\r\n\r\nmodel_path = '/x/home/chatglm-6b-int4'\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)\r\nmodel = AutoModel.from_pretrained(model_path, trust_remote_code=True).float()\r\n\r\nresponse, history = model.chat(tokenizer, '\u4f60\u597d', history=[])\r\n\n\n### Environment\n\n```markdown\n- OS: Oracle 7.9\r\n- Python: 3.7.13\r\n- Transformers: 2.6.1\r\n- PyTorch: 1.13.1\r\n- CUDA Support (`python -c \"import torch; print(torch.cuda.is_available())\"`) : no cuda, use cpu\n```\n\n\n### 
Anything else?\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"pro": "gcc"}], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "2", "info_type": "library"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["gcc"]}}, {"organization": "THUDM", "repo_name": "ChatGLM-6B", "base_commit": "0c6d1750ef6042338534c3c97002175fa1ae0499", "iss_html_url": "https://github.com/THUDM/ChatGLM-6B/issues/10", "iss_label": "question", "title": "Can I fine-tune with my own data?", "body": null, "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "0c6d1750ef6042338534c3c97002175fa1ae0499", "files": [{"path": "ptuning/", "Loc": {}}, {"path": "ptuning/", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["ptuning/"]}}, {"organization": "THUDM", "repo_name": "ChatGLM-6B", "base_commit": "c55ecd89a079b86620cc722f2e21a14e3718d0f3", "iss_html_url": "https://github.com/THUDM/ChatGLM-6B/issues/39", "iss_label": "", "title": "6GB GPU reports insufficient VRAM", "body": "GPU: 3060 laptop, 6GB\r\nError: RuntimeError: CUDA out of memory. Tried to allocate 96.00 MiB (GPU 0; 6.00 GiB total capacity; 5.27 GiB already allocated; 0 bytes free; 5.28 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "c55ecd89a079b86620cc722f2e21a14e3718d0f3", "files": [{"path": "web_demo.py", "Loc": {"(None, None, None)": {"mod": [5]}}, "status": "modified"}, {"path": "cli_demo.py", "Loc": {"(None, None, None)": {"mod": [6]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["web_demo.py", "cli_demo.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "THUDM", "repo_name": "ChatGLM-6B", "base_commit": "1d87dac585c8fafd708db16860b628928ec5a821", "iss_html_url": "https://github.com/THUDM/ChatGLM-6B/issues/532", "iss_label": "", "title": "[BUG/Help] After the version updates of the past two days, chat fine-tuning no longer seems to work", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Current Behavior\n\nA few days ago chat fine-tuning still worked; back then the output files were a complete package rather than an incremental fine-tuning package.\r\nAfter the updates of the past two days, I am still using the project's own train_chat.sh, and the model is int4.\r\nThe output files are indeed smaller now, but they can no longer be run. Concretely, running the following code\r\n```python\r\nfrom transformers import AutoTokenizer, AutoModel\r\ntokenizer = 
AutoTokenizer.from_pretrained(\"/content/ChatGLM-6B/ptuning/output/chattm/checkpoint-50\", trust_remote_code=True)\r\nmodel = AutoModel.from_pretrained(\"/content/ChatGLM-6B/ptuning/output/chattm/checkpoint-50\", trust_remote_code=True).half().cuda()\r\nmodel = model.eval()\r\nresponse, history = model.chat(tokenizer, \"\u4f60\u597d\", history=[])\r\nprint(response)\r\n```\r\nprints the following and then hangs with no response for at least 5 minutes, during which GPU memory keeps rising:\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\nThe dtype of attention mask (torch.int64) is not bool\r\nIt finally errors with:\r\n2023-04-11 13:51:41.577016: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT\n\n### Expected Behavior\n\n_No response_\n\n### Steps To Reproduce\n\n-\n\n### Environment\n\n```markdown\ncolab pro default environment\n```\n\n\n### Anything else?\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "1d87dac585c8fafd708db16860b628928ec5a821", "files": [{"path": "ptuning/main.py", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["ptuning/main.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "THUDM", "repo_name": "ChatGLM-6B", "base_commit": "edb127326a2d5afd855484f12a38e0119151f826", "iss_html_url": "https://github.com/THUDM/ChatGLM-6B/issues/723", "iss_label": "", "title": "On CentOS, how can two GPUs with 12 GB VRAM each be configured so that both are used", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Current Behavior\n\nOn CentOS with two 12 GB GPUs, both training and the web demo always use only GPU 0. How can this be configured so that both GPUs are used at the same time?\n\n### Expected Behavior\n\n_No response_\n\n### Steps To Reproduce\n\nCentos7\r\n12G NVIDIA *2\n\n### Environment\n\n```markdown\n- OS:Centos7\r\n- Python:3.8\r\n- Transformers:4.26.1\r\n- PyTorch: 1.12\r\n- CUDA Support (`python -c \"import torch; print(torch.cuda.is_available())\"`) :True\n```\n\n\n### Anything else?\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "edb127326a2d5afd855484f12a38e0119151f826", "files": [{"path": "ptuning/train.sh", "Loc": {"(None, None, 4)": {"mod": [4]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Config\nOther script"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["ptuning/train.sh"]}}, {"organization": "THUDM", "repo_name": "ChatGLM-6B", "base_commit": "801b1bb57690f0a99943f0a80c839b9ee120f3a7", "iss_html_url": "https://github.com/THUDM/ChatGLM-6B/issues/394", "iss_label": "", "title": "[BUG/Help] ValueError: 150000 is not in list", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Current Behavior\n\n 0%| | 19/30000 [31:30<828:54:23, 99.53s/it]\r\n 0%| | 20/30000 [33:09<828:37:17, 99.50s/it]\r\n 0%| | 21/30000 [34:48<828:09:42, 99.45s/it]Traceback 
(most recent call last):\r\n File \"/root/projects/ChatGLM-6B/ptuning/main.py\", line 393, in <module>\r\n main()\r\n File \"/root/projects/ChatGLM-6B/ptuning/main.py\", line 332, in main\r\n train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\n File \"/root/anaconda3/envs/torch10/lib/python3.9/site-packages/transformers/trainer.py\", line 1633, in train\r\n return inner_training_loop(\r\n File \"/root/anaconda3/envs/torch10/lib/python3.9/site-packages/transformers/trainer.py\", line 1902, in _inner_training_loop\r\n tr_loss_step = self.training_step(model, inputs)\r\n File \"/root/anaconda3/envs/torch10/lib/python3.9/site-packages/transformers/trainer.py\", line 2645, in training_step\r\n loss = self.compute_loss(model, inputs)\r\n File \"/root/anaconda3/envs/torch10/lib/python3.9/site-packages/transformers/trainer.py\", line 2677, in compute_loss\r\n outputs = model(**inputs)\r\n File \"/root/anaconda3/envs/torch10/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/root/.cache/huggingface/modules/transformers_modules/modeling_chatglm.py\", line 1160, in forward\r\n transformer_outputs = self.transformer(\r\n File \"/root/anaconda3/envs/torch10/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/root/.cache/huggingface/modules/transformers_modules/modeling_chatglm.py\", line 928, in forward\r\n mask_positions = [seq.tolist().index(mask_token) for seq in input_ids]\r\n File \"/root/.cache/huggingface/modules/transformers_modules/modeling_chatglm.py\", line 928, in <listcomp>\r\n mask_positions = [seq.tolist().index(mask_token) for seq in input_ids]\r\nValueError: 150000 is not in list\n\n### Expected Behavior\n\n_No response_\n\n### Steps To Reproduce\n\nPRE_SEQ_LEN=8\r\nLR=1e-2\r\n\r\nCUDA_VISIBLE_DEVICES=0 python3 main.py \\\r\n --do_train \\\r\n --train_file ../data/train.json \\\r\n --validation_file ../data/dev.json \\\r\n --prompt_column instruction \\\r\n --response_column output \\\r\n --overwrite_cache \\\r\n --model_name_or_path ~/projects/zero_nlp/simple_thu_chatglm6b/thuglm/ \\\r\n --output_dir output/adgen-chatglm-6b-pt-$PRE_SEQ_LEN-$LR \\\r\n --overwrite_output_dir \\\r\n --max_source_length 64 \\\r\n --max_target_length 64 \\\r\n --per_device_train_batch_size 100 \\\r\n --per_device_eval_batch_size 100 \\\r\n --gradient_accumulation_steps 16 \\\r\n --predict_with_generate \\\r\n --max_steps 30000 \\\r\n --logging_steps 100 \\\r\n --save_steps 100 \\\r\n --learning_rate $LR \\\r\n --pre_seq_len $PRE_SEQ_LEN \\\r\n # --quantization_bit 4\r\n\r\n\r\n\n\n### Environment\n\n```markdown\n- OS: centos8\r\n- Python: 3.9\r\n- Transformers: 4.27.1\r\n- PyTorch:2.0.0\r\n- CUDA Support (`python -c \"import torch; print(torch.cuda.is_available())\"`) : True\n```\n\n\n### Anything else?\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [], "ass_file_loc": ["ice_text.model", "modeling_chatglm.py"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0\n2", "info_type": "Code"}, "loctype": {"code": ["modeling_chatglm.py"], "doc": [], "test": [], "config": [], "asset": ["ice_text.model"]}}, {"organization": "THUDM", "repo_name": "ChatGLM-6B", "base_commit": "1047e446e5387aa06c856c95800f67beab8b80d4", "iss_html_url": "https://github.com/THUDM/ChatGLM-6B/issues/224", "iss_label": "", "title": 
"[BUG/Help] ImportError: cannot import name 'GENERATION_CONFIG_NAME' from 'transformers.utils'", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Current Behavior\n\n>>> model = AutoModel.from_pretrained(\"THUDM/chatglm-6b-int4\",trust_remote_code=True).float()\r\nExplicitly passing a `revision` is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.\r\nExplicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"C:\\Users\\mina_\\Anaconda3\\envs\\ChatGLM-6B\\lib\\site-packages\\transformers\\models\\auto\\auto_factory.py\", line 456, in from_pretrained\r\n logger.warning(\r\n File \"C:\\Users\\mina_\\Anaconda3\\envs\\ChatGLM-6B\\lib\\site-packages\\transformers\\dynamic_module_utils.py\", line 374, in get_class_from_dynamic_module\r\n\r\n File \"C:\\Users\\mina_\\Anaconda3\\envs\\ChatGLM-6B\\lib\\site-packages\\transformers\\dynamic_module_utils.py\", line 147, in get_class_in_module\r\n def get_class_in_module(class_name, module_path):\r\n File \"C:\\Users\\mina_\\Anaconda3\\envs\\ChatGLM-6B\\lib\\importlib\\__init__.py\", line 127, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"<frozen importlib._bootstrap>\", line 1006, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 983, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 967, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 677, in _load_unlocked\r\n File \"<frozen importlib._bootstrap_external>\", line 728, in exec_module\r\n File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\r\n File \"C:\\Users\\mina_/.cache\\huggingface\\modules\\transformers_modules\\THUDM\\chatglm-6b-int4\\dac03c3ac833dab2845a569a9b7f6ac4e8c5dc9b\\modeling_chatglm.py\", line 30, in <module>\r\n from transformers.generation.utils import LogitsProcessorList, StoppingCriteriaList, GenerationConfig\r\n File \"C:\\Users\\mina_\\Anaconda3\\envs\\ChatGLM-6B\\lib\\site-packages\\transformers\\generation\\utils.py\", line 39, in <module>\r\n from .configuration_utils import GenerationConfig\r\n File \"C:\\Users\\mina_\\Anaconda3\\envs\\ChatGLM-6B\\lib\\site-packages\\transformers\\generation\\configuration_utils.py\", line 24, in <module>\r\n from ..utils import (\r\nImportError: cannot import name 'GENERATION_CONFIG_NAME' from 'transformers.utils' (C:\\Users\\mina_\\Anaconda3\\envs\\ChatGLM-6B\\lib\\site-packages\\transformers\\utils\\__init__.py)\n\n### Expected Behavior\n\n_No response_\n\n### Steps To Reproduce\n\n1. `conda activate chatglm-6b`\r\n2. `from transformers import AutoTokenizer, AutoModel`\r\n3. `tokenizer = AutoTokenizer.from_pretrained(\"THUDM/chatglm-6b\", trust_remote_code=True)`\r\n4. `model = AutoModel.from_pretrained(\"THUDM/chatglm-6b-int4\",trust_remote_code=True).float()`\r\n5. 
See this issue.\n\n### Environment\n\n```markdown\n- OS: Windows 10\r\n- Python: 3.7.5\r\n- Transformers:\r\n- PyTorch:\r\n- CUDA Support (`python -c \"import torch; print(torch.cuda.is_available())\"`) : False\n```\n\n\n### Anything else?\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "1047e446e5387aa06c856c95800f67beab8b80d4", "files": [{"path": "requirements.txt", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}}, {"organization": "THUDM", "repo_name": "ChatGLM-6B", "base_commit": "b65142b5e54e52b27c1c1269e1b4abd83efcce45", "iss_html_url": "https://github.com/THUDM/ChatGLM-6B/issues/422", "iss_label": "", "title": "[BUG/Help] KeyError: 'lm_head.weight'", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Current Behavior\n\nError: KeyError: 'lm_head.weight'\n\n### Expected Behavior\n\nExplicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.\r\nExplicitly passing a `revision` is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.\r\nExplicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.\r\n\r\nLoading checkpoint shards: 0%| | 0/8 [00:00<?, ?it/s]\r\nLoading checkpoint shards: 0%| | 0/8 [00:00<?, ?it/s]\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\Administrator\\Downloads\\ChatGLM-6B-main\\cli_demo.py\", line 7, in <module>\r\n model = AutoModel.from_pretrained(r\"C:\\Users\\Administrator\\Downloads\\ChatGLM-6B-main\\model\",trust_remote_code=True,ignore_mismatched_sizes=True).half().quantize(4).cuda()\r\n File \"C:\\Program Files\\Python310\\lib\\site-packages\\transformers\\models\\auto\\auto_factory.py\", line 466, in from_pretrained\r\n return model_class.from_pretrained(\r\n File \"C:\\Program Files\\Python310\\lib\\site-packages\\transformers\\modeling_utils.py\", line 2646, in from_pretrained\r\n ) = cls._load_pretrained_model(\r\n File \"C:\\Program Files\\Python310\\lib\\site-packages\\transformers\\modeling_utils.py\", line 2959, in _load_pretrained_model\r\n mismatched_keys += _find_mismatched_keys(\r\n File \"C:\\Program Files\\Python310\\lib\\site-packages\\transformers\\modeling_utils.py\", line 2882, in _find_mismatched_keys\r\n and state_dict[checkpoint_key].shape != model_state_dict[model_key].shape\r\nKeyError: 'lm_head.weight'\r\n\n\n### Steps To Reproduce\n\nError: KeyError: 'lm_head.weight'\n\n### Environment\n\n```markdown\n- OS:windows 10\r\n- Python:3.10\r\n- Transformers:4.27.1\r\n- PyTorch:cu118\r\n- CUDA Support True\n```\n\n\n### Anything else?\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [], "ass_file_loc": ["pytorch_model-00001-of-00008.bin", "pytorch_model-00008-of-00008.bin"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "2", "info_type": "Models/data"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["pytorch_model-00001-of-00008.bin", 
"pytorch_model-00008-of-00008.bin"]}}, {"organization": "THUDM", "repo_name": "ChatGLM-6B", "base_commit": "8633db1503fc3b0edc1d035f64aa35dce5d97969", "iss_html_url": "https://github.com/THUDM/ChatGLM-6B/issues/622", "iss_label": "", "title": "[BUG/Help] ptuning\u65f6\uff0c\u6307\u5b9aPRE_SEQ_LEN=512\uff0c\u8bad\u7ec3\u540e\uff0c\u56de\u7b54\u7684\u95ee\u9898\u4ecd\u65e7\u6709\u56de\u7b54\u4e00\u767e\u5b57\u5de6\u53f3\u5c31\u65ad\u4e86\uff0c\u8be5\u5982\u4f55\u8c03\u6574\uff1f", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Current Behavior\n\n\u8bad\u7ec3\u53c2\u6570\u5982\u4e0b\uff1a\r\nPRE_SEQ_LEN=512\r\nLR=2e-2\r\n\r\nCUDA_VISIBLE_DEVICES=0 python3 main.py \\\r\n --do_train \\\r\n --train_file ./data/gwddc.json \\\r\n --validation_file ./data/gwddc_test.json \\\r\n --prompt_column instruction \\\r\n --response_column output \\\r\n --overwrite_cache \\\r\n --model_name_or_path THUDM/chatglm-6b \\\r\n --output_dir output/adgen-chatglm-6b-pt-gwddc-v3 \\\r\n --overwrite_output_dir \\\r\n --max_source_length 64 \\\r\n --max_target_length 64 \\\r\n --per_device_train_batch_size 4 \\\r\n --per_device_eval_batch_size 1 \\\r\n --gradient_accumulation_steps 16 \\\r\n --predict_with_generate \\\r\n --max_steps 3000 \\\r\n --logging_steps 10 \\\r\n --save_steps 1000 \\\r\n --learning_rate $LR \\\r\n --pre_seq_len $PRE_SEQ_LEN\r\n\r\n\u8bad\u7ec3\u6210\u529f\uff0c\u52a0\u8f7dcheckpoint\u6a21\u578b\u4e5f\u6210\u529f\uff0c\u8f93\u5165prompts\u4e5f\u80fd\u6b63\u5e38\u56de\u7b54\uff0c\u53ef\u662f\uff0c\u56de\u7b54\u957f\u5ea6\u4ecd\u65e7\u5f88\u77ed\uff0c\u8fd8\u4f1a\u51fa\u73b0\u56de\u7b54\u534a\u622a\u65ad\u6389\u7684\u60c5\u51b5\uff0c\u8bf7\u95ee\u8be5\u5982\u4f55\u8c03\u6574\u8bad\u7ec3\u53c2\u6570\uff1f\r\n\n\n### Expected Behavior\n\n_No response_\n\n### Steps To Reproduce\n\n1. ./data/gwddc.json\u4e3a\u81ea\u5907\u7684\u8bad\u7ec3\u96c6\uff0cprompts\u53ea\u6709\u4e0d\u52302000\u6761\r\n2. 
Enter the above parameters and run; the training result information is as follows:\r\n\u2026\u2026\r\n{'loss': 0.0371, 'learning_rate': 0.0, 'epoch': 96.77}\r\nSaving PrefixEncoder\r\n{'train_runtime': 21212.1807, 'train_samples_per_second': 9.051, 'train_steps_per_second': 0.141, 'train_loss': 0.2381483610868454, 'epoch': 96.77}\r\n***** train metrics *****\r\n epoch = 96.77\r\n train_loss = 0.2381\r\n train_runtime = 5:53:32.18\r\n train_samples = 1982\r\n train_samples_per_second = 9.051\r\n train_steps_per_second = 0.141\r\nCould you help check whether the train_loss is the problem? Do I need to increase the number of iterations?\n\n### Environment\n\n```markdown\n- OS:centos 7.6\r\n- Python:3.9\r\n- Transformers:4.27.1\r\n- PyTorch:2.0.0+cu117\r\n- CUDA Support (`python -c \"import torch; print(torch.cuda.is_available())\"`) :True\n```\n\n\n### Anything else?\n\nAlso, I specifically trained on \u201cWho are you\u201d, but after deployment it did not take effect either.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "8633db1503fc3b0edc1d035f64aa35dce5d97969", "files": [{"path": "ptuning/README.md", "Loc": {"(None, None, 180)": {"mod": [180]}}, "status": "modified"}, {"path": "ptuning/arguments.py", "Loc": {"('DataTrainingArguments', None, 65)": {"mod": [123]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "4", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": ["ptuning/arguments.py"], "doc": ["ptuning/README.md"], "test": [], "config": [], "asset": []}}, {"organization": "THUDM", "repo_name": "ChatGLM-6B", "base_commit": "a14bc1d32452d92613551eb5d523e00950913710", "iss_html_url": "https://github.com/THUDM/ChatGLM-6B/issues/353", "iss_label": "enhancement", "title": "[Help] How to support multiple GPUs", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Current Behavior\n\nThis is for internal company use. We installed 2 GPUs, but found that with the default configuration only 1 GPU is running. How should it be used so that multiple GPUs are used?\n\n### Expected Behavior\n\n_No response_\n\n### Steps To Reproduce\n\nNone\n\n### Environment\n\n```markdown\nOS: Ubuntu 20.04\r\nPython: 3.8\r\nTransformers: 4.26.1\r\nPyTorch: 1.12\r\nCUDA Support: True\n```\n\n\n### Anything else?\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "a14bc1d32452d92613551eb5d523e00950913710", "files": [{"path": "README.md", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3\nHow to support multiple GPUs", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []}}, {"organization": "huggingface", "repo_name": "transformers", "base_commit": "34f28b2a1342fd72c2e4d4e5613855bfb9f35d34", "iss_html_url": "https://github.com/huggingface/transformers/issues/1225", "iss_label": "wontfix", "title": "Bert output last hidden state", "body": "## \u2753 Questions & Help\r\n\r\nHi,\r\n\r\nSuppose we have an utterance of length 24 (considering special tokens) and we right-pad it with 0 to max length of 64.\r\nIf we use a pretrained Bert model to get the last hidden states, 
the output would be of size [1, 64, 768]. \r\nCan we use just the first 24 as the hidden states of the utterance? I mean, is it right to say that the output[0, :24, :] has all the required information?\r\nI realized that from index 24:64, the outputs have float values as well.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "34f28b2a1342fd72c2e4d4e5613855bfb9f35d34", "files": [{"path": "src/transformers/models/bert/modeling_bert.py", "Loc": {"('BertSelfAttention', 'forward', 276)": {"mod": [279]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/transformers/models/bert/modeling_bert.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "huggingface", "repo_name": "transformers", "base_commit": "82c7e879876822864b5ceaf2c99eb01159266bcd", "iss_html_url": "https://github.com/huggingface/transformers/issues/27200", "iss_label": "", "title": "dataset download error in speech recognition examples", "body": "### System Info\n\n- `transformers` version: 4.35.0.dev0\r\n- Platform: Linux-5.15.0-43-generic-x86_64-with-glibc2.17\r\n- Python version: 3.8.18\r\n- Huggingface_hub version: 0.17.3\r\n- Safetensors version: 0.4.0\r\n- Accelerate version: 0.24.1\r\n- Accelerate config: not found\r\n- PyTorch version (GPU?): 1.10.0+cu111 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>\n\n### Who can help?\n\n@stevhliu and @MKhalusova\n\n### Information\n\n- [x] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nCUDA_VISIBLE_DEVICES=0 python run_speech_recognition_ctc.py \\\r\n\t--dataset_name=\"common_voice\" \\\r\n\t--model_name_or_path=\"facebook/wav2vec2-large-xlsr-53\" \\\r\n\t--dataset_config_name=\"tr\" \\\r\n\t--output_dir=\"./wav2vec2-common_voice-tr-demo\" \\\r\n\t--overwrite_output_dir \\\r\n\t--num_train_epochs=\"15\" \\\r\n\t--per_device_train_batch_size=\"16\" \\\r\n\t--gradient_accumulation_steps=\"2\" \\\r\n\t--learning_rate=\"3e-4\" \\\r\n\t--warmup_steps=\"500\" \\\r\n\t--evaluation_strategy=\"steps\" \\\r\n\t--text_column_name=\"sentence\" \\\r\n\t--length_column_name=\"input_length\" \\\r\n\t--save_steps=\"400\" \\\r\n\t--eval_steps=\"100\" \\\r\n\t--layerdrop=\"0.0\" \\\r\n\t--save_total_limit=\"3\" \\\r\n\t--freeze_feature_encoder \\\r\n\t--gradient_checkpointing \\\r\n\t--chars_to_ignore , ? . ! 
- \\; \\: \\\" \u201c % \u2018 \u201d \ufffd \\\r\n\t--fp16 \\\r\n\t--group_by_length \\\r\n\t--push_to_hub \\\r\n\t--do_train --do_eval \n\n### Expected behavior\n\nWhen I run the default command, which set `dataset_name` as \"common_voice\", and I got a warning:\r\n```\r\n/home/xintong/.cache/huggingface/modules/datasets_modules/datasets/common_voice/220833898d6a60c50f621126e51fb22eb2dfe5244392c70dccd8e6e2f055f4bf/common_voice.py:634: FutureWarning: \r\n This version of the Common Voice dataset is deprecated.\r\n You can download the latest one with\r\n >>> load_dataset(\"mozilla-foundation/common_voice_11_0\", \"en\")\r\n \r\n warnings.warn(\r\nGenerating train split: 0%| | 0/1831 [00:00<?, ? examples/s]\r\nTraceback (most recent call last):\r\n File \"/home/xintong/miniconda3/envs/test/lib/python3.8/tarfile.py\", line 2578, in next\r\n tarinfo = self.tarinfo.fromtarfile(self)\r\n File \"/home/xintong/miniconda3/envs/test/lib/python3.8/tarfile.py\", line 1283, in fromtarfile\r\n obj = cls.frombuf(buf, tarfile.encoding, tarfile.errors)\r\n File \"/home/xintong/miniconda3/envs/test/lib/python3.8/tarfile.py\", line 1221, in frombuf\r\n raise TruncatedHeaderError(\"truncated header\")\r\ntarfile.TruncatedHeaderError: truncated header\r\n```\r\nI modified this into `mozilla-foundation/common_voice_11_0`, it passed. \r\n```\r\nDownloading builder script: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 8.13k/8.13k [00:00<00:00, 30.3MB/s]\r\nDownloading readme: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 14.4k/14.4k [00:00<00:00, 19.2MB/s]\r\nDownloading extra modules: 
100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 3.44k/3.44k [00:00<00:00, 19.9MB/s]\r\nDownloading extra modules: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 60.9k/60.9k [00:00<00:00, 304kB/s]\r\nDownloading data: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 12.2k/12.2k [00:00<00:00, 25.6MB/s]\r\nDownloading data: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 568M/568M [00:07<00:00, 71.7MB/s]\r\nDownloading data: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 233M/233M [00:02<00:00, 78.6MB/s]\r\nDownloading data: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 285M/285M [00:04<00:00, 67.7MB/s]\r\nDownloading data: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 4.86M/4.86M [00:00<00:00, 73.3MB/s]\r\nDownloading data: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 109M/109M [00:01<00:00, 80.4MB/s]\r\nDownloading data files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 5/5 [00:21<00:00, 4.24s/it]\r\nExtracting data files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 5/5 [00:07<00:00, 1.54s/it]\r\nDownloading data: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 5.76M/5.76M [00:00<00:00, 56.0MB/s]\r\nDownloading data: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 2.17M/2.17M [00:00<00:00, 54.1MB/s]\r\nDownloading data: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 2.18M/2.18M [00:00<00:00, 64.3MB/s]\r\nDownloading data: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 32.8k/32.8k [00:00<00:00, 53.1MB/s]\r\nDownloading data: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 800k/800k [00:00<00:00, 59.8MB/s]\r\nDownloading data files: 
100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 5/5 [00:05<00:00, 1.01s/it]\r\nExtracting data files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 5/5 [00:00<00:00, 2954.98it/s]\r\n```", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "82c7e879876822864b5ceaf2c99eb01159266bcd", "files": [{"path": "examples/pytorch/speech-recognition/README.md", "Loc": {"(None, None, 69)": {"mod": [69]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["examples/pytorch/speech-recognition/README.md"], "test": [], "config": [], "asset": []}}, {"organization": "huggingface", "repo_name": "transformers", "base_commit": "0e82f0cbc28b41b3d87a5e4069dc0e20bacc2494", "iss_html_url": "https://github.com/huggingface/transformers/issues/12081", "iss_label": "", "title": "GPT2 Flax \"TypeError: JAX only supports number and bool dtypes, got dtype object in array\"", "body": "On GPU\r\n\r\n```\r\n>>> from transformers import AutoTokenizer, FlaxAutoModelForCausalLM\r\n\r\n>>> tokenizer = AutoTokenizer.from_pretrained(\"gpt2-medium\")\r\n>>> model = FlaxAutoModelForCausalLM.from_pretrained(\"gpt2-medium\")\r\n>>> input_context = \"The dog\"\r\n>>> # encode input context\r\n>>> input_ids = tokenizer(input_context, return_tensors=\"jax\").input_ids\r\n>>> # generate candidates using sampling\r\n>>> outputs = model.generate(input_ids=input_ids, max_length=20, top_k=30, do_sample=True)\r\n\r\nTypeError: JAX only supports number and bool dtypes, got dtype object in array\r\n```\r\n\r\n@patrickvonplaten @patil-suraj ", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": 
"0e82f0cbc28b41b3d87a5e4069dc0e20bacc2494", "files": [{"path": "src/transformers/models/gpt2/modeling_flax_gpt2.py", "Loc": {"('FlaxGPT2LMHeadModule', None, 553)": {"mod": []}}, "status": "modified"}, {"path": "src/transformers/models/gpt2/tokenization_gpt2_fast.py", "Loc": {"('GPT2TokenizerFast', None, 70)": {"mod": []}}, "status": "modified"}, {"Loc": {"(None, None, None)": {"mod": [6, 7]}}, "path": null}]}, "own_code_loc": [{"Loc": {"(None, None, None)": {"mod": [6, 7]}}, "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null, "src/transformers/models/gpt2/tokenization_gpt2_fast.py", "src/transformers/models/gpt2/modeling_flax_gpt2.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "huggingface", "repo_name": "transformers", "base_commit": "322037e842e5e89080918c824998c17722df6f19", "iss_html_url": "https://github.com/huggingface/transformers/issues/10079", "iss_label": "", "title": "Unclear error \"NotImplementedError: \"while saving tokenizer. How fix it?", "body": "Here is my tokenizer code and how I save it to a json file\" /content/bert-datas7.json\"\r\n\r\n````\r\nfrom tokenizers import normalizers\r\nfrom tokenizers.normalizers import Lowercase, NFD, StripAccents\r\n\r\nbert_tokenizer.pre_tokenizer = Whitespace()\r\n\r\nfrom tokenizers.processors import TemplateProcessing\r\n\r\nbert_tokenizer.post_processor = TemplateProcessing(\r\n single=\"[CLS] $A [SEP]\",\r\n pair=\"[CLS] $A [SEP] $B:1 [SEP]:1\",\r\n special_tokens=[\r\n (\"[CLS]\", 1),\r\n (\"[SEP]\", 2),\r\n (\"[PAD]\", 3),\r\n ],\r\n \r\n)\r\nfrom tokenizers.trainers import WordPieceTrainer\r\n\r\ntrainer = WordPieceTrainer(\r\n vocab_size=30522, special_tokens=[\"[UNK]\", \"[CLS]\", \"[SEP]\", \"[PAD]\", \"[MASK]\"], pad_to_max_length=True\r\n)\r\nfiles = [f\"/content/For_ITMO.txt\" for split in [\"test\", \"train\", \"valid\"]]\r\nbert_tokenizer.train(trainer, files)\r\n\r\nmodel_files = bert_tokenizer.model.save(\"data\", \"/content/For_ITMO.txt\")\r\n\r\nbert_tokenizer.model = WordPiece.from_file(*model_files, unk_token=\"[UNK]\", pad_to_max_length=True)\r\n\r\nbert_tokenizer.save(\"/content/bert-datas7.json\") \r\n````\r\n\r\nWhen I output tokenizer name_or_path = nothing is displayed. This is normal?\r\n\r\n\r\n````\r\ntokenizer = PreTrainedTokenizerFast(tokenizer_file='/content/bert-datas7.json')\r\ntokenizer.add_special_tokens({'pad_token': '[PAD]'})\r\n\r\nprint(tokenizer)\r\n>>> PreTrainedTokenizerFast(name_or_path='', vocab_size=1435, model_max_len=1000000000000000019884624838656, is_fast=True, padding_side='right', special_tokens={'pad_token': '[PAD]'})\r\n````\r\nAlso, when I try to save my tokenizer, I get an error without explanation. 
How can I rewrite the code so that all of this works?\r\n#9658 \r\n#10039 \r\n[For_ITMO.txt-vocab (1) (1).txt](https://github.com/huggingface/transformers/files/5945659/For_ITMO.txt-vocab.1.1.txt)\r\n \r\n````\r\ntokenizer.save_pretrained(\"/content/tokennizerrrr\")\r\n\r\nNotImplementedError Traceback (most recent call last)\r\n<ipython-input-11-efc48254a528> in <module>()\r\n----> 1 tokenizer.save_pretrained(\"/content/tokennizerrrr\")\r\n\r\n2 frames\r\n/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py in save_vocabulary(self, save_directory, filename_prefix)\r\n 2042 :obj:`Tuple(str)`: Paths to the files saved.\r\n 2043 \"\"\"\r\n-> 2044 raise NotImplementedError\r\n 2045 \r\n 2046 def tokenize(self, text: str, pair: Optional[str] = None, add_special_tokens: bool = False, **kwargs) -> List[str]:\r\n\r\nNotImplementedError: \r\n````\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "322037e842e5e89080918c824998c17722df6f19", "files": [{"path": "src/transformers/tokenization_utils_fast.py", "Loc": {"('PreTrainedTokenizerFast', '_save_pretrained', 505)": {"mod": [509]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/transformers/tokenization_utils_fast.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "huggingface", "repo_name": "transformers", "base_commit": "77a257fc210a56f1fd0d75166ecd654cf58111f3", "iss_html_url": "https://github.com/huggingface/transformers/issues/8403", "iss_label": "", "title": "[s2s finetune] huge increase in memory demands with --fp16 native amp", "body": "While working on https://github.com/huggingface/transformers/issues/8353 I discovered that `--fp16` causes a 10x+ increase in gpu memory demands.\r\n\r\ne.g. I can run bs=12 w/o `--fp16` \r\n\r\n```\r\ncd examples/seq2seq\r\nexport BS=12; rm -rf distilbart-cnn-12-6; python finetune.py --learning_rate=3e-5 --gpus 1 \\\r\n--do_train --do_predict --val_check_interval 0.25 --n_val 500 --num_train_epochs 2 --freeze_encoder \\\r\n--freeze_embeds --data_dir cnn_dm --max_target_length 142 --val_max_target_length=142 \\\r\n--train_batch_size=$BS --eval_batch_size=$BS --gradient_accumulation_steps 1 \\\r\n--model_name_or_path sshleifer/student_cnn_12_6 --tokenizer_name facebook/bart-large \\\r\n--warmup_steps 500 --output_dir distilbart-cnn-12-6\r\n\r\n```\r\nBut if I add:\r\n```\r\n--fp16\r\n```\r\n\r\n(w/ or w/o `--fp16_opt_level O1`)\r\n\r\nI get OOM even with bs=1 on an 8GB card and it barely manages on a 24GB card - I think the increase in memory demand is more than 10x.\r\n\r\nThe OOM happens either right away during the sanity-check step, or after just 10-20 batches, so within a few seconds.\r\n\r\nThis is with pytorch-1.6. Same goes for pytorch-1.7 and 1.8-nightly.\r\n\r\nI wasn't able to test `--fp16` with pytorch-1.5, since I can't build apex on ubuntu-20.04. Without `--fp16` pytorch-1.5 works the same as pytorch-1.6 gpu memory-wise.\r\n\r\nI tested with pytorch-1.5 + apex and there is no problem there. 
Memory consumption is about half.\r\n\r\nHere is the table of the batch sizes that fit into an 8GB rtx-1070 (bigger BS leads to an instant OOM):\r\n\r\nbs | version\r\n---|--------\r\n12 | pt15\r\n20 | pt15+fp16\r\n12 | pt16\r\n1 | pt16+fp16\r\n\r\n\r\n\r\nIf you'd like to reproduce the problem here are the full steps:\r\n\r\n```\r\n# prep library\r\ngit clone https://github.com/huggingface/transformers\r\ncd transformers\r\npip install -e .[dev]\r\npip install -r examples/requirements.txt\r\ncd examples/seq2seq\r\n\r\n# prep data\r\nwget https://cdn-datasets.huggingface.co/summarization/cnn_dm_v2.tgz\r\ntar -xzvf cnn_dm_v2.tgz # empty lines removed\r\nmv cnn_cln cnn_dm\r\n\r\n# run\r\nexport BS=12; \r\nrm -rf distilbart-cnn-12-6\r\npython finetune.py --learning_rate=3e-5 --gpus 1 \\\r\n--do_train --do_predict --val_check_interval 0.25 --n_val 500 --num_train_epochs 2 --freeze_encoder \\\r\n--freeze_embeds --data_dir cnn_dm --max_target_length 142 --val_max_target_length=142 \\\r\n--train_batch_size=$BS --eval_batch_size=$BS --gradient_accumulation_steps 1 \\\r\n--model_name_or_path sshleifer/student_cnn_12_6 --tokenizer_name facebook/bart-large \\\r\n--warmup_steps 500 --output_dir distilbart-cnn-12-6 \r\n```\r\n\r\nThis issue is to track the problem and hopefully find a solution.\r\n\r\n@sshleifer ", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/pytorch/pytorch/commit/57bffc3a8e4fee0cce31e1ff1f662ccf7b16db57", "file_loc": {}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "pytorch", "pro": "pytorch", "path": ["{'base_commit': '57bffc3a8e4fee0cce31e1ff1f662ccf7b16db57', 'files': [{'path': 'aten/src/ATen/autocast_mode.cpp', 'status': 'modified', 'Loc': {(None, 'cached_cast', 67): {'mod': [71]}}}, {'path': 'test/test_cuda.py', 'status': 'modified', 'Loc': {('TestCuda', None, 92): {'add': [2708]}}}]}"]}], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "commit", "loc_scope": "2", "info_type": "Code"}, "loctype": {"code": ["aten/src/ATen/autocast_mode.cpp"], "doc": [], "test": ["test/test_cuda.py"], "config": [], "asset": ["pytorch"]}}, {"organization": "huggingface", "repo_name": "transformers", "base_commit": "1a688709b34b10bd372e3e0860c8d39d170ebf53", "iss_html_url": "https://github.com/huggingface/transformers/issues/17201", "iss_label": "", "title": "a memory leak in qqp prediction using bart", "body": "### System Info\n\n```shell\n- `transformers` version: 4.19.0.dev0\r\n- Platform: Linux-5.11.0-43-generic-x86_64-with-glibc2.17\r\n- Python version: 3.8.10\r\n- Huggingface_hub version: 0.4.0\r\n- PyTorch version (GPU?): 1.10.1 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: Yes\r\n- Using distributed or parallel set-up in script?: No\n```\n\n\n### Who can help?\n\n@sgugger\n\n### Information\n\n- [X] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nI hit the same issue as #11011. Without `--eval_accumulation_steps`, it causes CUDA out of memory. With it, it runs out of RAM and is killed by the system.\r\n\r\nI only ran prediction on the GLUE QQP dataset using bart, without fine-tuning. 
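\r\nA minimal sketch of a commonly suggested mitigation for this pattern, assuming the standard `Trainer` setup inside run_glue.py; `preprocess_logits_for_metrics` is an existing `Trainer` argument in this transformers version, while the variable names `model`, `training_args`, and `predict_dataset` are placeholders assumed for illustration:\r\n```python\r\nfrom transformers import Trainer\r\n\r\n# Keep only the classification logits before the Trainer accumulates them;\r\n# for bart the raw outputs also carry past key/value tensors, and hoarding\r\n# those across a 300k-example test set is what exhausts host RAM.\r\ndef preprocess_logits_for_metrics(logits, labels):\r\n    return logits[0] if isinstance(logits, tuple) else logits\r\n\r\ntrainer = Trainer(\r\n    model=model,\r\n    args=training_args,\r\n    preprocess_logits_for_metrics=preprocess_logits_for_metrics,\r\n)\r\npredictions = trainer.predict(predict_dataset)\r\n```\r\n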
Considering QQP having a large test set (300k), the prediction got slower and slower, and finally got out of memory.\r\n\r\nThis is the script to reproduce:\r\n```\r\nCUDA_VISIBLE_DEVICES=0 python run_glue.py --model_name_or_path facebook/bart-large --task_name qqp --output_dir bart-large_qqp --eval_accumulation_steps 100 --do_predict --per_device_eval_batch_size 24\r\n```\n\n### Expected behavior\n\n```shell\nPrediction without out memory.\n```\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "1a688709b34b10bd372e3e0860c8d39d170ebf53", "files": [{"path": "src/transformers/trainer.py", "Loc": {"('Trainer', 'evaluation_loop', 2549)": {"mod": [2635]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "2\nOr\n5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/transformers/trainer.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "huggingface", "repo_name": "transformers", "base_commit": "cef2e40e0f8eaad13b8d32817a48fdddc32eb2a5", "iss_html_url": "https://github.com/huggingface/transformers/issues/28435", "iss_label": "", "title": "Skip some weights for load_in_8bit and keep them as fp16/32?", "body": "### Feature request\r\n\r\nHello,\r\n\r\nI am looking for a way to load a checkpoint where I only load some of the weights in 8 bit and keep others in 16/32 bit.\r\n\r\n### Motivation\r\n\r\nMy motivation is for vision-language models like Llava or BLIP2 where I want to load the LLM part in 8 bit but the image encoder should stay in 16 bit because I notice performance degradations with CLIP in 8 bit and also want to be able to train this part without LoRA.\r\n\r\nAs far as I can see in the documentation, issues and with Google (both here and for bitsandbytes), there is currently no way to do this.\r\n\r\n### Your contribution\r\n\r\nI can in theory help implement something like this but I don't know where and how in the code this should be done.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "cef2e40e0f8eaad13b8d32817a48fdddc32eb2a5", "files": [{"path": "src/transformers/modeling_utils.py", "Loc": {"('PreTrainedModel', 'from_pretrained', 2528)": {"mod": [3524]}}, "status": "modified"}, {"path": "src/transformers/utils/quantization_config.py", "Loc": {"('BitsAndBytesConfig', None, 151)": {"mod": [176]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/transformers/modeling_utils.py", "src/transformers/utils/quantization_config.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "huggingface", "repo_name": "transformers", "base_commit": "705ca7f21b2b557e0cfd5d0853b297fa53489d20", "iss_html_url": "https://github.com/huggingface/transformers/issues/14938", "iss_label": "", "title": "Question: Object of type EncoderDecoderConfig is not JSON serializable", "body": "Hi.\r\nAn error occurred when I used Trainer to train and save EncoderDecoderModel.\r\n\r\n```python\r\n File \"/home/jwli/ljw/study/hotpotqa/roberta_seq2seq/roberta_for_seq2seq.py\", line 482, in <module>\r\n run(model_args, data_args, training_args)\r\n File \"/home/jwli/ljw/study/hotpotqa/roberta_seq2seq/roberta_for_seq2seq.py\", line 465, in run\r\n train_result = 
trainer.train(resume_from_checkpoint=checkpoint)\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/trainer.py\", line 1391, in train\r\n self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/trainer.py\", line 1495, in _maybe_log_save_evaluate\r\n self._save_checkpoint(model, trial, metrics=metrics)\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/trainer.py\", line 1557, in _save_checkpoint\r\n self.save_model(output_dir)\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/trainer.py\", line 1961, in save_model\r\n self._save(output_dir)\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/trainer.py\", line 2009, in _save\r\n self.model.save_pretrained(output_dir, state_dict=state_dict)\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/modeling_utils.py\", line 1053, in save_pretrained\r\n model_to_save.config.save_pretrained(save_directory)\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/configuration_utils.py\", line 416, in save_pretrained\r\n self.to_json_file(output_config_file, use_diff=True)\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/configuration_utils.py\", line 739, in to_json_file\r\n writer.write(self.to_json_string(use_diff=use_diff))\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/configuration_utils.py\", line 725, in to_json_string\r\n return json.dumps(config_dict, indent=2, sort_keys=True) + \"\\n\"\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/json/__init__.py\", line 238, in dumps\r\n **kw).encode(obj)\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/json/encoder.py\", line 201, in encode\r\n chunks = list(chunks)\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/json/encoder.py\", line 431, in _iterencode\r\n yield from _iterencode_dict(o, _current_indent_level)\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/json/encoder.py\", line 405, in _iterencode_dict\r\n yield from chunks\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/json/encoder.py\", line 438, in _iterencode\r\n o = _default(o)\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/json/encoder.py\", line 179, in default\r\n raise TypeError(f'Object of type {o.__class__.__name__} '\r\nTypeError: Object of type EncoderDecoderConfig is not JSON serializable\r\n```\r\nMy model and Config define the following code. 
\r\n```python\r\n tokenizer = RobertaTokenizerFast.from_pretrained(model_args.tokenizer_name)\r\n encoder_config = RobertaConfig.from_pretrained(model_args.encoder_model_name_or_path)\r\n decoder_config = RobertaConfig.from_pretrained(model_args.decoder_model_name_or_path)\r\n encoder_decoder_config = EncoderDecoderConfig.from_encoder_decoder_configs(encoder_config, decoder_config)\r\n model = RobertaForSeq2Seq.from_encoder_decoder_pretrained(model_args.encoder_model_name_or_path,\r\n model_args.decoder_model_name_or_path,\r\n config=encoder_decoder_config, tie_encoder_decoder=True)\r\n model.config.decoder_start_token_id = tokenizer.bos_token_id\r\n model.config.eos_token_id = tokenizer.eos_token_id\r\n model.config.max_length = 64\r\n model.config.early_stopping = True\r\n model.config.no_repeat_ngram_size = 3\r\n model.config.length_penalty = 2.0\r\n model.config.num_beams = 4\r\n model.config.pad_token_id = tokenizer.pad_token_id\r\n```\r\nThis error occurred because EncoderDecoderConfig cannot be converted to json format. But I don't know how to modify it.\r\n```python\r\nERROR OCCURRED:\r\n\r\n if use_diff is True:\r\n config_dict = self.to_diff_dict()\r\n else:\r\n config_dict = self.to_dict()\r\n return json.dumps(config_dict, indent=2, sort_keys=True) + \"\\n\"\r\n```\r\n\r\nI look forward to your help! Thanks!\r\n @jplu @patrickvonplaten ", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [{"Loc": [46, 47], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "huggingface", "repo_name": "transformers", "base_commit": "45d21502f0b67eb8a5ad244d469dcc0dfb7517a7", "iss_html_url": "https://github.com/huggingface/transformers/issues/653", "iss_label": "", "title": "Different Results from version 0.4.0 to version 0.5.0", "body": "Hi, I found the results after training is different from version 0.4.0 to version 0.5.0. I have fixed all initialization to reproduce the results. And I also test version 0.2.0 and 0.3.0, the results are the same to version 0.4.0, but from version 0.5.0 +, the results is different. I am wondering that have you trained a new model, so the weights changed? 
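One way to check that directly is to fingerprint the downloaded weights under each library version; if the hashes differ, the checkpoint itself changed. A rough sketch (the toy state dict at the end is just a self-test; in practice you would pass `model.state_dict()`):

```python
import hashlib
import torch

def state_dict_fingerprint(state_dict) -> str:
    """Deterministic SHA-256 over parameter names and raw tensor bytes."""
    digest = hashlib.sha256()
    for name in sorted(state_dict):
        tensor = state_dict[name].detach().cpu().contiguous()
        digest.update(name.encode())
        digest.update(tensor.numpy().tobytes())
    return digest.hexdigest()

# Run this in each environment and compare the printed hashes.
print(state_dict_fingerprint({"w": torch.ones(2, 2)}))
```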
", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "45d21502f0b67eb8a5ad244d469dcc0dfb7517a7", "files": [{"path": "pytorch_pretrained_bert/modeling.py", "Loc": {"('BertPreTrainedModel', 'init_bert_weights', 515)": {"mod": []}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["pytorch_pretrained_bert/modeling.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "huggingface", "repo_name": "transformers", "base_commit": "1c8c2d9ab34b8c8d326db9e0608f8e54cfccb885", "iss_html_url": "https://github.com/huggingface/transformers/issues/10202", "iss_label": "", "title": "Fast Tokenizers instantiated via vocab/merge files do not respect skip_special_tokens=True", "body": "## Environment info\r\n- `transformers` version: 4.3.2\r\n- Platform: macOS-11.2.1-x86_64-i386-64bit\r\n- Python version: 3.9.1\r\n- PyTorch version (GPU?): 1.7.1 (False)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Using GPU in script?: No\r\n- Using distributed or parallel set-up in script?: No\r\n\r\n## Information\r\n\r\nSee title; this issue does not reproduce with slow tokenizers. Does not reproduce with serialized tokenizers.\r\n\r\nFound while investigating https://github.com/minimaxir/aitextgen/issues/88\r\n\r\n## To reproduce\r\n\r\nUsing [gpt2_merges.txt](https://github.com/minimaxir/aitextgen/blob/master/aitextgen/static/gpt2_merges.txt) and [gpt2_vocab.json](https://github.com/minimaxir/aitextgen/blob/master/aitextgen/static/gpt2_vocab.json) as linked:\r\n\r\n```py\r\nfrom transformers import AutoModelForCausalLM, GPT2Tokenizer, GPT2TokenizerFast\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(\"distilgpt2\")\r\n\r\noutputs = model.generate(max_length=40)\r\n\r\n# tensor([[50256, 383, 471, 13, 50, 13, 2732, 286, 4796, 468,\r\n# 587, 10240, 262, 1918, 286, 257, 1966, 5349, 5797, 508,\r\n# 373, 2823, 290, 2923, 416, 257, 23128, 287, 262, 471,\r\n# 13, 50, 13, 13241, 319, 3583, 13, 198, 198, 198]])\r\n\r\ntokenizer_fast = GPT2TokenizerFast(vocab_file=\"gpt2_vocab.json\", merges_file=\"gpt2_merges.txt\")\r\ntokenizer_fast.decode(outputs[0], skip_special_tokens=True)\r\n\r\n# '<|endoftext|> The U.S. Department of Justice has been investigating the death of a former FBI agent who was shot and killed by a gunman in the U.S. Capitol on Wednesday.\\n\\n\\n'\r\n\r\ntokenizer_slow = GPT2Tokenizer(vocab_file=\"gpt2_vocab.json\", merges_file=\"gpt2_merges.txt\")\r\ntokenizer_slow.decode(outputs[0], skip_special_tokens=True)\r\n\r\n# ' The U.S. Department of Justice has been investigating the death of a former FBI agent who was shot and killed by a gunman in the U.S. 
Capitol on Wednesday.\\n\\n\\n'\r\n\r\n```\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "1c8c2d9ab34b8c8d326db9e0608f8e54cfccb885", "files": [{"path": "src/transformers/tokenization_utils_base.py", "Loc": {"('SpecialTokensMixin', 'add_special_tokens', 900)": {"mod": []}}, "status": "modified"}, {"Loc": {"(None, None, None)": {"mod": [33]}}, "path": null}]}, "own_code_loc": [{"Loc": {"(None, None, None)": {"mod": [33]}}, "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "Cment\u6307\u51fa\u7528\u6237\u4ee3\u7801\u95ee\u9898\uff0c\u7ed9\u51fa\u9700\u8981\u4f7f\u7528\u7684API\n\u81ea\u5df1\u4ee3\u7801\u7684\u95ee\u9898 \u53e6\u4e00\u4e2aissue\u4e2d\u6307\u51facmit\nI think this is happening because when you load it from the vocab and merge files, it doesn't know <|endoftext|> is a special token. For the skip_special_tokens to work, I believe it would be necessary to add them to the tokenizer:\ntokenizer_fast.add_special_tokens({\n \"additional_special_tokens\": \"<|endoftext|>\"\n})\n", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/transformers/tokenization_utils_base.py", null], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "huggingface", "repo_name": "transformers", "base_commit": "5bcbdff15922b1d0eeb035879630ca61c292122a", "iss_html_url": "https://github.com/huggingface/transformers/issues/32661", "iss_label": "bug", "title": "RoBERTa config defaults are inconsistent with fairseq implementation", "body": "### System Info\n\n python 3.12, transformers 4.14, latest mac os\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nfrom transformers import RobertaConfig\r\nmy_config = RobertaConfig()\r\nroberta_config = RobertaConfig.from_pretrained(\"roberta-base\")\r\n\r\nassert (\r\n my_config.max_position_embeddings == roberta_config.max_position_embeddings\r\n), \"%d %d\" % (my_config.max_position_embeddings, roberta_config.max_position_embeddings)\n\n### Expected behavior\n\nThe config defaults should correspond the the base model?\r\n\r\nThis is an implementation detail, but it did send me on a debugging spree as it hid as a sticky CUDA assertion error.\r\n```Assertion `srcIndex < srcSelectDimSize` failed```\r\n\r\nThe problem is that by default if you create the position_ids yourself or if you let transformers roberta_modelling take care of it (it also does it the way fairseq implemented it), it will create indeces that are out of bounds with the default configuration as everything is shifted by pad_token_id.\r\n\r\nThis is more of a heads up. 
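The shift is easy to see in isolation. The function below reimplements the fairseq-style numbering described above purely for illustration; it is not the library's own helper:

```python
import torch

def fairseq_style_position_ids(input_ids: torch.Tensor, padding_idx: int) -> torch.Tensor:
    """Non-pad positions count up from padding_idx + 1; pad positions keep padding_idx."""
    mask = input_ids.ne(padding_idx).long()
    return torch.cumsum(mask, dim=1) * mask + padding_idx

pad_token_id = 1                       # RoBERTa's pad token
input_ids = torch.full((1, 512), 42)   # 512 dummy non-pad token ids
positions = fairseq_style_position_ids(input_ids, pad_token_id)
print(positions.max().item())          # 513, so the embedding table needs >= 514 rows
```

A 512-token sequence therefore indexes position 513, which is out of bounds whenever `max_position_embeddings` is left at a default of 512 rather than 514.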
Do transformers generally provide defaults aligned with the original models, or are the defaults here meant to be agnostic of that?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "5bcbdff15922b1d0eeb035879630ca61c292122a", "files": [{"path": "src/transformers/models/roberta/configuration_roberta.py", "Loc": {"('RobertaConfig', None, 29)": {"mod": [59]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/transformers/models/roberta/configuration_roberta.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "f0df3144d68ed288f5ccce0c34d3939f8462ba98", "iss_html_url": "https://github.com/geekan/MetaGPT/issues/1345", "iss_label": "", "title": "Not able to run any MetaGPT examples", "body": "Referred Issue #1322 , but not able to resolve the issue. I added azure based api endpoint and api key in config2.yaml\r\n\r\n\r\n\u2502 105 \u2502 \u2502 typer.echo(\"Missing argument 'IDEA'. Run 'metagpt --help' for more information.\" \u2502\r\n\u2502 106 \u2502 \u2502 raise typer.Exit() \u2502\r\n\u2502 107 \u2502 \u2502\r\n\u2502 \u2771 108 \u2502 return generate_repo( \u2502\r\n\u2502 109 \u2502 \u2502 idea, \u2502\r\n\u2502 110 \u2502 \u2502 investment, \u2502\r\n\u2502 111 \u2502 \u2502 n_round, \u2502\r\n\u2502 \u2502\r\n\\metagpt\\software_company.py:30 in generate_repo \u2502\r\n\u2502 \u2502\r\n\u2502 27 \u2502 recover_path=None, \u2502\r\n\u2502 28 ) -> ProjectRepo: \u2502\r\n\u2502 29 \u2502 \"\"\"Run the startup logic. Can be called from CLI or other Python scripts.\"\"\" \u2502\r\n\u2502 \u2771 30 \u2502 from metagpt.config2 import config \u2502\r\n\u2502 31 \u2502 from metagpt.context import Context \u2502\r\n\u2502 32 \u2502 from metagpt.roles import ( \u2502\r\n\u2502 33 \u2502 \u2502 Architect, \u2502\r\n\u2502 \u2502\r\n\\new_meta_env\\Lib\\site-packages\\metagpt-0.8.1-py3.11.egg\\metagpt\\ \u2502\r\n\u2502 config2.py:164 in <module> \u2502\r\n\u2502 \u2502\r\n\u2502 161 \u2502 return result \u2502\r\n\u2502 162 \u2502\r\n\u2502 163 \u2502\r\n\u2502 \u2771 164 config = Config.default() \u2502\r\n\\new_meta_env\\Lib\\site-packages\\metagpt-0.8.1-py3.11.egg\\metagpt\\ \u2502\r\n\u2502 config2.py:106 in default \u2502\r\n\u2502 \u2502\r\n\u2502 103 \u2502 \u2502 dicts = [dict(os.environ)] \u2502\r\n\u2502 104 \u2502 \u2502 dicts += [Config.read_yaml(path) for path in default_config_paths] \u2502\r\n\u2502 105 \u2502 \u2502 final = merge_dict(dicts) \u2502\r\n\u2502 \u2771 106 \u2502 \u2502 return Config(**final) \u2502\r\n\u2502 107 \u2502 \u2502\r\n\u2502 108 \u2502 @classmethod \u2502\r\n\u2502 109 \u2502 def from_llm_config(cls, llm_config: dict): \u2502\r\n\u2502 \u2502\r\n\\new_meta_env\\Lib\\site-packages\\pydantic\\main.py:176 in __init__ \u2502\r\n\u2502 \u2502\r\n\u2502 173 \u2502 \u2502 \"\"\" \u2502\r\n\u2502 174 \u2502 \u2502 # `__tracebackhide__` tells pytest and some other tools to omit this function fr \u2502\r\n\u2502 175 \u2502 \u2502 __tracebackhide__ = True \u2502\r\n\u2502 \u2771 176 \u2502 \u2502 self.__pydantic_validator__.validate_python(data, self_instance=self) \u2502\r\n\u2502 177 \u2502 \u2502\r\n\u2502 178 \u2502 # The following line sets a flag that we use to determine when `__init__` gets overr \u2502\r\n\u2502 179 \u2502 __init__.__pydantic_base_init__ = True # pyright: 
ignore[reportFunctionMemberAccess \u2502\r\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\r\nValidationError: 1 validation error for Config\r\nllm\r\n Field required [type=missing, input_value={'ALLUSERSPROFILE': 'C:\\\\..._INIT_AT_FORK': 'FALSE'}, input_type=dict]\r\n For further information visit https://errors.pydantic.dev/2.7/v/missing", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "f0df3144d68ed288f5ccce0c34d3939f8462ba98", "files": [{"path": "config/config2.yaml", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Config"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["config/config2.yaml"], "asset": []}}, {"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "e43aaec9322054f4dec92f44627533816588663b", "iss_html_url": "https://github.com/geekan/MetaGPT/issues/576", "iss_label": "", "title": "\u8bf7\u95eemetagpt\u662f\u5426\u652f\u6301\u5411\u91cf\u6570\u636e\uff0c\u6784\u5efa\u81ea\u5df1\u7684\u77e5\u8bc6\u5e93", "body": "\u8bf7\u95eemetagpt\u662f\u5426\u652f\u6301\u5411\u91cf\u6570\u636e\uff0c\u6784\u5efa\u81ea\u5df1\u7684\u77e5\u8bc6\u5e93", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "e43aaec9322054f4dec92f44627533816588663b", "files": [{"path": "/metagpt/document_store", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": ["/metagpt/document_store"], "test": [], "config": [], "asset": []}}, {"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "be56351e000a0f08562820fb04f6fdbe34d9e655", "iss_html_url": "https://github.com/geekan/MetaGPT/issues/205", "iss_label": "", "title": "Rate Limited error", "body": "openai.error.RateLimitError: Rate limit reached for 10KTPM-200RPM in organization org-fK5bb25UFhVbebfBtfCejGc4 on tokens per min. Limit: 10000 / min. Please try again in 6ms. 
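Until there is a built-in resume, the usual client-side mitigation is exponential backoff around the completion call. A rough sketch, not MetaGPT's code; in real use the except clause should target `openai.error.RateLimitError` specifically rather than `Exception`:

```python
import random
import time

def call_with_backoff(call, max_retries: int = 5, base_delay: float = 2.0):
    """Retry a zero-argument `call`, doubling the wait (plus jitter) each time."""
    for attempt in range(max_retries):
        try:
            return call()
        except Exception as exc:  # narrow to the provider's rate-limit error in practice
            if attempt == max_retries - 1:
                raise
            delay = base_delay * 2 ** attempt + random.uniform(0, 1)
            print(f"rate limited ({exc}); retrying in {delay:.1f}s")
            time.sleep(delay)
```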
Contact us through our help center at help.openai.com if you continue to have issues.\r\n\r\nMaybe a way to resume so all the runtime isn't just lost?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "be56351e000a0f08562820fb04f6fdbe34d9e655", "files": [{"path": "metagpt/provider/openai_api.py", "Loc": {"('OpenAIGPTAPI', '_achat_completion_stream', 150)": {"mod": []}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["metagpt/provider/openai_api.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "fd7feb57fac8d37509b1325cad502d2f65d59956", "iss_html_url": "https://github.com/geekan/MetaGPT/issues/1553", "iss_label": "inactive", "title": "ValueError: Creator not registered for key: LLMType.OLLAMA", "body": "**Bug description**\r\n<!-- Clearly and directly describe the current bug -->\r\nI using ***MetaGPT ver 0.8.1*** but when use RAG with method **SimpleEngine.from_docs** have error ***ValueError: Creator not registered for key: LLMType.OLLAMA***\r\n\r\n<!-- **Bug solved method** -->\r\n<!-- If you solved the bug, describe the idea or process to solve the current bug. Of course, you can also paste the URL address of your Pull Request. -->\r\n<!-- If not, provide more auxiliary information to facilitate our further positioning and investigation -->\r\n\r\n**Environment information**\r\n<!-- Environment\uff1aSystem version (like ubuntu 22.04), Python version (conda python 3.7), LLM type and model (OpenAI gpt-4-1106-preview) -->\r\n\r\n- LLM type and model name: ollama and model: hf.co/hugging-quants/Llama-3.2-3B-Instruct-Q8_0-GGUF\r\n- System version:\r\n- Python version: 3.10\r\n- MetaGPT version or branch: 0.8.1\r\n\r\n<!-- Dependent packagess\uff1athe packages version cause the bug(like `pydantic 1.10.8`), installation method\uff08like `pip install metagpt` or `pip install from source` or `run in docker`\uff09 -->\r\n\r\n- packages version:\r\n- installation method: \r\n\r\n**Screenshots or logs**\r\n<!-- Screenshots or logs of the bug can help us understand the problem more quickly -->\r\n***config2.yaml***\r\nembedding:\r\n api_type: \"ollama\"\r\n model: \"hf.co/hugging-quants/Llama-3.2-3B-Instruct-Q8_0-GGUF\"\r\n base_url: \"http://127.0.0.1:11434/api\"\r\n\r\nllm:\r\n api_type: \"ollama\"\r\n model: \"hf.co/hugging-quants/Llama-3.2-3B-Instruct-Q8_0-GGUF\"\r\n base_url: \"http://127.0.0.1:11434/api\"\r\n\r\n***Error Response***\r\n[/usr/local/lib/python3.10/dist-packages/metagpt/rag/factories/base.py](https://localhost:8080/#) in get_instance(self, key, **kwargs)\r\n 27 return creator(**kwargs)\r\n 28 \r\n---> 29 raise ValueError(f\"Creator not registered for key: {key}\")\r\n 30 \r\n 31 \r\n\r\nValueError: Creator not registered for key: LLMType.OLLAMA\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [{"path": "config/config2.yaml", "Loc": [28]}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Config"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["config/config2.yaml"], "asset": []}}, {"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "df8d1124c68b62bb98c71b6071abf5efe6293dba", "iss_html_url": 
"https://github.com/geekan/MetaGPT/issues/15", "iss_label": "", "title": "\u8bf7\u95ee\u5982\u4f55\u914d\u7f6e\u4f7f\u7528Azure\u4e0a\u7684api\uff1f", "body": "\u4f60\u597d\uff0c \r\n\u6211\u770b\u5230\u6587\u6863\u4e2d\u9700\u8981\u914d\u7f6eopenAI\u7684key\uff0c\u4f46\u662f\u6211\u6ce8\u610f\u5230\u5728provider\u4e2d\u6709azure_api\u7684\u76f8\u5173\u6587\u4ef6,\r\n\u8bf7\u95ee\u662f\u5426\u5728\u54ea\u4e2a\u5730\u65b9\u53ef\u4ee5\u914d\u7f6e\u8ba9\u4ed6\u4f7f\u7528azure\u63d0\u4f9b\u7684\u670d\u52a1\uff1f", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "df8d1124c68b62bb98c71b6071abf5efe6293dba", "files": [{"path": "config/config.yaml", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Config"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["config/config.yaml"], "asset": []}}, {"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "dfa33fcdaade1e4f8019835bf065d372d76724ae", "iss_html_url": "https://github.com/geekan/MetaGPT/issues/924", "iss_label": "", "title": "GLM4\u4e00\u76f4\u62a5\u9519", "body": "2024-02-22 16:50:26.666 | ERROR | metagpt.utils.common:log_it:476 - Finished call to 'metagpt.actions.action_node.ActionNode._aask_v1' after 80.109(s), this was the 5th time calling it. exp: 1 validation error for PM_NODE_AN\r\n Value error, Missing fields: {'Full API spec', 'Required Python packages', 'Required Other language third-party packages'} [type=value_error, input_value={'Required JavaScript pac...ation and development.'}, input_type=dict]\r\n For further information visit https://errors.pydantic.dev/2.5/v/value_error", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "dfa33fcdaade1e4f8019835bf065d372d76724ae", "files": [{"path": "config/config2.yaml", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Config\nCode"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["config/config2.yaml"], "asset": []}}, {"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "80a189ad4a1546f8c1a9dbe00c42725868c35e5e", "iss_html_url": "https://github.com/geekan/MetaGPT/issues/135", "iss_label": "", "title": "failed to launch chromium browser process errors", "body": "get errors on launch of browser process; below is the error from terminal which happens for all browser processes trying to launch.\r\n\r\n```\r\nINFO | metagpt.utils.mermaid:mermaid_to_file:38 - Generating /Users/lopezdp/DevOps/Ai_MetaGPT/workspace/test_app/resources/competitive_analysis.pdf..\r\n\r\nError: Failed to launch the browser process! 
spawn /usr/bin/chromium ENOENT\r\n\r\n\r\nTROUBLESHOOTING: https://pptr.dev/troubleshooting\r\n\r\n at ChildProcess.onClose (file:///Users/lopezdp/DevOps/Ai_MetaGPT/node_modules/@puppeteer/browsers/lib/esm/launch.js:253:24)\r\n at ChildProcess.emit (node:events:513:28)\r\n at Process.ChildProcess._handle.onexit (node:internal/child_process:291:12)\r\n at onErrorNT (node:internal/child_process:485:16)\r\n at processTicksAndRejections (node:internal/process/task_queues:83:21)\r\n```\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "80a189ad4a1546f8c1a9dbe00c42725868c35e5e", "files": [{"path": "config/puppeteer-config.json", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Config"}, "loctype": {"code": ["config/puppeteer-config.json"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "8d98ce34e54eb6250f1f2cf60f5d4dd66d462a5d", "iss_html_url": "https://github.com/geekan/MetaGPT/issues/1115", "iss_label": "", "title": "The following error appears on every run", "body": "![image](https://github.com/geekan/MetaGPT/assets/115678682/1fb58e0b-47a7-4e1f-a7b7-924ea9adedb0)\r\n\r\n2024-03-27 11:15:59.019 | ERROR | metagpt.utils.common:wrapper:631 - Exception occurs, start to serialize the project, exp:\r\nTraceback (most recent call last):\r\n File \"D:\\andconda\\envs\\metagpt\\lib\\site-packages\\tenacity\\__init__.py\", line 382, in __call__\r\n result = fn(*args, **kwargs)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\utils\\repair_llm_raw_output.py\", line 296, in retry_parse_json_text\r\n parsed_data = CustomDecoder(strict=False).decode(output)\r\njson.decoder.JSONDecodeError: Unterminated string starting at: line 13 column 25 (char 3485)\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"D:\\andconda\\envs\\metagpt\\lib\\site-packages\\tenacity\\_asyncio.py\", line 50, in __call__\r\n result = await fn(*args, **kwargs)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\actions\\action_node.py\", line 425, in _aask_v1\r\n parsed_data = llm_output_postprocess(\r\ntenacity.RetryError: RetryError[<Future at 0x1f1a7f31d30 state=finished raised JSONDecodeError>]\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\utils\\common.py\", line 640, in wrapper\r\n return await func(self, *args, **kwargs)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\roles\\role.py\", line 550, in run\r\n rsp = await self.react()\r\ntenacity.RetryError: RetryError[<Future at 0x1f1a7f31160 state=finished raised RetryError>]\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\utils\\common.py\", line 626, in wrapper\r\n result = await func(self, *args, **kwargs)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\team.py\", line 134, in run\r\n await self.env.run()\r\nException: Traceback (most recent call last):\r\n File \"D:\\andconda\\envs\\metagpt\\lib\\site-packages\\tenacity\\__init__.py\", line 382, in __call__\r\n result = fn(*args, **kwargs)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\utils\\repair_llm_raw_output.py\", line 296, in 
retry_parse_json_text\r\n parsed_data = CustomDecoder(strict=False).decode(output)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\utils\\custom_decoder.py\", line 297, in decode\r\n return super().decode(s)\r\n File \"D:\\andconda\\envs\\metagpt\\lib\\json\\decoder.py\", line 337, in decode\r\n obj, end = self.raw_decode(s, idx=_w(s, 0).end())\r\n File \"D:\\andconda\\envs\\metagpt\\lib\\json\\decoder.py\", line 353, in raw_decode\r\n obj, end = self.scan_once(s, idx)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\utils\\custom_decoder.py\", line 65, in scan_once\r\n return _scan_once(string, idx)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\utils\\custom_decoder.py\", line 36, in _scan_once\r\n return parse_object((string, idx + 1), strict, _scan_once, object_hook, object_pairs_hook, memo)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\utils\\custom_decoder.py\", line 164, in JSONObject\r\n value, end = scan_once(s, end)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\utils\\custom_decoder.py\", line 34, in _scan_once\r\n return parse_string(string, idx + 1, strict, delimiter=nextchar)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\utils\\custom_decoder.py\", line 227, in py_scanstring\r\n raise JSONDecodeError(\"Unterminated string starting at\", s, begin)\r\njson.decoder.JSONDecodeError: Unterminated string starting at: line 13 column 25 (char 3485)\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"D:\\andconda\\envs\\metagpt\\lib\\site-packages\\tenacity\\_asyncio.py\", line 50, in __call__\r\n result = await fn(*args, **kwargs)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\actions\\action_node.py\", line 425, in _aask_v1\r\n parsed_data = llm_output_postprocess(\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\provider\\postprocess\\llm_output_postprocess.py\", line 19, in llm_output_postprocess\r\n result = postprocess_plugin.run(output=output, schema=schema, req_key=req_key)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\provider\\postprocess\\base_postprocess_plugin.py\", line 68, in run\r\n new_output = self.run_repair_llm_output(output=output, schema=schema, req_key=req_key)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\provider\\postprocess\\base_postprocess_plugin.py\", line 32, in run_repair_llm_output\r\n parsed_data = self.run_retry_parse_json_text(content)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\provider\\postprocess\\base_postprocess_plugin.py\", line 47, in run_retry_parse_json_text\r\n parsed_data = retry_parse_json_text(output=content) # should use output=content\r\n File \"D:\\andconda\\envs\\metagpt\\lib\\site-packages\\tenacity\\__init__.py\", line 289, in wrapped_f\r\n return self(f, *args, **kw)\r\n File \"D:\\andconda\\envs\\metagpt\\lib\\site-packages\\tenacity\\__init__.py\", line 379, in __call__\r\n do = self.iter(retry_state=retry_state)\r\n File \"D:\\andconda\\envs\\metagpt\\lib\\site-packages\\tenacity\\__init__.py\", line 326, in iter\r\n raise retry_exc from fut.exception()\r\ntenacity.RetryError: RetryError[<Future at 0x1f1a7f31d30 state=finished raised JSONDecodeError>]\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\utils\\common.py\", line 640, in wrapper\r\n return await func(self, *args, **kwargs)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\roles\\role.py\", line 550, in run\r\n 
rsp = await self.react()\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\roles\\role.py\", line 517, in react\r\n rsp = await self._react()\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\roles\\role.py\", line 463, in _react\r\n rsp = await self._act()\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\roles\\role.py\", line 392, in _act\r\n response = await self.rc.todo.run(self.rc.history)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\actions\\design_api.py\", line 58, in run\r\n doc = await self._update_system_design(filename=filename)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\actions\\design_api.py\", line 86, in _update_system_design\r\n system_design = await self._new_system_design(context=prd.content)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\actions\\design_api.py\", line 73, in _new_system_design\r\n node = await DESIGN_API_NODE.fill(context=context, llm=self.llm)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\actions\\action_node.py\", line 505, in fill\r\n return await self.simple_fill(schema=schema, mode=mode, images=images, timeout=timeout, exclude=exclude)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\actions\\action_node.py\", line 457, in simple_fill\r\n content, scontent = await self._aask_v1(\r\n File \"D:\\andconda\\envs\\metagpt\\lib\\site-packages\\tenacity\\_asyncio.py\", line 88, in async_wrapped\r\n return await fn(*args, **kwargs)\r\n File \"D:\\andconda\\envs\\metagpt\\lib\\site-packages\\tenacity\\_asyncio.py\", line 47, in __call__\r\n do = self.iter(retry_state=retry_state)\r\n File \"D:\\andconda\\envs\\metagpt\\lib\\site-packages\\tenacity\\__init__.py\", line 326, in iter\r\n raise retry_exc from fut.exception()\r\ntenacity.RetryError: RetryError[<Future at 0x1f1a7f31160 state=finished raised RetryError>]", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "8d98ce34e54eb6250f1f2cf60f5d4dd66d462a5d", "files": [{"path": "metagpt/strategy/planner.py", "Loc": {"('Planner', 'update_plan', 68)": {"mod": [75]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["metagpt/strategy/planner.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "bdf9d224b5a05228897553a29214adc074fbc465", "iss_html_url": "https://github.com/geekan/MetaGPT/issues/754", "iss_label": "", "title": "SubscriptionRunner", "body": "import asyncio\r\nfrom metagpt.subscription import SubscriptionRunner\r\nfrom metagpt.roles import Searcher\r\nfrom metagpt.schema import Message\r\n\r\nasync def trigger():\r\n while True:\r\n yield Message(\"the latest news about OpenAI\")\r\n await asyncio.sleep(1)\r\n\r\n\r\nasync def callback(msg: Message):\r\n print(msg.content)\r\n\r\n\r\n# async def main():\r\n# aa = trigger()\r\n# async for i in aa:\r\n# await callback(i)\r\nasync def main():\r\n pd = SubscriptionRunner()\r\n await pd.subscribe(Searcher(), trigger(), callback)\r\n await pd.run()\r\n\r\nasyncio.run(main())\r\n\u5728\u521b\u5efaRunner\u65f6\u5019\u62a5\u9519\uff0c0.6.3\u7248\u672c\r\nTraceback (most recent call last):\r\n File \"e:\\tmp\\metatest\\OSSWatcher .py\", line 44, in <module>\r\n asyncio.run(main())\r\n File \"C:\\Users\\888888\\.conda\\envs\\mp\\Lib\\asyncio\\runners.py\", line 190, in run\r\n return runner.run(main)\r\n ^^^^^^^^^^^^^^^^\r\n File 
\"C:\\Users\\uweih034\\.conda\\envs\\mp\\Lib\\asyncio\\runners.py\", line 118, in run\r\n return self._loop.run_until_complete(task)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\888888\\.conda\\envs\\mp\\Lib\\asyncio\\base_events.py\", line 653, in run_until_complete\r\n return future.result()\r\n ^^^^^^^^^^^^^^^\r\n File \"e:\\tmp\\metatest\\OSSWatcher .py\", line 40, in main\r\n pd = SubscriptionRunner()\r\n ^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\888888\\.conda\\envs\\mp\\Lib\\site-packages\\pydantic\\main.py\", line 164, in __init__\r\n __pydantic_self__.__pydantic_validator__.validate_python(data, self_instance=__pydantic_self__)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\888888\\.conda\\envs\\mp\\Lib\\site-packages\\pydantic\\_internal\\_mock_val_ser.py\", line 47, in __getattr__\r\n raise PydanticUserError(self._error_message, code=self._code)\r\npydantic.errors.PydanticUserError: `SubscriptionRunner` is not fully defined; you should define `Environment`, then call `SubscriptionRunner.model_rebuild()`.\r\n\r\nFor further information visit https://errors.pydantic.dev/2.5/u/class-not-fully-defined", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "bdf9d224b5a05228897553a29214adc074fbc465", "files": [{"path": "metagpt/environment.py", "Loc": {"('Environment', None, 27)": {"mod": []}}, "status": "modified"}, {"Loc": {"(None, None, None)": {"mod": [21]}}, "path": null}]}, "own_code_loc": [{"Loc": {"(None, None, None)": {"mod": [21]}}, "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null, "metagpt/environment.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "f88fa9e2df09c28f867bda54ec24fa25b50be830", "iss_html_url": "https://github.com/geekan/MetaGPT/issues/178", "iss_label": "", "title": "Specify Directory of pdf documents as Knowledge Base", "body": "Hi, how can we specify any folder which includes pdf documents as a knowledge base and create a new Role of Document Controller to extract specific information from within the documents in KB?\r\n\r\nAny help would be highly appreciated\r\n\r\nThanks much appreciated", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "f88fa9e2df09c28f867bda54ec24fa25b50be830", "files": [{"path": "metagpt/document_store", "Loc": {}}, {"path": "tests/metagpt/document_store", "Loc": {}}, {"path": "examples/search_kb.py", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["examples/search_kb.py"], "doc": ["metagpt/document_store", "tests/metagpt/document_store"], "test": [], "config": [], "asset": []}}, {"organization": "langflow-ai", "repo_name": "langflow", "base_commit": "7e756b9db56677636e6920c1e6628d13e980aec7", "iss_html_url": "https://github.com/langflow-ai/langflow/issues/6006", "iss_label": "bug", "title": "All custom components throw errors after update to latest version", "body": "### Bug Description\n\n```\n[01/29/25 00:15:00] ERROR 2025-01-29 00:15:00 - ERROR - chat - Error building vertices: Error serializing vertex build response: Unable to serialize unknown type: chat.py:405\n <class 'pydantic._internal._model_construction.ModelMetaclass'> 
\n``` \n\n### Reproduction\n\n1. langflow updated to v1.1.2 from v1.1.1\n2. all previously created custom components throwing error:\n\n[01/29/25 00:24:09] ERROR 2025-01-29 00:24:09 - ERROR - chat - Error building vertices: Error serializing vertex build response: Unable to serialize unknown type: chat.py:405\n <class 'pydantic._internal._model_construction.ModelMetaclass'> \n\n### Expected behavior\n\nLangflow should build tool correctly, as on previous version. \n\nSimplified failing code:\n```python\nfrom langflow.custom import Component\nfrom langflow.io import Output\nfrom langflow.schema import Data\nfrom langflow.field_typing import Tool\nfrom langchain.tools import StructuredTool\nfrom pydantic import BaseModel, Field\n\nclass MinimalSchema(BaseModel):\n input_text: str = Field(..., description=\"Text Input\")\n\nclass SimpleToolComponentMinimalSchema(Component):\n display_name = \"Simple Tool Minimal Schema Test\"\n description = \"Component with StructuredTool and minimal schema\"\n outputs = [Output(display_name=\"Tool\", name=\"test_tool\", method=\"build_tool\")]\n\n class MinimalSchema(BaseModel): # Define inner schema\n input_text: str = Field(..., description=\"Text Input\")\n\n def build_tool(self) -> Tool:\n return StructuredTool.from_function( # Return directly - simplified\n name=\"minimal_tool\",\n description=\"Minimal tool for testing schema\",\n func=self.run_tool,\n args_schema=SimpleToolComponentMinimalSchema.MinimalSchema\n )\n\n def run_tool(self, input_text: str) -> str:\n return f\"Tool received: {input_text}\"\n``` \n\n\n### Who can help?\n\n_No response_\n\n### Operating System\n\nwsl Ubuntu latest\n\n### Langflow Version\n\n1.1.2\n\n### Python Version\n\n3.12\n\n### Screenshot\n\n_No response_\n\n### Flow File\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [{"Loc": [40], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "langflow-ai", "repo_name": "langflow", "base_commit": "19818db68b507332be71f30dd90d16bf4c7d6f83", "iss_html_url": "https://github.com/langflow-ai/langflow/issues/3718", "iss_label": "enhancement", "title": "Add pgVector in the building instructions for the PostgreSQL Docker image", "body": "### Feature Request\r\n\r\nInclude the pgVector component with the Docker build instructions. This would provide the use with a fully functional PostgreSQL Vector DB, ready to be used inside LangFlow.\r\n\r\n### Motivation\r\n\r\nI am not a programmer, neither I do have proper knowledge of SQL, but I liked to play with some RAG ideas and LangFlow seems perfect. \r\nSo, after installing the Docker version for development of LangFlow, I noticed that the PostgreSQL server is missing the pgVector component, or at least that is what I understood from the error messages. \r\nPerhaps, it would be useful if the pgVector could be included in the Docker container, so having the user to just activate it on the SQL database. 
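Once the extension's files are present in the server image, activating it is a single statement per database. A sketch from Python; the connection details are made-up placeholders for whatever the compose file actually configures:

```python
import psycopg2

conn = psycopg2.connect(
    host="localhost",   # placeholder: the compose-managed PostgreSQL service
    port=5432,
    dbname="langflow",
    user="langflow",
    password="langflow",
)
conn.autocommit = True
with conn.cursor() as cur:
    # Succeeds only if pgvector was installed into the server image,
    # e.g. via the "make && make install" steps in the Dockerfile below.
    cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
conn.close()
```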
Anyway, I might be wrong, so in that case please forgive me.\r\n\r\n### Your Contribution\r\n\r\nAfter looking into the repository and searching around, with the help of AI (of course!), I found that the Docker instructions for the PostgreSQL server are defined inside the file \\docker\\cdk.Dockerfile (hope it's correct), and these might be the instructions to include pgVector:\r\n\r\n```\r\nFROM --platform=linux/amd64 python:3.10-slim\r\n\r\nWORKDIR /app\r\n\r\n# Install Poetry and build dependencies\r\nRUN apt-get update && apt-get install -y \\\r\n gcc \\\r\n g++ \\\r\n curl \\\r\n build-essential \\\r\n git \\\r\n postgresql-server-dev-all \\\r\n && rm -rf /var/lib/apt/lists/*\r\n\r\n# Install Poetry\r\nRUN curl -sSL https://install.python-poetry.org | python3 -\r\n\r\n# Add Poetry to PATH\r\nENV PATH=\"${PATH}:/root/.local/bin\"\r\n\r\n# Copy the pyproject.toml and poetry.lock files\r\nCOPY poetry.lock pyproject.toml ./\r\n\r\n# Copy the rest of the application codes\r\nCOPY ./ ./\r\n\r\n# Install dependencies\r\nRUN poetry config virtualenvs.create false && poetry install --no-interaction --no-ansi\r\n\r\n# Install pgvector extension\r\nRUN git clone https://github.com/pgvector/pgvector.git /tmp/pgvector && \\\r\n cd /tmp/pgvector && \\\r\n make && \\\r\n make install && \\\r\n rm -rf /tmp/pgvector\r\n\r\n# Install additional dependencies\r\nRUN poetry add botocore\r\nRUN poetry add pymysql\r\n\r\n# Command to run your application\r\nCMD [\"sh\", \"./container-cmd-cdk.sh\"]\r\n```\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "19818db68b507332be71f30dd90d16bf4c7d6f83", "files": [{"path": "docker_example/docker-compose.yml", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3\nor\n4", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Config\nCode"}, "loctype": {"code": [], "doc": ["docker_example/docker-compose.yml"], "test": [], "config": [], "asset": []}}, {"organization": "langflow-ai", "repo_name": "langflow", "base_commit": "12a46b6936e23829d9956d4d5f1fa51faff76137", "iss_html_url": "https://github.com/langflow-ai/langflow/issues/965", "iss_label": "stale", "title": "Method for Dynamically Manipulating Parameters of a Custom Component", "body": "```python\r\nclass DynamicConfigCustomComponent(CustomComponent):\r\n def build_config(self, prev_selection=None):\r\n config = {\r\n \"param1\": {\"display_name\": \"Parameter 1\"},\r\n \"param2\": {\r\n \"display_name\": \"Parameter 2\",\r\n \"options\": [1, 2, 3],\r\n \"value\": 1,\r\n },\r\n }\r\n \r\n if prev_selection is not None:\r\n if prev_selection == 2:\r\n config[\"param3\"] = {\"display_name\": \"Parameter 3\", \"value\": \"hello\"}\r\n \r\n return config\r\n\r\n``` \r\nI want to dynamically change different values depending on the type of component that is input or connected when using a custom component, as shown in the attached code. For example, in Langflow's prompt template, when you change the text, the key value input into that component is dynamically displayed in the list. 
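Stripped of the component plumbing, the behaviour being asked for is just recomputing the config dict from the previous selection, with the framework re-invoking it on every change so the UI refreshes. This standalone version only illustrates the shape of that logic; it is not Langflow's API:

```python
def build_config(prev_selection=None):
    config = {
        "param1": {"display_name": "Parameter 1"},
        "param2": {"display_name": "Parameter 2", "options": [1, 2, 3], "value": 1},
    }
    if prev_selection == 2:  # extra field appears only for this choice
        config["param3"] = {"display_name": "Parameter 3", "value": "hello"}
    return config

assert "param3" not in build_config(prev_selection=1)
assert "param3" in build_config(prev_selection=2)
```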
Is there any way to do this?\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "12a46b6936e23829d9956d4d5f1fa51faff76137", "files": [{"path": "src/frontend/src/types/components/index.ts", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/frontend/src/types/components/index.ts"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "Significant-Gravitas", "repo_name": "AutoGPT", "base_commit": "ad7cefa10c0647feee85114d58559fcf83ba6743", "iss_html_url": "https://github.com/Significant-Gravitas/AutoGPT/issues/1902", "iss_label": "setup", "title": "Error with 'python -m autogpt' command. Please set your OpenAI API key in .env or as an environment variable. You can get your key from https://beta.openai.com/account/api-keys", "body": "### Duplicates\n\n- [X] I have searched the existing issues\n\n### Steps to reproduce \ud83d\udd79\n\nInstalled the 'stable' version of the program\r\nI run 'python -m autogpt' command and comes up with an error.\r\n\r\n\r\n![Screenshot 2023-04-16 183147](https://user-images.githubusercontent.com/130889399/232320050-2b495403-55e9-4d43-b588-e53172eba533.jpg)\r\n\r\nI have paid Chat GPT and Open AI API accounts.\r\nFor Chat GPT I have access to version 4\r\nFor Open AI API I do not have access to version 4, I am on the version before this.\n\n### Current behavior \ud83d\ude2f\n\nError message ;Please set your OpenAI API key in .env or as an environment variable.\r\nYou can get your key from https://beta.openai.com/account/api-keys'\n\n### Expected behavior \ud83e\udd14\n\nShould load the program as to start commands\n\n### Your prompt \ud83d\udcdd\n\n```yaml\r\npython -m autogpt```\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "ad7cefa10c0647feee85114d58559fcf83ba6743", "files": [{"path": "run.sh", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "1\n0", "info_type": "Other\n\u73af\u5883\u53d8\u91cf /script shell\u7b49"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["run.sh"]}}, {"organization": "Significant-Gravitas", "repo_name": "AutoGPT", "base_commit": "90e6a55e378bc80352f01eb08122300b4d1a64ec", "iss_html_url": "https://github.com/Significant-Gravitas/AutoGPT/issues/2428", "iss_label": "function: logging", "title": "Add logging of user input of the role and goals", "body": "### Duplicates\r\n\r\n- [X] I have searched the existing issues\r\n\r\n### Summary \ud83d\udca1\r\n\r\nNow logs reflect only gpt's response but i don't really remember what exactly i input before. Please log it same as in the console. \r\nCurrent logging makes it a lot harder to debug\r\n\r\n### Examples \ud83c\udf08\r\n```\r\nAll packages are installed.\r\nWelcome back! Would you like me to return to being sc3?\r\nContinue with the last settings?\r\nName: sc3\r\nRole: warhammer 40k writer\r\nGoals: ['research the theme', 'do a 5000 symbols structurized explanation on wh40k lore', 'terminate']\r\nContinue (y/n): n\r\nWelcome to Auto-GPT! run with '--help' for more information.\r\nCreate an AI-Assistant: Enter the name of your AI and its role below. Entering nothing will load defaults.\r\nName your AI: For example, 'Entrepreneur-GPT'\r\nAI Name: da23eads\r\nda23eads here! 
I am at your service.\r\nDescribe your AI's role: For example, 'an AI designed to autonomously develop and run businesses with the sole goal of increasing your net worth.'\r\nda23eads is: wh 40k writer\r\nEnter up to 5 goals for your AI: For example: Increase net worth, Grow Twitter Account, Develop and manage multiple businesses autonomously'\r\nEnter nothing to load defaults, enter nothing when finished.\r\nGoal 1: research the theme\r\nGoal 2: do a plot esplanation on warhammer 40k universe\r\nGoal 3: terminate\r\nGoal 4:\r\nUsing memory of type: LocalCache\r\nUsing Browser: chrome\r\n- Thinking...\r\n```\r\n\r\n### Motivation \ud83d\udd26\r\n\r\nmake the world better", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [], "ass_file_loc": ["ai_settings.yml"], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Config"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["ai_settings.yml"], "asset": []}}, {"organization": "Significant-Gravitas", "repo_name": "AutoGPT", "base_commit": "16b7e7a91e7b6c73ddf3e7193cea53f1b45671fa", "iss_html_url": "https://github.com/Significant-Gravitas/AutoGPT/issues/4218", "iss_label": "setup", "title": "AutoGPT v0.3.1 crashes immediately after task given", "body": "### Which Operating System are you using?\r\n\r\nWindows\r\n\r\n### Which version of Auto-GPT are you using?\r\n\r\nLatest Release v0.3.1\r\n\r\n### GPT-3 or GPT-4?\r\n\r\nGPT-3.5\r\n\r\n### Steps to reproduce \ud83d\udd79\r\n\r\nWelcome to Auto-GPT! run with '--help' for more information.\r\nCreate an AI-Assistant: input '--manual' to enter manual mode.\r\n Asking user via keyboard...\r\nI want Auto-GPT to: Search Big Mac prices in EU countries\r\nUnable to automatically generate AI Config based on user desire. Falling back to manual mode.\r\nCreate an AI-Assistant: Enter the name of your AI and its role below. Entering nothing will load defaults.\r\nName your AI: For example, 'Entrepreneur-GPT'\r\n Asking user via keyboard...\r\nAI Name: MacGPT\r\nMacGPT here! 
I am at your service.\r\nDescribe your AI's role: For example, 'an AI designed to autonomously develop and run businesses with the sole goal of increasing your net worth.'\r\n Asking user via keyboard...\r\nMacGPT is: Search for Big Mc prices in EU countries\r\nEnter up to 5 goals for your AI: For example: Increase net worth, Grow Twitter Account, Develop and manage multiple businesses autonomously'\r\n Enter nothing to load defaults, enter nothing when finished.\r\n Asking user via keyboard...\r\nGoal 1: Conduct a thorough and accurate search of BigMc prices across EU countries\r\n Asking user via keyboard...\r\nGoal 2: Provide price per each EU capital\r\n Asking user via keyboard...\r\nGoal 3: Ensure that the information provided is up-to-date and accurate\r\n Asking user via keyboard...\r\nGoal 4: Continuously improve the search algorithm to increase the accuracy and efficiency of the search process.\r\n Asking user via keyboard...\r\nGoal 5: Do not crash ang give error - \"openai.error.AuthenticationError: <empty message>\"\r\nEnter your budget for API calls: For example: $1.50\r\n Enter nothing to let the AI run without monetary limit\r\n Asking user via keyboard...\r\nBudget: $1\r\nMacGPT has been created with the following details:\r\nName: MacGPT\r\nRole: Search for Big Mc prices in EU countries\r\nGoals:\r\n- Conduct a thorough and accurate search of BigMc prices across EU countries\r\n- Provide price per each EU capital\r\n- Ensure that the information provided is up-to-date and accurate\r\n- Continuously improve the search algorithm to increase the accuracy and efficiency of the search process.\r\n- Do not crash ang give error - \"openai.error.AuthenticationError: <empty message>\"\r\nUsing memory of type: LocalCache\r\nUsing Browser: chrome\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\makkolev\\AppData\\Local\\Programs\\Python\\Python310\\lib\\runpy.py\", line 196, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"C:\\Users\\user\\AppData\\Local\\Programs\\Python\\Python310\\lib\\runpy.py\", line 86, in _run_code\r\n exec(code, run_globals)\r\n File \"C:\\agpt\\autogpt\\__main__.py\", line 5, in <module>\r\n autogpt.cli.main()\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\click\\core.py\", line 1130, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\click\\core.py\", line 1055, in main\r\n rv = self.invoke(ctx)\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\click\\core.py\", line 1635, in invoke\r\n rv = super().invoke(ctx)\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\click\\core.py\", line 1404, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\click\\core.py\", line 760, in invoke\r\n return __callback(*args, **kwargs)\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\click\\decorators.py\", line 26, in new_func\r\n return f(get_current_context(), *args, **kwargs)\r\n File \"C:\\agpt\\autogpt\\cli.py\", line 90, in main\r\n run_auto_gpt(\r\n File \"C:\\agpt\\autogpt\\main.py\", line 186, in run_auto_gpt\r\n agent.start_interaction_loop()\r\n File \"C:\\agpt\\autogpt\\agent\\agent.py\", line 113, in start_interaction_loop\r\n assistant_reply = chat_with_ai(\r\n File \"C:\\agpt\\autogpt\\llm\\chat.py\", line 244, in chat_with_ai\r\n assistant_reply = create_chat_completion(\r\n File \"C:\\agpt\\autogpt\\llm\\llm_utils.py\", line 166, in create_chat_completion\r\n response = 
api_manager.create_chat_completion(\r\n File \"C:\\agpt\\autogpt\\llm\\api_manager.py\", line 55, in create_chat_completion\r\n response = openai.ChatCompletion.create\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\openai\\api_resources\\chat_completion.py\", line 25, in create\r\n return super().create(*args, **kwargs)\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\openai\\api_resources\\abstract\\engine_api_resource.py\", line 153, in create\r\n response, _, api_key = requestor.request(\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\openai\\api_requestor.py\", line 226, in request\r\n resp, got_stream = self._interpret_response(result, stream)\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\openai\\api_requestor.py\", line 619, in _interpret_response\r\n self._interpret_response_line(\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\openai\\api_requestor.py\", line 682, in _interpret_response_line\r\n raise self.handle_error_response(\r\nopenai.error.AuthenticationError: <empty message>\r\n\r\n### Current behavior \ud83d\ude2f\r\n\r\nCrashes multiple times. Open_API_key has been provided. Restarted virtual environment a couple of times.\r\nNB! Tried to start AutoGPT both with Windows Python3.10 way and via Docker. In both cases can't start start search and receive immediately error (below) - openai.error.AuthenticationError: <empty message>\r\n\r\n### Expected behavior \ud83e\udd14\r\n\r\nStarts correctly\r\n\r\n### Your prompt \ud83d\udcdd\r\n\r\n```AI Name: MacGPT\r\nMacGPT here! I am at your service.\r\nDescribe your AI's role: For example, 'an AI designed to autonomously develop and run businesses with the sole goal of increasing your net worth.'\r\n Asking user via keyboard...\r\nMacGPT is: Search for Big Mc prices in EU countries\r\nEnter up to 5 goals for your AI: For example: Increase net worth, Grow Twitter Account, Develop and manage multiple businesses autonomously'\r\n Enter nothing to load defaults, enter nothing when finished.\r\n Asking user via keyboard...\r\nGoal 1: Conduct a thorough and accurate search of BigMc prices across EU countries\r\n Asking user via keyboard...\r\nGoal 2: Provide price per each EU capital\r\n Asking user via keyboard...\r\nGoal 3: Ensure that the information provided is up-to-date and accurate\r\n Asking user via keyboard...\r\nGoal 4: Continuously improve the search algorithm to increase the accuracy and efficiency of the search process.\r\n Asking user via keyboard...\r\nGoal 5: Do not crash ang give error - \"openai.error.AuthenticationError: <empty message>\"\r\nEnter your budget for API calls: For example: $1.50\r\n Enter nothing to let the AI run without monetary limit\r\n Asking user via keyboard...\r\nBudget: $1\r\nMacGPT has been created with the following details:\r\nName: MacGPT\r\nRole: Search for Big Mc prices in EU countries\r\nGoals:\r\n- Conduct a thorough and accurate search of BigMc prices across EU countries\r\n- Provide price per each EU capital\r\n- Ensure that the information provided is up-to-date and accurate\r\n- Continuously improve the search algorithm to increase the accuracy and efficiency of the search process.\r\n- Do not crash ang give error - \"openai.error.AuthenticationError: <empty message>\"\r\nUsing memory of type: LocalCache\r\nUsing Browser: chrome\r\n```\r\n\r\n\r\n### Your Logs \ud83d\udcd2\r\n\r\n```log\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\user\\AppData\\Local\\Programs\\Python\\Python310\\lib\\runpy.py\", line 196, in _run_module_as_main\r\n 
return _run_code(code, main_globals, None,\r\n File \"C:\\Users\\makkolev\\AppData\\Local\\Programs\\Python\\Python310\\lib\\runpy.py\", line 86, in _run_code\r\n exec(code, run_globals)\r\n File \"C:\\agpt\\autogpt\\__main__.py\", line 5, in <module>\r\n autogpt.cli.main()\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\click\\core.py\", line 1130, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\click\\core.py\", line 1055, in main\r\n rv = self.invoke(ctx)\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\click\\core.py\", line 1635, in invoke\r\n rv = super().invoke(ctx)\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\click\\core.py\", line 1404, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\click\\core.py\", line 760, in invoke\r\n return __callback(*args, **kwargs)\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\click\\decorators.py\", line 26, in new_func\r\n return f(get_current_context(), *args, **kwargs)\r\n File \"C:\\agpt\\autogpt\\cli.py\", line 90, in main\r\n run_auto_gpt(\r\n File \"C:\\agpt\\autogpt\\main.py\", line 186, in run_auto_gpt\r\n agent.start_interaction_loop()\r\n File \"C:\\agpt\\autogpt\\agent\\agent.py\", line 113, in start_interaction_loop\r\n assistant_reply = chat_with_ai(\r\n File \"C:\\agpt\\autogpt\\llm\\chat.py\", line 244, in chat_with_ai\r\n assistant_reply = create_chat_completion(\r\n File \"C:\\agpt\\autogpt\\llm\\llm_utils.py\", line 166, in create_chat_completion\r\n response = api_manager.create_chat_completion(\r\n File \"C:\\agpt\\autogpt\\llm\\api_manager.py\", line 55, in create_chat_completion\r\n response = openai.ChatCompletion.create(\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\openai\\api_resources\\chat_completion.py\", line 25, in create\r\n return super().create(*args, **kwargs)\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\openai\\api_resources\\abstract\\engine_api_resource.py\", line 153, in create\r\n response, _, api_key = requestor.request(\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\openai\\api_requestor.py\", line 226, in request\r\n resp, got_stream = self._interpret_response(result, stream)\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\openai\\api_requestor.py\", line 619, in _interpret_response\r\n self._interpret_response_line(\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\openai\\api_requestor.py\", line 682, in _interpret_response_line\r\n raise self.handle_error_response(\r\nopenai.error.AuthenticationError: <empty message>\r\nPress any key to continue . . 
.\r\n```\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [], "ass_file_loc": [".env"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "1", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [".env"], "asset": []}}, {"organization": "fastapi", "repo_name": "fastapi", "base_commit": "c6aa28bea2f751a91078bd8d845133ff83f352bf", "iss_html_url": "https://github.com/fastapi/fastapi/issues/5424", "iss_label": "question\nanswered\nquestion-migrate", "title": "How to identify query params with keys only and no value", "body": "### First Check\n\n- [X] I added a very descriptive title to this issue.\n- [X] I used the GitHub search to find a similar issue and didn't find it.\n- [X] I searched the FastAPI documentation, with the integrated search.\n- [X] I already searched in Google \"How to X in FastAPI\" and didn't find any information.\n- [X] I already read and followed all the tutorial in the docs and didn't find an answer.\n- [X] I already checked if it is not related to FastAPI but to [Pydantic](https://github.com/samuelcolvin/pydantic).\n- [X] I already checked if it is not related to FastAPI but to [Swagger UI](https://github.com/swagger-api/swagger-ui).\n- [X] I already checked if it is not related to FastAPI but to [ReDoc](https://github.com/Redocly/redoc).\n\n### Commit to Help\n\n- [X] I commit to help with one of those options \ud83d\udc46\n\n### Example Code\n\n```python\n@router.get(\"/events\")\r\ndef get_alerts(request: Request):\r\n params = request.query_params\n```\n\n\n### Description\n\nI want to handle a use case where if a query param is passed but no value is set, I would return a specific message. 
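A note on the Auto-GPT AuthenticationError above: an empty-message AuthenticationError almost always means no usable API key ever reached the OpenAI client. A minimal pre-flight check, under the assumption that the key lives in the standard `.env` file as `OPENAI_API_KEY=...` and that the `python-dotenv` package is installed; the placeholder test is also an assumption:

```python
import os

from dotenv import load_dotenv  # assumes python-dotenv is installed

load_dotenv()  # reads OPENAI_API_KEY from a .env file in the working directory
key = os.getenv("OPENAI_API_KEY", "")
if not key or "your" in key.lower():
    # an unset or placeholder key is exactly what yields AuthenticationError: <empty message>
    raise SystemExit("OPENAI_API_KEY is missing or still a placeholder; fix .env before launching")
print(f"Key loaded, ending in ...{key[-4:]}")
```

If this check fails, the crash in the logs above is expected regardless of whether Auto-GPT is started natively or via Docker.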
 I want a different behavior from when it is not passed at all.\r\n\r\nI tried using request.query_params, but it doesn't include the key in the request either.\r\n\r\nPostman request looks like this:\r\n<img width=\"805\" alt=\"image\" src=\"https://user-images.githubusercontent.com/104721284/192010955-160c2418-63f3-46ac-9f64-a416b92c03ae.png\">\r\n\r\n\r\n\n\n### Operating System\n\nmacOS\n\n### Operating System Details\n\n_No response_\n\n### FastAPI Version\n\n0.70.0\n\n### Python Version\n\n3.9\n\n### Additional Context\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [{"Loc": [20], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "fastapi", "repo_name": "fastapi", "base_commit": "c6aa28bea2f751a91078bd8d845133ff83f352bf", "iss_html_url": "https://github.com/fastapi/fastapi/issues/5425", "iss_label": "question\nanswered\nquestion-migrate", "title": "Error while opening swagger docs while uploading file in APIRouter", "body": "### First Check\r\n\r\n- [X] I added a very descriptive title to this issue.\r\n- [X] I used the GitHub search to find a similar issue and didn't find it.\r\n- [X] I searched the FastAPI documentation, with the integrated search.\r\n- [X] I already searched in Google \"How to X in FastAPI\" and didn't find any information.\r\n- [X] I already read and followed all the tutorial in the docs and didn't find an answer.\r\n- [X] I already checked if it is not related to FastAPI but to [Pydantic](https://github.com/samuelcolvin/pydantic).\r\n- [X] I already checked if it is not related to FastAPI but to [Swagger UI](https://github.com/swagger-api/swagger-ui).\r\n- [X] I already checked if it is not related to FastAPI but to [ReDoc](https://github.com/Redocly/redoc).\r\n\r\n### Commit to Help\r\n\r\n- [X] I commit to help with one of those options \ud83d\udc46\r\n\r\n### Example Code\r\n\r\n```python\r\nrouter = APIRouter(\r\n prefix='/predict',\r\n tags=[\"Prediction\"],\r\n responses={404: {\"description\": \"Not Found\"}}\r\n)\r\n\r\n\r\n@router.post(\"/\")\r\nasync def predict(file: UploadFile = File(...)):\r\n extension = file.filename.split(\".\")[-1] in (\"jpg\", \"jpeg\", \"png\")\r\n if not extension:\r\n raise HTTPException(status_code=400, detail=\"File Format Error : Uploaded file must be a JPG, JPEG or PNG file\")\r\n image = read_image_file(await file.read())\r\n result = predict_pneumonia(image)\r\n if result > 0.6:\r\n return JSONResponse(content={\"prediction\": \"pneumonia\"})\r\n return JSONResponse(content={\"prediction\": \"no pneumonia\"})\r\n```\r\n\r\n\r\n### Description\r\n\r\nI am just trying to create an ML prediction application using FastAPI. While uploading images, the Swagger docs don't load and it shows the below-mentioned error. 
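For the query-params question above: Starlette parses the query string with blank values kept, so a key passed with no value still shows up in `request.query_params` as an empty string. A sketch distinguishing the three cases; the route and the `alert` parameter name are made up for illustration:

```python
from fastapi import FastAPI, Request

app = FastAPI()

@app.get("/events")
async def get_events(request: Request):
    params = request.query_params  # "?alert" and "?alert=" both surface as an empty string
    if "alert" not in params:
        return {"detail": "'alert' was not passed at all"}
    if params["alert"] == "":
        return {"detail": "'alert' was passed as a key with no value"}
    return {"alert": params["alert"]}
```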
But the endpoint works perfectly when tried with Postman.\r\n\r\n![image](https://user-images.githubusercontent.com/58306412/192039571-1eed5f98-cd67-49ec-97ec-364b28ace0f9.png)\r\n```\r\nERROR: Exception in ASGI application\r\nTraceback (most recent call last):\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\uvicorn\\protocols\\http\\h11_impl.py\", line 404, in run_asgi\r\n result = await app( # type: ignore[func-returns-value]\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\uvicorn\\middleware\\proxy_headers.py\", line 78, in __call__\r\n return await self.app(scope, receive, send)\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\fastapi\\applications.py\", line 270, in __call__\r\n await super().__call__(scope, receive, send)\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\starlette\\applications.py\", line 124, in __call__\r\n await self.middleware_stack(scope, receive, send)\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\starlette\\middleware\\errors.py\", line 184, in __call__\r\n raise exc\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\starlette\\middleware\\errors.py\", line 162, in __call__\r\n await self.app(scope, receive, _send)\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\starlette\\middleware\\exceptions.py\", line 75, in __call__\r\n raise exc\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\starlette\\middleware\\exceptions.py\", line 64, in __call__\r\n await self.app(scope, receive, sender)\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\fastapi\\middleware\\asyncexitstack.py\", line 21, in __call__\r\n raise e\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\fastapi\\middleware\\asyncexitstack.py\", line 18, in __call__\r\n await self.app(scope, receive, send)\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\starlette\\routing.py\", line 680, in __call__\r\n await route.handle(scope, receive, send)\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\starlette\\routing.py\", line 275, in handle\r\n await self.app(scope, receive, send)\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\starlette\\routing.py\", line 65, in app\r\n response = await func(request)\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\fastapi\\applications.py\", line 225, in openapi\r\n return JSONResponse(self.openapi())\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\fastapi\\applications.py\", line 200, in openapi\r\n self.openapi_schema = get_openapi(\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\fastapi\\openapi\\utils.py\", line 423, in get_openapi\r\n definitions = get_model_definitions(\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\fastapi\\utils.py\", line 39, in get_model_definitions\r\n model_name = model_name_map[model]\r\nKeyError: <class 
'pydantic.main.Body_predict_predict__post'>\r\n```\r\n\r\n### Operating System\r\n\r\nWindows\r\n\r\n### Operating System Details\r\n\r\n_No response_\r\n\r\n### FastAPI Version\r\n\r\n0.85.0\r\n\r\n### Python Version\r\n\r\n3.9\r\n\r\n### Additional Context\r\n\r\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "c6aa28bea2f751a91078bd8d845133ff83f352bf", "files": [{"path": "fastapi/routing.py", "Loc": {"('APIRouter', 'add_api_route', 513)": {"mod": [593]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["fastapi/routing.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "fastapi", "repo_name": "fastapi", "base_commit": "c6aa28bea2f751a91078bd8d845133ff83f352bf", "iss_html_url": "https://github.com/fastapi/fastapi/issues/5422", "iss_label": "question\nquestion-migrate", "title": "Unidirectional websocket connections where only the server pushes data to the clients", "body": "### First Check\n\n- [X] I added a very descriptive title to this issue.\n- [X] I used the GitHub search to find a similar issue and didn't find it.\n- [X] I searched the FastAPI documentation, with the integrated search.\n- [X] I already searched in Google \"How to X in FastAPI\" and didn't find any information.\n- [X] I already read and followed all the tutorial in the docs and didn't find an answer.\n- [X] I already checked if it is not related to FastAPI but to [Pydantic](https://github.com/samuelcolvin/pydantic).\n- [X] I already checked if it is not related to FastAPI but to [Swagger UI](https://github.com/swagger-api/swagger-ui).\n- [X] I already checked if it is not related to FastAPI but to [ReDoc](https://github.com/Redocly/redoc).\n\n### Commit to Help\n\n- [X] I commit to help with one of those options \ud83d\udc46\n\n### Example Code\n\n```python\n@app.websocket(\"/ws\")\r\nasync def websocket_endpoint(websocket: WebSocket):\r\n await websocket.accept()\r\n while True:\r\n data = await websocket.receive_text()\r\n await websocket.send_text(f\"Message text was: {data}\")\n```\n\n\n### Description\n\nHello,\r\nIs there a way I could send data to clients over websocket without listening for when clients send data back? I'm trying to have a websocket endpoint where the server is pushing data to the client in a unidirectional way, without the option for the client to send responses back. There doesn't seem to be any code that I could find that supports this, since all the documentation seems to require that the server listen for a `websocket.receive_text()`. 
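For the unidirectional-websocket question above, nothing forces a `receive_text()` call; the handler can simply push on a timer or from a queue. A minimal sketch; note that without a receive, a client disconnect typically only surfaces when the next send fails, which is why the exception handling here is deliberately broad:

```python
import asyncio

from fastapi import FastAPI, WebSocket
from starlette.websockets import WebSocketDisconnect

app = FastAPI()

@app.websocket("/ws")
async def push_only(websocket: WebSocket):
    await websocket.accept()
    try:
        while True:
            await websocket.send_text("server update")  # server -> client only
            await asyncio.sleep(1.0)  # pace the pushes; no receive_text() anywhere
    except (WebSocketDisconnect, RuntimeError):
        pass  # the client went away; the failed send is our only disconnect signal
```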
Any help would be much appreciated, thanks.\n\n### Operating System\n\nLinux\n\n### Operating System Details\n\n_No response_\n\n### FastAPI Version\n\n0.81.0\n\n### Python Version\n\n3.8.13\n\n### Additional Context\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [{"Loc": [23], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "fastapi", "repo_name": "fastapi", "base_commit": "55afb70b3717969565499f5dcaef54b1f0acc7da", "iss_html_url": "https://github.com/fastapi/fastapi/issues/891", "iss_label": "question\nanswered\nquestion-migrate", "title": "SQL related tables and corresponding nested pydantic models in async", "body": "Really impressed with FastAPI so far... I have searched the docs, GitHub tickets, and Google for the issue described below.\r\n\r\n### Description\r\n\r\nHow best to work with related tables and corresponding nested pydantic models whilst persisting data in a relational database in an async application?\r\n\r\n### Additional context\r\n\r\nI have been attempting to extend the example in the docs \r\nhttps://fastapi.tiangolo.com/advanced/async-sql-databases/\r\nwhich relies on https://github.com/encode/databases\r\n\r\nUsing three test pydantic models as an example:\r\n\r\n```\r\nclass UserModel(BaseModel):\r\n id: int\r\n title: str = Field(..., min_length=2, max_length=50)\r\n firstname: str = Field(..., min_length=1, max_length=50)\r\n lastname: str = Field(..., min_length=1, max_length=50)\r\n username: str = Field(..., min_length=3, max_length=50)\r\n email: str = Field(..., min_length=3, max_length=50)\r\n favourite_book: int = Field(...)\r\n\r\nclass FavouriteBook(BaseModel):\r\n id: int\r\n title: str = Field(...)\r\n author: str = Field(...)\r\n\r\n\r\nclass ExtendedUser(BaseModel):\r\n id: int\r\n title: str = Field(..., min_length=2, max_length=50)\r\n firstname: str = Field(..., min_length=1, max_length=50)\r\n lastname: str = Field(..., min_length=1, max_length=50)\r\n username: str = Field(..., min_length=3, max_length=50)\r\n email: str = Field(..., min_length=3, max_length=50)\r\n favourite_book: FavouriteBook\r\n\r\n```\r\n\r\nthe route would ideally be along the lines of...\r\n\r\n```\r\n@router.get(\"/extended\", response_model=List[ExtendedUser])\r\nasync def list():\r\n query = **sqlAlchemy/databases call that works**\r\n return await database.fetch_all(query=query)\r\n\r\n```\r\n\r\n\r\nHow can a user create a route that returns the nested ExtendedUser from the database without resorting to performing two queries? \r\nAn SQL join is a standard way to do this with a single query. However, this does not work with SQLAlchemy Core, as the two tables contain 'id' and 'title' columns. \r\nIt is possible to work with the SQLAlchemy ORM - but not in an async way as far as I know. (async is my reason for using FastAPI). 
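For the join question above, SQLAlchemy's `label()` sidesteps the 'id'/'title' collisions without renaming any database columns. A sketch in SQLAlchemy 1.4+ Core style; the table objects are hypothetical stand-ins for the models above:

```python
import sqlalchemy as sa

metadata = sa.MetaData()

# hypothetical core tables mirroring UserModel and FavouriteBook
users = sa.Table(
    "users", metadata,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("username", sa.String(50)),
    sa.Column("favourite_book", sa.Integer),
)
books = sa.Table(
    "books", metadata,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("title", sa.String(200)),
    sa.Column("author", sa.String(200)),
)

# label() gives the colliding 'id' and 'title' columns unique result names,
# so one joined query can feed both halves of ExtendedUser
query = (
    sa.select(
        users.c.id.label("user_id"),
        users.c.username,
        books.c.id.label("book_id"),
        books.c.title.label("book_title"),
        books.c.author,
    )
    .select_from(users.join(books, users.c.favourite_book == books.c.id))
)
```

Each fetched row can then be reshaped into the nested ExtendedUser dict in plain Python before validation.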
I could rename the columns to something unique (but renaming the 'id' column seems like poor database design to me).\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [{"Loc": [31], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "fastapi", "repo_name": "fastapi", "base_commit": "1760da0efa55585c19835d81afa8ca386036c325", "iss_html_url": "https://github.com/fastapi/fastapi/issues/3882", "iss_label": "question\nquestion-migrate", "title": "Doing work after the HTTP response has been sent", "body": "### First Check\n\n- [X] I added a very descriptive title to this issue.\n- [X] I used the GitHub search to find a similar issue and didn't find it.\n- [X] I searched the FastAPI documentation, with the integrated search.\n- [X] I already searched in Google \"How to X in FastAPI\" and didn't find any information.\n- [X] I already read and followed all the tutorial in the docs and didn't find an answer.\n- [X] I already checked if it is not related to FastAPI but to [Pydantic](https://github.com/samuelcolvin/pydantic).\n- [X] I already checked if it is not related to FastAPI but to [Swagger UI](https://github.com/swagger-api/swagger-ui).\n- [X] I already checked if it is not related to FastAPI but to [ReDoc](https://github.com/Redocly/redoc).\n\n### Commit to Help\n\n- [X] I commit to help with one of those options \ud83d\udc46\n\n### Example Code\n\n```python\nfrom fastapi import FastAPI, Request\r\n\r\napp = FastAPI()\r\n\r\n@app.middleware(\"http\")\r\nasync def write_log(request: Request, call_next):\r\n response = await call_next(request)\r\n # write log\r\n return response\n```\n\n\n### Description\n\nI want to log data for each request; however, since my application is latency-sensitive, I want to return as quickly as possible. Is there an equivalent to Symfony's \"[terminate](https://symfony.com/doc/current/reference/events.html#kernel-terminate)\" event (which I guess is the `request_finished` signal in Django)? 
The idea is to do the log writing after the HTTP response has been sent.\r\n\r\nThe above code is from the middleware documentation, but it basically means the code for writing the log will be executed before the response is sent.\n\n### Operating System\n\nLinux\n\n### Operating System Details\n\n_No response_\n\n### FastAPI Version\n\n0.65.1\n\n### Python Version\n\n3.8.5\n\n### Additional Context\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "1760da0efa55585c19835d81afa8ca386036c325", "files": [{"path": "fastapi/background.py", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["fastapi/background.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "fastapi", "repo_name": "fastapi", "base_commit": "a0e4d38bea74940de013e04a6d6f399d62f04280", "iss_html_url": "https://github.com/fastapi/fastapi/issues/1498", "iss_label": "question\nreviewed\nquestion-migrate", "title": "RedirectResponse from a POST request route to GET request route shows 405 Error code.", "body": "_Summary of the total issue is:_ **How to do a Post/Redirect/Get (PRG) in FastAPI?**\r\n\r\n_This is not necessarily a bug, rather a question._\r\n### Things I tried:\r\nI want to redirect the response from the 2nd route to the 1st route. This [Issue#199](https://github.com/tiangolo/fastapi/issues/199) here explains **GET to GET** but not a **POST to GET**. **N.B:** `I have done this type of POST -> GET redirecting in Flask, it was working there but not here.` And also this [Issue#863](https://github.com/tiangolo/fastapi/issues/863) has the same problem but doesn't really solve the problem. 
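The deferred-logging question above is what FastAPI's `BackgroundTasks` (fastapi/background.py) covers: tasks registered on it run only after the response has gone out. A minimal sketch; the route and log file name are illustrative:

```python
from fastapi import BackgroundTasks, FastAPI

app = FastAPI()

def write_log(message: str) -> None:
    # executed by Starlette after the response has been sent to the client
    with open("request.log", "a") as fh:
        fh.write(message + "\n")

@app.get("/items/{item_id}")
async def read_item(item_id: int, background_tasks: BackgroundTasks):
    background_tasks.add_task(write_log, f"served item {item_id}")
    return {"item_id": item_id}
```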
To reproduce the error, check the bottom.\r\n\r\n```Python3\r\n#1st route (GET request)\r\n@admin_content_edit_router.get('/admin/edit_content/set_category')\r\nasync def set_category(request:Request):\r\n return templates.TemplateResponse(\"admin/category_edit.html\", {'request': request})\r\n\r\n#2nd route (POST request)\r\n@admin_content_edit_router.post('/admin/edit_content/add_category')\r\nasync def add_category(request:Request):\r\n # here forms are getting processed\r\n return RedirectResponse(app.url_path_for('set_category')) # from here to 1st route\r\n```\r\nBut it shows:\r\n```Python3\r\n {\"detail\":\"Method Not Allowed\"}\r\n```\r\nFull traceback:\r\n```Python3\r\nINFO: 127.0.0.1:58415 - \"POST /admin/edit_content/add_category HTTP/1.1\" 307 Temporary Redirect\r\nINFO: 127.0.0.1:58415 - \"POST /admin/edit_content/set_category HTTP/1.1\" 405 Method Not Allowed\r\nERROR: Exception in callback _SelectorSocketTransport._read_ready()\r\nhandle: <Handle _SelectorSocketTransport._read_ready()>\r\nTraceback (most recent call last):\r\n File \"c:\\users\\aminp\\appdata\\local\\programs\\python\\python36\\lib\\asyncio\\events.py\", line 145, in _run\r\n self._callback(*self._args)\r\n File \"c:\\users\\aminp\\appdata\\local\\programs\\python\\python36\\lib\\asyncio\\selector_events.py\", line 730, in _read_ready\r\n self._protocol.data_received(data)\r\n File \"c:\\users\\aminp\\appdata\\local\\programs\\python\\python36\\lib\\site-packages\\uvicorn\\protocols\\http\\h11_impl.py\", line 162, in data_received\r\n self.handle_events()\r\n File \"c:\\users\\aminp\\appdata\\local\\programs\\python\\python36\\lib\\site-packages\\uvicorn\\protocols\\http\\h11_impl.py\", line 247, in handle_events\r\n self.transport.resume_reading()\r\n File \"c:\\users\\aminp\\appdata\\local\\programs\\python\\python36\\lib\\asyncio\\selector_events.py\", line 711, in resume_reading\r\n raise RuntimeError('Not paused')\r\nRuntimeError: Not paused\r\n```\r\n\r\nBut when I do a GET-to-GET redirect it works without any issue, but a POST-to-GET blows things up! Am I completely missing something here? I did look at the Starlette doc on reverse URL lookups, but nothing helps. 
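Worth noting for the snippet above: `RedirectResponse` defaults to status 307, which preserves the POST method and lands on the GET-only route as a 405. The commonly recommended PRG fix is an explicit 303 so the client re-issues the request as a GET; a self-contained sketch with made-up paths:

```python
from fastapi import FastAPI, status
from fastapi.responses import RedirectResponse

app = FastAPI()

@app.post("/add_category")
async def add_category():
    # ... process the submitted form here ...
    # 303 See Other tells the browser to follow up with GET, avoiding the 405
    return RedirectResponse("/set_category", status_code=status.HTTP_303_SEE_OTHER)

@app.get("/set_category")
async def set_category():
    return {"page": "category_edit"}
```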
[https://www.starlette.io/routing/#reverse-url-lookups](url)\r\n\r\nQuick reproduction of the error:\r\n```Python3\r\n\r\nfrom fastapi import FastAPI\r\nfrom starlette.responses import RedirectResponse\r\nimport os\r\nfrom starlette.status import HTTP_302_FOUND,HTTP_303_SEE_OTHER\r\n\r\napp = FastAPI()\r\n\r\n@app.post(\"/\")\r\nasync def login():\r\n # HTTP_302_FOUND,HTTP_303_SEE_OTHER : None is working:(\r\n return RedirectResponse(url=\"/ressource/1\",status_code=HTTP_303_SEE_OTHER)\r\n\r\n@app.get(\"/ressource/{r_id}\")\r\nasync def get_ressource(r_id:str):\r\n return {\"r_id\": r_id}\r\n\r\nif __name__ == '__main__':\r\n os.system(\"uvicorn tes:app --host 0.0.0.0 --port 80\")\r\n```\r\n\r\n\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "a0e4d38bea74940de013e04a6d6f399d62f04280", "files": [{"Loc": {"(None, None, None)": {"mod": [58]}}, "path": null}]}, "own_code_loc": [{"Loc": {"(None, None, None)": {"mod": [58]}}, "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "fastapi", "repo_name": "fastapi", "base_commit": "b93f8a709ab3923d1268dbc845f41985c0302b33", "iss_html_url": "https://github.com/fastapi/fastapi/issues/4551", "iss_label": "question\nquestion-migrate", "title": "Attribute not found while testing a Beanie Model inside fast api", "body": "### First Check\r\n\r\n- [X] I added a very descriptive title to this issue.\r\n- [X] I used the GitHub search to find a similar issue and didn't find it.\r\n- [X] I searched the FastAPI documentation, with the integrated search.\r\n- [X] I already searched in Google \"How to X in FastAPI\" and didn't find any information.\r\n- [X] I already read and followed all the tutorial in the docs and didn't find an answer.\r\n- [X] I already checked if it is not related to FastAPI but to [Pydantic](https://github.com/samuelcolvin/pydantic).\r\n- [x] I already checked if it is not related to FastAPI but to [Swagger UI](https://github.com/swagger-api/swagger-ui).\r\n- [X] I already checked if it is not related to FastAPI but to [ReDoc](https://github.com/Redocly/redoc).\r\n\r\n### Commit to Help\r\n\r\n- [X] I commit to help with one of those options \ud83d\udc46\r\n\r\n### Example Code\r\n\r\n```python\r\nMy Code:\r\n\r\n\r\nMy Route:\r\n\r\n@router.post(\"/login\")\r\nasync def internalLogin(request: Request,\r\n email: str = Form(...),\r\n password: str = Form(...)):\r\n try:\r\n res, token = await Controller.internalLogin(email=email, password=password)\r\n if res:\r\n return {\"message\": \"Success\"}\r\n else:\r\n return {\"message\": \"Failure\"}\r\n except DocumentNotFound as documentNotFoundException:\r\n return {\"message\": \"Error\"}\r\n```\r\n\r\nController:\r\n```\r\n@staticmethod\r\n async def internalLogin(email: str, password: str) -> List[bool | str]:\r\n logger.info(message=\"Inside OpenApi Controller\", fileName=__name__, functionName=\"OpenApiController\")\r\n try:\r\n user = await internalUserDb(email=email)\r\n if user is not None and user.verifyPassword(password):\r\n print(\"Logged In\")\r\n return [True, \"\"]\r\n else:\r\n print(\"Failed\")\r\n return [False, \"\"]\r\n except DocumentNotFound as documentNotFound:\r\n raise documentNotFound\r\n\r\n```\r\n\r\nDB:\r\n\r\n```\r\nasync def internalUserDb(email: str) -> InternalUserModel:\r\n try:\r\n user: InternalUserModel = 
await InternalUserModel.find_one(InternalUserModel.email == email)\r\n return user\r\n except DocumentNotFound as documentNotFound:\r\n raise documentNotFound\r\n```\r\n\r\nMy TestCode:\r\n\r\n```\r\n@pytest.mark.anyio\r\nasync def testLogin():\r\n response = await asyncClient.post(\"/internalLogin\",\r\n data={\"email\": \"sample@mail.com\", \"password\": \"samplePass\"})\r\n assert response.status_code == 303\r\n```\r\n\r\nMy error while testing is: \r\n\r\n```\r\nFAILED Tests/TestLogin.py::testLogin[asyncio] - AttributeError: type object 'InternalUserModel' has no attribute 'email'\r\nFAILED Tests/TestLogin.py::testLogin[trio] - AttributeError: type object 'InternalUserModel' has no attribute 'email'\r\n```\r\n\r\n\r\n### Description\r\n\r\nHello, I am new to FastAPI. I am trying to test the FastAPI app with pytest. Normal tests work perfectly fine, but I am using MongoDB as the backend to store my data. When I test a route that fetches data from the database, it fails with an error like `attribute not inside the model`. I am using the Beanie ODM for MongoDB.\r\n\r\n### Operating System\r\n\r\nmacOS\r\n\r\n### Operating System Details\r\n\r\n_No response_\r\n\r\n### FastAPI Version\r\n\r\n0.73\r\n\r\n### Python Version\r\n\r\n3.10\r\n\r\n### Additional Context\r\n\r\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "b93f8a709ab3923d1268dbc845f41985c0302b33", "files": [{"path": "docs/en/docs/advanced/testing-events.md", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["docs/en/docs/advanced/testing-events.md"], "test": [], "config": [], "asset": []}}, {"organization": "fastapi", "repo_name": "fastapi", "base_commit": "78b07cb809e97f400e196ff3d89862b9d5bd5dc2", "iss_html_url": "https://github.com/fastapi/fastapi/issues/4587", "iss_label": "question\nquestion-migrate", "title": "Use the raw response in Reponse classes", "body": "### First Check\r\n\r\n- [X] I added a very descriptive title to this issue.\r\n- [X] I used the GitHub search to find a similar issue and didn't find it.\r\n- [X] I searched the FastAPI documentation, with the integrated search.\r\n- [X] I already searched in Google \"How to X in FastAPI\" and didn't find any information.\r\n- [X] I already read and followed all the tutorial in the docs and didn't find an answer.\r\n- [X] I already checked if it is not related to FastAPI but to [Pydantic](https://github.com/samuelcolvin/pydantic).\r\n- [X] I already checked if it is not related to FastAPI but to [Swagger UI](https://github.com/swagger-api/swagger-ui).\r\n- [X] I already checked if it is not related to FastAPI but to [ReDoc](https://github.com/Redocly/redoc).\r\n\r\n### Commit to Help\r\n\r\n- [X] I commit to help with one of those options \ud83d\udc46\r\n\r\n### Example Code\r\n\r\n```python\r\nclass CustomEncoder():\r\n def encode(self, dict_data):\r\n return dict_data\r\n\r\nclass PhotonJSONResponse(JSONResponse):\r\n def __init__(self, content: typing.Any = None, status_code: int = 200, headers: dict = None, media_type: str = None,\r\n background: BackgroundTask = None) -> None:\r\n # Fetch the untouched response in the upper stacks\r\n current_frame = inspect.currentframe()\r\n self.raw_response = None\r\n while current_frame.f_back:\r\n if 'raw_response' in current_frame.f_locals:\r\n self.raw_response = 
current_frame.f_locals['raw_response']\r\n break\r\n current_frame = current_frame.f_back\r\n \r\n self._encoder = CustomEncoder()\r\n super().__init__(content, status_code, headers, media_type, background)\r\n\r\n def render(self, content: Any) -> bytes:\r\n dict_data = self._encoder.encode(self.raw_response)\r\n return super().render(dict_data)\r\n```\r\n\r\n\r\n### Description\r\n\r\nI want to access the raw response that hasn't been through the json_encoder inside my response class. This is because I have custom types that are handled in a custom encoder. I have looked through the relevant fastapi code and I can't find a way to override the encoder for all requests either. As you can see in the example code, I currently use reflection to fetch the raw_response from the upper stack frames; however, this is not very reliable. I also can't seem to do this using an APIRoute implementation, because it would require re-implementing the route handler, which is messy; maybe it would be more relevant in there, though.\r\n\r\n### Operating System\r\n\r\nWindows\r\n\r\n### Operating System Details\r\n\r\n_No response_\r\n\r\n### FastAPI Version\r\n\r\n0.63.0\r\n\r\n### Python Version\r\n\r\n3.8.12\r\n\r\n### Additional Context\r\n\r\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "78b07cb809e97f400e196ff3d89862b9d5bd5dc2", "files": [{"path": "fastapi/routing.py", "Loc": {"('APIRoute', None, 300)": {"mod": []}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["fastapi/routing.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "ecd92d6a4e9a7c74d2bf436571f2c7e96beb9f92", "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/3341", "iss_label": "bug", "title": "state isn't clearly understood how to incorporate for script.py", "body": "### Describe the bug\n\nI see that output_modifier and a few other functions require a state object, which is not defined in script.py, nor do any of the existing plugins (that I looked at) use a state object.\r\n\r\nAs a result, I am unable to use the functions. 
I get a message about needing to pass state.\n\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Reproduction\n\nTry to use this snippet:\r\n\r\nhttps://github.com/ChobPT/oobaboogas-webui-langchain_agent/blob/main/script.py#L185-L190\r\n\r\n```\r\ndef input_modifier(string):\r\n if string[:3] == \"/do Story\":\r\n print('hi')\r\n string += ' Tell me a story.'\r\n else:\r\n output_modifier(string.split(\"###\")[0].split(\"Human:\")[0])\r\n return string.replace('/do ', '')\r\n\r\n```\n\n### Screenshot\n\n_No response_\n\n### Logs\n\n```shell\nFile \"/home/user/oobabooga_linux/text-generation-webui/extensions/helloworld/script.py\", line 144, in input_modifier\r\n output_modifier(string.split(\"###\")[0].split(\"Human:\")[0],state_dict)\r\nNameError: name 'state_dict' is not defined\r\n\r\n```\r\n```\r\n File \"/home/user/oobabooga_linux/text-generation-webui/extensions/helloworld/script.py\", line 144, in input_modifier\r\n output_modifier(string.split(\"###\")[0].split(\"Human:\")[0],state)\r\nNameError: name 'state' is not defined\r\n\r\n```\r\n\r\n```\r\n output_modifier(string.split(\"###\")[0].split(\"Human:\")[0])\r\nTypeError: output_modifier() missing 1 required positional argument: 'state'\r\n\r\n```\r\n\r\nAnd if I remove state from output_modifier (as you see in my snippet above with print), I get no modified output and no print statement at the console:\r\nOutput generated in 1.99 seconds (9.06 tokens/s, 18 tokens, context 66, seed 123523724)\r\nTraceback (most recent call last):\r\n File \"/home/user/oobabooga_linux/text-generation-webui/server.py\", line 1181, in <module>\r\n time.sleep(0.5)\n```\n\n\n### System Info\n\n```shell\npython 3.9 oracle linux 8.5\n```\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "ecd92d6a4e9a7c74d2bf436571f2c7e96beb9f92", "files": []}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "ChobPT", "pro": "oobaboogas-webui-langchain_agen", "path": ["script.py"]}], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": ["script.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "8962bb173e9bdc36eb9cf28fe9e1952b2976e781", "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/5337", "iss_label": "bug", "title": "Generation slows at max context, even when truncated", "body": "### Describe the bug\r\n\r\n### Issue Summary\r\nWhen generating, if the context is near the maximum set via n_ctx (and the truncate value in Parameters is set to match it), generation will be quite slow. This does not occur if the context is more than approximately 300-500 below the set value. It still occurs even if the n_ctx and truncation numbers are reduced (though the slowdown becomes less severe).\r\n\r\n### Observations\r\n\r\n- Since speed is perfectly fine up until we near the context limit and then immediately drops, I suspect this has something to do with how the context is truncated; the actual act of truncating the input seems to cause the slowdown, despite the fact that this should be a simple operation.\r\n- Increasing the limit back up after lowering also does not help; this makes sense, since it just pulls in as much of the conversation as will fit and hits the context limit again, requiring truncation. 
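Back to the extensions `state` question above: the webui passes `state` into the hooks itself, so an extension only declares it in the signature and threads it through; it never constructs one. A minimal sketch of a script.py under that assumption (exact hook signatures vary between webui versions):

```python
# extensions/example/script.py -- minimal sketch; hook signatures vary by version
params = {"display_name": "Example Extension"}

def input_modifier(string, state):
    # 'state' is the generation-parameters dict the webui supplies on each call;
    # the extension never builds or imports it itself
    if string.startswith("/do Story"):
        string += " Tell me a story."
    return string.replace("/do ", "")

def output_modifier(string, state):
    # pass the same 'state' along instead of calling hooks by hand without it
    return string
```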
\r\n\r\n### Is there an existing issue for this?\r\n\r\n- [X] I have searched the existing issues\r\n\r\n### Reproduction\r\n\r\n- Set your n_ctx to a given value. (In my case, 8192).\r\n- Chat with the model, noting the speed. At this point, it should be fairly rapid. (In my case, 4.72 tokens/s up to context 7792).\r\n- As soon as the context reaches approximately 7800, generation slows. (In my case, 0.87 tokens/s on the message immediately after the above, at context 7798).\r\n- At this point, reducing n_ctx and reloading the model only partially helps. (In my case, reducing to 4092 produced 2.51 tokens/s at context 3641.)\r\n\r\n### Screenshot\r\n\r\n_No response_\r\n\r\n### Logs\r\n\r\n```shell\r\nN/A\r\n```\r\n\r\n\r\n### System Info\r\n\r\n```shell\r\n- Model: TheBloke/Silicon-Maid-7B-GGUF, using the 5_K_M quant.\r\n- Branch: dev\r\n- Commit: 8962bb173e9bdc36eb9cf28fe9e1952b2976e781\r\n- OS: Windows 11\r\n```\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "8962bb173e9bdc36eb9cf28fe9e1952b2976e781", "files": [{"path": "modules/ui_model_menu.py", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["modules/ui_model_menu.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "564a8c507fffc8b25a056d8930035c63da71fc7b", "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/3042", "iss_label": "bug", "title": "ERROR:Task exception was never retrieved", "body": "### Describe the bug\n\nRight after installation, I open the webui in the browser and I receive an error.\n\n### Is there an existing issue for this?\n\n- [x] I have searched the existing issues\n\n### Reproduction\n\nRight after installation, I open the webui in the browser and I receive this error.\n\n### Screenshot\n\n_No response_\n\n### Logs\n\n```shell\n2023-07-07 21:25:11 ERROR:Task exception was never retrieved\r\nfuture: <Task finished name='3s4vbrhqz8a_103' coro=<Queue.process_events() done, defined at D:\\oobabooga\\oobabooga_windows\\installer_files\\env\\lib\\site-packages\\gradio\\queueing.py:343> exception=1 validation error for PredictBody\r\nevent_id\r\n Field required [type=missing, input_value={'data': [], 'event_data'...on_hash': '3s4vbrhqz8a'}, input_type=dict]\r\n For further information visit https://errors.pydantic.dev/2.1.2/v/missing>\r\nTraceback (most recent call last):\r\n File \"D:\\oobabooga\\oobabooga_windows\\installer_files\\env\\lib\\site-packages\\gradio\\queueing.py\", line 347, in process_events\r\n client_awake = await self.gather_event_data(event)\r\n File \"D:\\oobabooga\\oobabooga_windows\\installer_files\\env\\lib\\site-packages\\gradio\\queueing.py\", line 220, in gather_event_data\r\n data, client_awake = await self.get_message(event, timeout=receive_timeout)\r\n File \"D:\\oobabooga\\oobabooga_windows\\installer_files\\env\\lib\\site-packages\\gradio\\queueing.py\", line 456, in get_message\r\n return PredictBody(**data), True\r\n File \"D:\\oobabooga\\oobabooga_windows\\installer_files\\env\\lib\\site-packages\\pydantic\\main.py\", line 150, in __init__\r\n __pydantic_self__.__pydantic_validator__.validate_python(data, self_instance=__pydantic_self__)\r\npydantic_core._pydantic_core.ValidationError: 1 validation error for PredictBody\r\nevent_id\r\n Field required [type=missing, 
input_value={'data': [], 'event_data'...on_hash': '3s4vbrhqz8a'}, input_type=dict]\r\n For further information visit https://errors.pydantic.dev/2.1.2/v/missing\n```\n\n\n### System Info\n\n```shell\nWindows 11\r\nEVGA RTX3080\n```\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "564a8c507fffc8b25a056d8930035c63da71fc7b", "files": [{"path": "requirements.txt", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "comment", "loc_scope": "2", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}}, {"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "07510a24149cbd6fd33df0c4a440d60b9783a18e", "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/2171", "iss_label": "enhancement\nstale", "title": "support for fastest-inference-4bit branch of GPTQ-for-LLaMa", "body": "**Description**\r\n\r\nThere is a new branch of GPTQ-for-LLaMa, fastest-inference-4bit, that combines Triton and CUDA, and people say it's much faster. It would be nice if it was supported here. I tried to compile it myself, but it doesn't work with this webui because there is no llama_inference_offload.py in the new branch. \r\n\r\n**Additional Context**\r\nhttps://github.com/qwopqwop200/GPTQ-for-LLaMa/tree/fastest-inference-4bit\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "07510a24149cbd6fd33df0c4a440d60b9783a18e", "files": [{"path": "modules/GPTQ_loader.py", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["modules/GPTQ_loader.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "7ddf6147accfb5b95e7dbbd7f1822cf976054a2a", "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/446", "iss_label": "bug", "title": "Factual answer: \u2047 \u2047 \u2047 \u2047 \u2047 \u2047 \u2047 \u2047 \u2047 \u2047 \u2047", "body": "### Describe the bug\n\nI get factual answers in ?? 
like this Factual answer: \u2047 \u2047 \u2047 \u2047 \u2047 \u2047 \u2047 \u2047 \u2047 \u2047 \u2047\n\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Reproduction\n\nCommon sense questions and answers\r\n\r\nQuestion: Hi\r\nFactual answer: \u2047 \u2047 \u2047 \u2047 \u2047 \u2047 \u2047 \u2047 \u2047 \u2047 \u2047\n\n### Screenshot\n\n<img width=\"1535\" alt=\"Screenshot 2023-03-20 at 12 43 35 AM\" src=\"https://user-images.githubusercontent.com/25454015/226214371-e9424c75-6b81-4189-9865-70446b62235d.png\">\r\n\n\n### Logs\n\n```shell\nLoading LLaMA-7b...\r\nLoading checkpoint shards: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 33/33 [00:06<00:00, 5.47it/s]\r\nLoaded the model in 147.25 seconds.\r\nOutput generated in 12.96 seconds (4.71 tokens/s, 61 tokens)\r\nOutput generated in 13.20 seconds (0.61 tokens/s, 8 tokens)\n```\n\n\n### System Info\n\n```shell\nMacOS Ventura 13.2.1, Apple M1 Max\n```\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "7ddf6147accfb5b95e7dbbd7f1822cf976054a2a", "files": [{"path": "download-model.py", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2\nweird result", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["download-model.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "3609ea69e4c4461a4f998bd12cc559d5a016f328", "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/5761", "iss_label": "bug", "title": "api broke: AttributeError: 'NoneType' object has no attribute 'replace'", "body": "### Describe the bug\n\nAPI calls result in\r\nAttributeError: 'NoneType' object has no attribute 'replace'\n\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Reproduction\n\nInstall the no-avx2 requirements and llama-cpp-python from source, then try to run curl:\r\n\r\ncurl http://192.168.3.17:5000/v1/chat/completions \\\r\n -H \"Content-Type: application/json\" \\\r\n -d '{'messages': [{'role': 'system', 'content': 'You are a helpful assistant.'}, {'role': 'user', 'content': 'tell me a story.'}], 'max_new_tokens': 1024, 'preset': 'None', 'do_sample': False, 'temperature': 1.0, 'top_p': 0, 'typical_p': 1, 'epsilon_cutoff': 0, 'eta_cutoff': 0, 'tfs': 1, 'top_a': 0, 'repetition_penalty': 1.18, 'repetition_penalty_range': 0, 'top_k': 50, 'min_length': 0, 'no_repeat_ngram_size': 2, 'num_beams': 1, 'penalty_alpha': 0, 'length_penalty': 1, 'early_stopping': True, 'mirostat_mode': 0, 'mirostat_tau': 5, 'mirostat_eta': 0.1, 'seed': -1, 'add_bos_token': True, 'truncation_length': 1068, 'ban_eos_token': False, 'skip_special_tokens': True, 'stopping_strings': [], 'mode': 'instruct', 'instruction_template': 'Alpaca'}'\r\n\r\nException in ASGI application\r\nTraceback (most recent call last):\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py\", line 411, in run_asgi\r\n result = await app( # type: ignore[func-returns-value]\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py\", line 69, in __call__\r\n return await self.app(scope, 
receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/fastapi/applications.py\", line 1054, in __call__\r\n await super().__call__(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/applications.py\", line 123, in __call__\r\n await self.middleware_stack(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/middleware/errors.py\", line 186, in __call__\r\n raise exc\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/middleware/errors.py\", line 164, in __call__\r\n await self.app(scope, receive, _send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/middleware/cors.py\", line 83, in __call__\r\n await self.app(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/middleware/exceptions.py\", line 62, in __call__\r\n await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/_exception_handler.py\", line 64, in wrapped_app\r\n raise exc\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/_exception_handler.py\", line 53, in wrapped_app\r\n await app(scope, receive, sender)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/routing.py\", line 758, in __call__\r\n await self.middleware_stack(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/routing.py\", line 778, in app\r\n await route.handle(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/routing.py\", line 299, in handle\r\n await self.app(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/routing.py\", line 79, in app\r\n await wrap_app_handling_exceptions(app, request)(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/_exception_handler.py\", line 64, in wrapped_app\r\n raise exc\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/_exception_handler.py\", line 53, in wrapped_app\r\n await app(scope, receive, sender)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/routing.py\", line 74, in app\r\n response = await func(request)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/fastapi/routing.py\", line 278, in app\r\n raw_response = await run_endpoint_function(\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/fastapi/routing.py\", line 191, in run_endpoint_function\r\n return await dependant.call(**values)\r\n File \"/data/text-generation-webui/extensions/openai/script.py\", line 137, in openai_chat_completions\r\n response = OAIcompletions.chat_completions(to_dict(request_data), is_legacy=is_legacy)\r\n File \"/data/text-generation-webui/extensions/openai/completions.py\", line 536, in chat_completions\r\n return deque(generator, maxlen=1).pop()\r\n File \"/data/text-generation-webui/extensions/openai/completions.py\", line 315, in chat_completions_common\r\n prompt = generate_chat_prompt(user_input, generate_params)\r\n File \"/data/text-generation-webui/modules/chat.py\", line 97, in generate_chat_prompt\r\n user_bio=replace_character_names(state['user_bio'], state['name1'], state['name2']),\r\n File \"/data/text-generation-webui/modules/chat.py\", 
line 636, in replace_character_names\r\n text = text.replace('{{user}}', name1).replace('{{char}}', name2)\r\nAttributeError: 'NoneType' object has no attribute 'replace'\r\n\n\n### Screenshot\n\n_No response_\n\n### Logs\n\n```shell\ninstall no avx2 requirements and llama-cpp-python by source then try to run curl\r\n\r\ncurl http://192.168.3.17:5000/v1/chat/completions \\\r\n -H \"Content-Type: application/json\" \\\r\n -d '{'messages': [{'role': 'system', 'content': 'You are a helpful assistant.'}, {'role': 'user', 'content': 'tell me a story.'}], 'max_new_tokens': 1024, 'preset': 'None', 'do_sample': False, 'temperature': 1.0, 'top_p': 0, 'typical_p': 1, 'epsilon_cutoff': 0, 'eta_cutoff': 0, 'tfs': 1, 'top_a': 0, 'repetition_penalty': 1.18, 'repetition_penalty_range': 0, 'top_k': 50, 'min_length': 0, 'no_repeat_ngram_size': 2, 'num_beams': 1, 'penalty_alpha': 0, 'length_penalty': 1, 'early_stopping': True, 'mirostat_mode': 0, 'mirostat_tau': 5, 'mirostat_eta': 0.1, 'seed': -1, 'add_bos_token': True, 'truncation_length': 1068, 'ban_eos_token': False, 'skip_special_tokens': True, 'stopping_strings': [], 'mode': 'instruct', 'instruction_template': 'Alpaca'}'\r\n\r\nException in ASGI application\r\nTraceback (most recent call last):\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py\", line 411, in run_asgi\r\n result = await app( # type: ignore[func-returns-value]\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py\", line 69, in __call__\r\n return await self.app(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/fastapi/applications.py\", line 1054, in __call__\r\n await super().__call__(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/applications.py\", line 123, in __call__\r\n await self.middleware_stack(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/middleware/errors.py\", line 186, in __call__\r\n raise exc\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/middleware/errors.py\", line 164, in __call__\r\n await self.app(scope, receive, _send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/middleware/cors.py\", line 83, in __call__\r\n await self.app(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/middleware/exceptions.py\", line 62, in __call__\r\n await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/_exception_handler.py\", line 64, in wrapped_app\r\n raise exc\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/_exception_handler.py\", line 53, in wrapped_app\r\n await app(scope, receive, sender)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/routing.py\", line 758, in __call__\r\n await self.middleware_stack(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/routing.py\", line 778, in app\r\n await route.handle(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/routing.py\", line 299, in handle\r\n await self.app(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/routing.py\", line 79, in app\r\n 
await wrap_app_handling_exceptions(app, request)(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/_exception_handler.py\", line 64, in wrapped_app\r\n raise exc\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/_exception_handler.py\", line 53, in wrapped_app\r\n await app(scope, receive, sender)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/routing.py\", line 74, in app\r\n response = await func(request)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/fastapi/routing.py\", line 278, in app\r\n raw_response = await run_endpoint_function(\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/fastapi/routing.py\", line 191, in run_endpoint_function\r\n return await dependant.call(**values)\r\n File \"/data/text-generation-webui/extensions/openai/script.py\", line 137, in openai_chat_completions\r\n response = OAIcompletions.chat_completions(to_dict(request_data), is_legacy=is_legacy)\r\n File \"/data/text-generation-webui/extensions/openai/completions.py\", line 536, in chat_completions\r\n return deque(generator, maxlen=1).pop()\r\n File \"/data/text-generation-webui/extensions/openai/completions.py\", line 315, in chat_completions_common\r\n prompt = generate_chat_prompt(user_input, generate_params)\r\n File \"/data/text-generation-webui/modules/chat.py\", line 97, in generate_chat_prompt\r\n user_bio=replace_character_names(state['user_bio'], state['name1'], state['name2']),\r\n File \"/data/text-generation-webui/modules/chat.py\", line 636, in replace_character_names\r\n text = text.replace('{{user}}', name1).replace('{{char}}', name2)\r\nAttributeError: 'NoneType' object has no attribute 'replace'\n```\n\n\n### System Info\n\n```shell\noracle linux 8, rocky linux 9\n```\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "3609ea69e4c4461a4f998bd12cc559d5a016f328", "files": [{"path": "modules/chat.py", "Loc": {"(None, 'replace_character_names', 637)": {"mod": []}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["modules/chat.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "1a7c027386f43b84f3ca3b0ff04ca48d861c2d7a", "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/5774", "iss_label": "bug", "title": "The checksum verification for miniconda_installer.exe has failed.", "body": "### Describe the bug\n\nThe checksum verification for miniconda_installer.exe has failed.\n\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Reproduction\n\nAfter I extracted the files, I clicked start_windows.bat.\n\n### Screenshot\n\n_No response_\n\n### Logs\n\n```shell\nDownloading Miniconda from https://repo.anaconda.com/miniconda/Miniconda3-py310_23.3.1-0-Windows-x86_64.exe to D:\\text-generation-webui\\installer_files\\miniconda_installer.exe\r\n % Total % Received % Xferd Average Speed Time Time Time Current\r\n Dload Upload Total Spent Left Speed\r\n100 53.8M 100 53.8M 0 0 23.2M 0 0:00:02 0:00:02 --:--:-- 23.3M\r\nfind: '/i': No such file or directory\r\nfind: '/v': No such file or directory\r\nfind: ' ': No such file or directory\r\nfind: '/i': No such file or directory\r\nfind: 
'307194e1f12bbeb52b083634e89cc67db4f7980bd542254b43d3309eaf7cb358': No such file or directory\r\nThe checksum verification for miniconda_installer.exe has failed.\n```\n\n\n### System Info\n\n```shell\nwindows11,CPU:i711800H,GPU:NVDIA RTXA2000Laptop\n```\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "1a7c027386f43b84f3ca3b0ff04ca48d861c2d7a", "files": [{"path": "start_windows.bat", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["start_windows.bat"]}}, {"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "c17624432726ab5743dfa21af807d559e4f4ff8c", "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/6209", "iss_label": "bug\nstale", "title": "Oobabooga login not working through reverse proxy", "body": "### Describe the bug\n\nI have the latest text-generation-webui (just ran the update script) running on my home computer running Windows 11. I am running it on a LAN IP (192.168.1.102) and reverse-proxying it with Nginx so I can access it remotely over the Internet.\r\n\r\nSome recent update to text-generation-webui appears to have broken the login code. When I'm logging in from the LAN, I see the normal login screen, and authentication works. When I'm logging in from the WAN, I get a bare-bones UI which refuses to accept my login creds. \r\n\r\nI have been running this setup for months without change, so my assumption is that it's a recent change in the text-generation-webui codebase that's behind it.\r\n\r\nMy CMD_FLAGS.txt is:\r\n\r\n--gradio-auth myusername:mypassword\r\n--auto-devices\r\n--listen\r\n--listen-host 192.168.1.102\r\n--listen-port 7860\r\n\r\n\r\n\n\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Reproduction\n\n1. Start the webui on a WAN port.\r\n2. Reverse-proxy to a publicly-accessible IP.\r\n3. Try to log in.\n\n### Screenshot\n\n![Oobaboga_Login](https://github.com/oobabooga/text-generation-webui/assets/13558208/823b2df8-d4e8-43c1-ab93-beb72cf6cae7)\r\n\n\n### Logs\n\n```shell\nI see repeated errors in the console: \"WARNING: invalid HTTP request received\", but no Python trace info.\n```\n\n\n### System Info\n\n```shell\nWindows 11, NVidia Founder RTX 2060 Super.\r\n\r\nReverse proxy is Nginx running on Debian. It uses Let's Encrypt so I can encrypt my remote connection.\n```\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "c17624432726ab5743dfa21af807d559e4f4ff8c", "files": [{"path": "requirements/full/requirements.txt", "Loc": {"(None, None, 7)": {"mod": [7]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc\ndependency declaration"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements/full/requirements.txt"], "asset": []}}, {"organization": "hacksider", "repo_name": "Deep-Live-Cam", "base_commit": "69d863b44ab5c7dad6eea04b7e3563f491c714a4", "iss_html_url": "https://github.com/hacksider/Deep-Live-Cam/issues/376", "iss_label": "", "title": "Unable to select camera device through UI", "body": "It would be nice to have a way to select which camera to use. I am on Ubuntu 22.04 with a Linux laptop. 
Since I use an external camera and keep my laptop closed, the program is defaulting to the on-board camera.\r\n\r\nI was unable to find a quick/easy way to change the default camera in Ubuntu, so it would be nice if the program was able to allow a selection in the UI.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "69d863b44ab5c7dad6eea04b7e3563f491c714a4", "files": [{"path": "modules/ui.py", "Loc": {"(None, 'webcam_preview', 252)": {"mod": [259]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["modules/ui.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "hacksider", "repo_name": "Deep-Live-Cam", "base_commit": "080d6f5110d2e185e8ce4e10451ac96313079be2", "iss_html_url": "https://github.com/hacksider/Deep-Live-Cam/issues/315", "iss_label": "", "title": "How to select the correct camera?", "body": "How to select the correct camera ? \r\nIs there any method to improve the output resolution of the camera?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "080d6f5110d2e185e8ce4e10451ac96313079be2", "files": [{"path": "modules/ui.py", "Loc": {"(None, 'webcam_preview', 252)": {"mod": [259]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["modules/ui.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "hacksider", "repo_name": "Deep-Live-Cam", "base_commit": "5bc3ada6324a28a8d8556da1176b546f2d2140f8", "iss_html_url": "https://github.com/hacksider/Deep-Live-Cam/issues/922", "iss_label": "", "title": "ERROR: Cannot install -r requirements.txt (line 13), tensorflow and typing-extensions>=4.8.0 because these package versions have conflicting dependencies.", "body": "The conflict is caused by:\n The user requested typing-extensions>=4.8.0\n torch 2.5.1+cu121 depends on typing-extensions>=4.8.0\n tensorflow-intel 2.12.1 depends on typing-extensions<4.6.0 and >=3.6.6", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "5bc3ada6324a28a8d8556da1176b546f2d2140f8", "files": [{"path": "requirements.txt", "Loc": {"(None, None, 19)": {"mod": [19]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc\n\u4f9d\u8d56\u58f0\u660e"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}}, {"organization": "hacksider", "repo_name": "Deep-Live-Cam", "base_commit": "6b0cc749574d7307b2f7deedfa2a0dbb363329da", "iss_html_url": "https://github.com/hacksider/Deep-Live-Cam/issues/243", "iss_label": "", "title": "[experimental] doesn't show the camera I want..", "body": "I'm using the `experimental` branch so I could choose the camera I wanted (OBS Virtual Camera) which is (2) but it only shows \"Camera 0\", so I made a test script and I was able to pull my OBS Virtual Camera using 'matplotlib',\r\n\r\n```\r\n(venv) (base) PS E:\\deep-live-cam> python list.py\r\n[ WARN:0@10.769] global cap_msmf.cpp:1769 CvCapture_MSMF::grabFrame videoio(MSMF): can't grab frame. 
Error: -2147483638\r\n[ WARN:0@10.839] global cap.cpp:304 cv::VideoCapture::open VIDEOIO(DSHOW): raised OpenCV exception:\r\n\r\nOpenCV(4.10.0) D:\\a\\opencv-python\\opencv-python\\opencv\\modules\\videoio\\src\\cap_dshow.cpp:2763: error: (-215:Assertion failed) pVih in function 'videoInput::start'\r\n\r\n\r\n[ERROR:0@10.846] global obsensor_uvc_stream_channel.cpp:158 cv::obsensor::getStreamChannelGroup Camera index out of range\r\n[ERROR:0@16.478] global obsensor_uvc_stream_channel.cpp:158 cv::obsensor::getStreamChannelGroup Camera index out of range\r\n[ERROR:0@16.563] global obsensor_uvc_stream_channel.cpp:158 cv::obsensor::getStreamChannelGroup Camera index out of range\r\n[ERROR:0@16.635] global obsensor_uvc_stream_channel.cpp:158 cv::obsensor::getStreamChannelGroup Camera index out of range\r\n[ERROR:0@16.711] global obsensor_uvc_stream_channel.cpp:158 cv::obsensor::getStreamChannelGroup Camera index out of range\r\n[ERROR:0@16.787] global obsensor_uvc_stream_channel.cpp:158 cv::obsensor::getStreamChannelGroup Camera index out of range\r\n[ERROR:0@16.862] global obsensor_uvc_stream_channel.cpp:158 cv::obsensor::getStreamChannelGroup Camera index out of range\r\nAvailable camera indices: [2]\r\nEnter the camera index you want to use: 2\r\nCamera 2 opened successfully. Press 'q' to quit.\r\nPress 'q' and Enter to quit, or just Enter to continue: q\r\n(venv) (base) PS E:\\deep-live-cam>\r\n```\r\n\r\nIt shows up like this:\r\n\r\n<img width=\"419\" alt=\"Screen Shot 2024-08-12 at 8 31 51 PM\" src=\"https://github.com/user-attachments/assets/3f16b4f6-6ac7-492f-88a5-6abdc58e29b0\">\r\n\r\nSo I know it's possible, is there a way to force 'deep-live-cam' to use \"Camera (2)\" ?\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "6b0cc749574d7307b2f7deedfa2a0dbb363329da", "files": [{"path": "modules/ui.py", "Loc": {"(None, 'webcam_preview', 307)": {"mod": [322]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["modules/ui.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "hacksider", "repo_name": "Deep-Live-Cam", "base_commit": "513e41395687921d589fc10bbaf2f72ed579c84a", "iss_html_url": "https://github.com/hacksider/Deep-Live-Cam/issues/915", "iss_label": "", "title": "Subject: Missing ui.py file in modules directory - preventing project execution", "body": "Hi,\n\nI'm trying to run the Deep-Live-Cam project, but I'm encountering a problem. The ui.py file is missing from the modules directory. I've tried the following:\n\n* Cloning the repository using git clone: `git clone https://github.com/hacksider/Deep-Live-Cam.git`\n* Cloning the repository using GitHub Desktop.\n* Downloading the repository as a ZIP file.\n\nIn all cases, the ui.py file is not present. I've also checked the repository on GitHub.com directly in my browser, and the file is missing there as well.\n\nThe modules directory contains the following files: [List the files you see].\n\nCould you please let me know how to obtain the ui.py file? 
Is it intentionally missing, or is there a separate download/generation step required?\n\nThanks for your help!", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "513e41395687921d589fc10bbaf2f72ed579c84a", "files": [{"path": "modules/ui.py", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "4", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["modules/ui.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "hacksider", "repo_name": "Deep-Live-Cam", "base_commit": "a49d3fc6e5a228a6ac92e25831c507996fdc0042", "iss_html_url": "https://github.com/hacksider/Deep-Live-Cam/issues/697", "iss_label": "", "title": "[Solved] inswapper_128_fp16.onnx failed:Protobuf parsing failed", "body": "I have this error on macOS Apple Silicon.\r\n`Exception in Tkinter callback\r\nTraceback (most recent call last):\r\n File \"/opt/homebrew/Cellar/python@3.10/3.10.15/Frameworks/Python.framework/Versions/3.10/lib/python3.10/tkinter/__init__.py\", line 1921, in __call__\r\n return self.func(*args)\r\n File \"/Users//PycharmProjects/Deep-Live-Cam/.venv/lib/python3.10/site-packages/customtkinter/windows/widgets/ctk_button.py\", line 554, in _clicked\r\n self._command()\r\n File \"/Users/PycharmProjects/Deep-Live-Cam/modules/ui.py\", line 242, in <lambda>\r\n command=lambda: webcam_preview(\r\n File \"/Users/PycharmProjects/Deep-Live-Cam/modules/ui.py\", line 649, in webcam_preview\r\n create_webcam_preview(\r\n File \"/Users/PycharmProjects/Deep-Live-Cam/modules/ui.py\", line 707, in create_webcam_preview\r\n temp_frame = frame_processor.process_frame(source_image, temp_frame)\r\n File \"/Users/PycharmProjects/Deep-Live-Cam/modules/processors/frame/face_swapper.py\", line 65, in process_frame\r\n temp_frame = swap_face(source_face, target_face, temp_frame)\r\n File \"/Users/PycharmProjects/Deep-Live-Cam/modules/processors/frame/face_swapper.py\", line 49, in swap_face\r\n return get_face_swapper().get(temp_frame, target_face, source_face, paste_back=True)\r\n File \"/Users/PycharmProjects/Deep-Live-Cam/modules/processors/frame/face_swapper.py\", line 44, in get_face_swapper\r\n FACE_SWAPPER = insightface.model_zoo.get_model(model_path, providers=modules.globals.execution_providers)\r\n File \"/Users/PycharmProjects/Deep-Live-Cam/.venv/lib/python3.10/site-packages/insightface/model_zoo/model_zoo.py\", line 96, in get_model\r\n model = router.get_model(providers=providers, provider_options=provider_options)\r\n File \"/Users/PycharmProjects/Deep-Live-Cam/.venv/lib/python3.10/site-packages/insightface/model_zoo/model_zoo.py\", line 40, in get_model\r\n session = PickableInferenceSession(self.onnx_file, **kwargs)\r\n File \"/Users/PycharmProjects/Deep-Live-Cam/.venv/lib/python3.10/site-packages/insightface/model_zoo/model_zoo.py\", line 25, in __init__\r\n super().__init__(model_path, **kwargs)\r\n File \"/Users/PycharmProjects/Deep-Live-Cam/.venv/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py\", line 347, in __init__\r\n self._create_inference_session(providers, provider_options, disabled_optimizers)\r\n File \"/Users/PycharmProjects/Deep-Live-Cam/.venv/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py\", line 384, in _create_inference_session\r\n sess = C.InferenceSession(session_options, self._model_path, True, 
self._read_config_from_model)\r\nonnxruntime.capi.onnxruntime_pybind11_state.InvalidProtobuf: [ONNXRuntimeError] : 7 : INVALID_PROTOBUF : Load model from /Users/PycharmProjects/Deep-Live-Cam/models/inswapper_128_fp16.onnx failed:Protobuf parsing failed.`\r\n\r\n\r\nThis https://github.com/hacksider/Deep-Live-Cam/issues/613 didn't help. \r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "hacksider", "pro": "deep-live-cam", "path": ["inswapper_128_fp16.onnx"]}], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "2\n+\n0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["inswapper_128_fp16.onnx"]}}, {"organization": "hacksider", "repo_name": "Deep-Live-Cam", "base_commit": "d4c8adc5d3b0ef5cb13492d3fac83bb4c6835d33", "iss_html_url": "https://github.com/hacksider/Deep-Live-Cam/issues/94", "iss_label": "", "title": "Can't find onnxruntime-silicon==1.13.1", "body": "Hi,\r\n\r\nCurrently on MacOS (Silicon, M2 Max), it seems not possible to download (with pip at least) the 1.13.1 version of onnxruntime.\r\n\r\n`ERROR: Could not find a version that satisfies the requirement onnxruntime-silicon==1.13.1 (from versions: 1.14.1, 1.15.0, 1.16.0, 1.16.3)\r\nERROR: No matching distribution found for onnxruntime-silicon==1.13.1`\r\n\r\nAnd, if I'm right, Deep-Live-Cam doesn't support more recent versions of onnxruntime, right ? So if that's the case, what could be a workaround ?\r\n\r\nThanks !", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "d4c8adc5d3b0ef5cb13492d3fac83bb4c6835d33", "files": [{"path": "requirements.txt", "Loc": {"(None, None, 16)": {"mod": [16]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "install require"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}}, {"organization": "hacksider", "repo_name": "Deep-Live-Cam", "base_commit": "eab5ba7027db1a4d0ec97883aa7a61b55fb81dfa", "iss_html_url": "https://github.com/hacksider/Deep-Live-Cam/issues/345", "iss_label": "", "title": "Program crashes when processing with DirectML", "body": "I am using an AMD RX 6600 XT GPU with the latest drivers and attempting to run the program with DirectML. The program's UI turns white and then crashes. It works fine with CPU execution but fails with DirectML.\r\nI already tried to reinstall onnxruntime-directml with no effect. 
Terminal:\r\n\r\n (myenv) E:\\Edesktop\\deep-live\\Deep-Live-Cam>python run.py --execution-provider dml\r\nApplied providers: ['DmlExecutionProvider', 'CPUExecutionProvider'], with options: {'DmlExecutionProvider': {}, 'CPUExecutionProvider': {}}\r\nfind model: C:\\Users\\USER/.insightface\\models\\buffalo_l\\1k3d68.onnx landmark_3d_68 ['None', 3, 192, 192] 0.0 1.0\r\nApplied providers: ['DmlExecutionProvider', 'CPUExecutionProvider'], with options: {'DmlExecutionProvider': {}, 'CPUExecutionProvider': {}}\r\nfind model: C:\\Users\\USER/.insightface\\models\\buffalo_l\\2d106det.onnx landmark_2d_106 ['None', 3, 192, 192] 0.0 1.0\r\nApplied providers: ['DmlExecutionProvider', 'CPUExecutionProvider'], with options: {'DmlExecutionProvider': {}, 'CPUExecutionProvider': {}}\r\nfind model: C:\\Users\\USER/.insightface\\models\\buffalo_l\\det_10g.onnx detection [1, 3, '?', '?'] 127.5 128.0\r\nApplied providers: ['DmlExecutionProvider', 'CPUExecutionProvider'], with options: {'DmlExecutionProvider': {}, 'CPUExecutionProvider': {}}\r\nfind model: C:\\Users\\USER/.insightface\\models\\buffalo_l\\genderage.onnx genderage ['None', 3, 96, 96] 0.0 1.0\r\nApplied providers: ['DmlExecutionProvider', 'CPUExecutionProvider'], with options: {'DmlExecutionProvider': {}, 'CPUExecutionProvider': {}}\r\nfind model: C:\\Users\\USER/.insightface\\models\\buffalo_l\\w600k_r50.onnx recognition ['None', 3, 112, 112] 127.5 127.5\r\nset det-size: (640, 640)\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 100/100 [00:01<00:00, 50.67it/s]\r\n[DLC.CORE] Creating temp resources...\r\n[DLC.CORE] Extracting frames...\r\n[DLC.FACE-SWAPPER] Progressing...\r\nProcessing: 0%| | 0/125 [00:00<?, ?frame/s, execution_providers=['DmlExecutionProvider'], execution_threads=8, max_memory=16\r\n(myenv) E:\\Edesktop\\deep-live\\Deep-Live-Cam>\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "eab5ba7027db1a4d0ec97883aa7a61b55fb81dfa", "files": [{"path": "modules/ui.py", "Loc": {"(None, 'create_root', 93)": {"mod": [139, 140, 141]}}, "status": "modified"}, {"path": "modules/core.py", "Loc": {"(None, 'parse_args', 47)": {"mod": [67, 71]}, "(None, None, None)": {"mod": [11]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["modules/ui.py", "modules/core.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "Textualize", "repo_name": "rich", "base_commit": "7e1928efee53da1ac7d156912df04aef83eefea5", "iss_html_url": "https://github.com/Textualize/rich/issues/1247", "iss_label": "Needs triage", "title": "[REQUEST] Extra caching for `get_character_cell_size`", "body": "**How would you improve Rich?**\r\n\r\nAdd a small `lru_cache` to https://github.com/willmcgugan/rich/blob/master/rich/cells.py#L28 , similar to cache one layer down for https://github.com/willmcgugan/rich/blob/master/rich/cells.py#L46\r\n\r\nSize `4096` was plenty for what I 
describe below.\r\n\r\n**What problem does it solved for you?**\r\n\r\nI'm working on some optimizations for a TUI application here https://github.com/JoshKarpel/spiel/pull/37\r\n\r\nThis was my first idea on how to improve rendering time, based on https://github.com/benfred/py-spy telling me that a lot of time was being spent in `get_character_cell_size`, and this was my first thought for a solution.\r\n\r\nAdding the cache described above gives a ~30% speedup on the benchmarks I was using to work on that PR. In that application I'm repeatedly re-rendering the same content (in a `Live`), so adding a small cache to `get_character_cell_size` represents a significant speedup since the set of characters is usually the same from frame to frame. The benchmark is mostly printing colorized ASCII, with some unicode also drawn from a small set (box-drawing characters, block shapes, etc.). \r\n\r\nI guess that since there's lots of `Layout` and `Padding` going on, the most common character is probably space... perhaps the ASCII set that there's already a shortcut for could just be pre-computed and stored in a set? There's probably a lot of good ways to approach this :) ", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "7e1928efee53da1ac7d156912df04aef83eefea5", "files": [{"path": "rich/cells.py", "Loc": {"(None, 'get_character_cell_size', 28)": {"mod": []}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["rich/cells.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "Textualize", "repo_name": "rich", "base_commit": "5c9161d0c48254fb579827249a9ee7d88f4589b7", "iss_html_url": "https://github.com/Textualize/rich/issues/1489", "iss_label": "Needs triage", "title": "[REQUEST] current item of a progress", "body": "when creating progress bars for logical items (that are then supported with additional progress pars,\r\ni would consider it helpful if it was possible to add a name/render able for the current item, and to push those in updates\r\n\r\ni`m not yet sure how this is best expressed/implemented", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "5c9161d0c48254fb579827249a9ee7d88f4589b7", "files": [{"path": "rich/progress.py", "Loc": {"('Progress', 'update', 739)": {"mod": []}}, "status": "modified"}, {"path": "rich/progress.py", "Loc": {"('Task', None, 437)": {"mod": [466]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["rich/progress.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "Textualize", "repo_name": "rich", "base_commit": "0aa85606ad9a7ca6b28a5ae376e433b8e59f6e80", "iss_html_url": "https://github.com/Textualize/rich/issues/2457", "iss_label": "bug", "title": "[BUG] Console(no_color=True) does not work on Windows 10", "body": "You may find a solution to your problem in the [docs](https://rich.readthedocs.io/en/latest/introduction.html) or [issues](https://github.com/willmcgugan/rich/issues).\r\n\r\n**Describe the bug**\r\n\r\nThe \"no_color=True\" Console parameter does not seem to do anything on Windows 10. I tested on both Cmder and native cmd.exe terminals and got the same results. 
See screenshots below.\r\n\r\nCmder:\r\n![richbug01](https://user-images.githubusercontent.com/7690118/183566141-724f7390-f9f9-4063-bf31-b0144e391975.PNG)\r\n\r\ncmd.exe\r\n![richbug02](https://user-images.githubusercontent.com/7690118/183566181-5ef45bf6-366c-4c69-b6f8-6ad25d5aff41.PNG)\r\n\r\nfor reference, this is what it looks like from my Ubuntu laptop:\r\n\r\n![richbug-linux-ok](https://user-images.githubusercontent.com/7690118/183566308-62bbd545-1c90-4345-bd3c-a228ea0f5f35.png)\r\n\r\nAlso happy to help fix this if you can point me in the right direction. Thank you!\r\n\r\n**Platform**\r\n<details>\r\n<summary>Click to expand</summary>\r\n\r\nOS: Windows 10\r\n\r\n**Cmder:**\r\n\r\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 <class 'rich.console.Console'> \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\r\n\u2502 A high level console interface. \u2502\r\n\u2502 \u2502\r\n\u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u2502\r\n\u2502 \u2502 <console width=155 ColorSystem.WINDOWS> \u2502 \u2502\r\n\u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2502\r\n\u2502 \u2502\r\n\u2502 color_system = 'windows' \u2502\r\n\u2502 encoding = 'utf-8' \u2502\r\n\u2502 file = <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'> \u2502\r\n\u2502 height = 83 \u2502\r\n\u2502 is_alt_screen = False \u2502\r\n\u2502 is_dumb_terminal = False \u2502\r\n\u2502 is_interactive = True \u2502\r\n\u2502 is_jupyter = False \u2502\r\n\u2502 is_terminal = True \u2502\r\n\u2502 legacy_windows = True \u2502\r\n\u2502 no_color = False \u2502\r\n\u2502 options = ConsoleOptions( \u2502\r\n\u2502 size=ConsoleDimensions(width=155, height=83), \u2502\r\n\u2502 legacy_windows=True, \u2502\r\n\u2502 min_width=1, \u2502\r\n\u2502 max_width=155, \u2502\r\n\u2502 is_terminal=True, \u2502\r\n\u2502 encoding='utf-8', \u2502\r\n\u2502 max_height=83, \u2502\r\n\u2502 justify=None, \u2502\r\n\u2502 overflow=None, \u2502\r\n\u2502 no_wrap=False, \u2502\r\n\u2502 highlight=None, \u2502\r\n\u2502 markup=None, \u2502\r\n\u2502 height=None \u2502\r\n\u2502 ) \u2502\r\n\u2502 quiet = False \u2502\r\n\u2502 record = False \u2502\r\n\u2502 safe_box = True \u2502\r\n\u2502 size = ConsoleDimensions(width=155, height=83) \u2502\r\n\u2502 soft_wrap = False \u2502\r\n\u2502 stderr = False \u2502\r\n\u2502 style = None \u2502\r\n\u2502 tab_size = 8 \u2502\r\n\u2502 width = 155 
\u2502\r\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\r\n\u250c\u2500\u2500\u2500 <class 'rich._windows.WindowsConsoleFeatures'> \u2500\u2500\u2500\u2500\u2510\r\n\u2502 Windows features available. \u2502\r\n\u2502 \u2502\r\n\u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u2502\r\n\u2502 \u2502 WindowsConsoleFeatures(vt=False, truecolor=False) \u2502 \u2502\r\n\u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2502\r\n\u2502 \u2502\r\n\u2502 truecolor = False \u2502\r\n\u2502 vt = False \u2502\r\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\r\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500 Environment Variables \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\r\n\u2502 { \u2502\r\n\u2502 'TERM': 'cygwin', \u2502\r\n\u2502 'COLORTERM': None, \u2502\r\n\u2502 'CLICOLOR': None, \u2502\r\n\u2502 'NO_COLOR': None, \u2502\r\n\u2502 'TERM_PROGRAM': None, \u2502\r\n\u2502 'COLUMNS': '157', \u2502\r\n\u2502 'LINES': '83', \u2502\r\n\u2502 'JUPYTER_COLUMNS': None, \u2502\r\n\u2502 'JUPYTER_LINES': None, \u2502\r\n\u2502 'JPY_PARENT_PID': None, \u2502\r\n\u2502 'VSCODE_VERBOSE_LOGGING': None \u2502\r\n\u2502 } \u2502\r\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\r\nplatform=\"Windows\"\r\n\r\n\r\n**cmd.exe**\r\n\r\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 <class 'rich.console.Console'> \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u2502 A high level console interface. 
\u2502 \u2502 \u2502 \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u2502 \u2502 \u2502 <console width=119 ColorSystem.WINDOWS> \u2502 \u2502 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2502 \u2502 \u2502 \u2502 color_system = 'windows' \u2502 \u2502 encoding = 'utf-8' \u2502 \u2502 file = <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'> \u2502 \u2502 height = 30 \u2502 \u2502 is_alt_screen = False \u2502 \u2502 is_dumb_terminal = False \u2502 \u2502 is_interactive = True \u2502 \u2502 is_jupyter = False \u2502 \u2502 is_terminal = True \u2502 \u2502 legacy_windows = True \u2502 \u2502 no_color = False \u2502 \u2502 options = ConsoleOptions( \u2502 \u2502 size=ConsoleDimensions(width=119, height=30), \u2502 \u2502 legacy_windows=True, \u2502 \u2502 min_width=1, \u2502 \u2502 max_width=119, \u2502 \u2502 is_terminal=True, \u2502 \u2502 encoding='utf-8', \u2502 \u2502 max_height=30, \u2502 \u2502 justify=None, \u2502 \u2502 overflow=None, \u2502 \u2502 no_wrap=False, \u2502 \u2502 highlight=None, \u2502 \u2502 markup=None, \u2502 \u2502 height=None \u2502 \u2502 ) \u2502 \u2502 quiet = False \u2502 \u2502 record = False \u2502 \u2502 safe_box = True \u2502 \u2502 size = ConsoleDimensions(width=119, height=30) \u2502 \u2502 soft_wrap = False \u2502 \u2502 stderr = False \u2502 \u2502 style = None \u2502 \u2502 tab_size = 8 \u2502 \u2502 width = 119 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u250c\u2500\u2500\u2500 <class 'rich._windows.WindowsConsoleFeatures'> \u2500\u2500\u2500\u2500\u2510 \u2502 Windows features available. 
\u2502 \u2502 \u2502 \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u2502 \u2502 \u2502 WindowsConsoleFeatures(vt=False, truecolor=False) \u2502 \u2502 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2502 \u2502 \u2502 \u2502 truecolor = False \u2502 \u2502 vt = False \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u250c\u2500\u2500\u2500\u2500\u2500\u2500 Environment Variables \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u2502 { \u2502 \u2502 'TERM': None, \u2502 \u2502 'COLORTERM': None, \u2502 \u2502 'CLICOLOR': None, \u2502 \u2502 'NO_COLOR': None, \u2502 \u2502 'TERM_PROGRAM': None, \u2502 \u2502 'COLUMNS': None, \u2502 \u2502 'LINES': None, \u2502 \u2502 'JUPYTER_COLUMNS': None, \u2502 \u2502 'JUPYTER_LINES': None, \u2502 \u2502 'JPY_PARENT_PID': None, \u2502 \u2502 'VSCODE_VERBOSE_LOGGING': None \u2502 \u2502 } \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 platform=\"Windows\" \r\n\r\nrich==12.5.1\r\n\r\n</details>\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "0aa85606ad9a7ca6b28a5ae376e433b8e59f6e80", "files": [{"path": "rich/console.py", "Loc": {"('Console', None, 583)": {"mod": [612]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["rich/console.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ytdl-org", "repo_name": "youtube-dl", "base_commit": "427cc215310804127b55744fcc3664ede38a4a0d", "iss_html_url": "https://github.com/ytdl-org/youtube-dl/issues/21363", "iss_label": "question", "title": "How does youtube-dl detect advertisements?", "body": "<!--\r\n\r\n######################################################################\r\n WARNING!\r\n IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE\r\n######################################################################\r\n\r\n-->\r\n\r\n\r\n## Checklist\r\n\r\n<!--\r\nCarefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:\r\n- Look through the README (http://yt-dl.org/readme) and FAQ (http://yt-dl.org/faq) for similar questions\r\n- Search the bugtracker for similar questions: http://yt-dl.org/search-issues\r\n- Finally, put x into all relevant boxes (like this [x])\r\n-->\r\n\r\n- [x] I'm asking a question\r\n- [x] I've looked through the README and FAQ for similar 
questions\r\n- [x] I've searched the bugtracker for similar questions including closed ones\r\n\r\n\r\n## Question\r\n\r\n<!--\r\nAsk your question in an arbitrary form. Please make sure it's worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient.\r\n-->\r\n\r\nFox Sports Go recently changed their streaming service. Previously, I used to be able to record live streams and download event replays by passing headers into streamlink. However, recording live with streamlink \"works\" just fine, but because commercials have some kind of different codec than the actual content, I can't do anything with the resulting .ts file.\r\n\r\nHowever, I can download replays from FOX.com just fine, using a youtube-dl command like this: `youtube-dl --hls-prefer-native -f 3750 https://content-auso1.uplynk.com/preplay2/6f324d0648b34576b36ce49160add428/391dec8c1a9a07b70d3062e4bf1a6e3c/4sQNPrWNbJWMzPMP2RXiNy2SFAhlIDUYbUwS2TJwN94h.m3u8?pbs=38dc148aad7c4a7f981a8dd57493a625`\r\n\r\nThe big problems with this are that a) I have to wait until a replay is posted; and b) FOX is very inconsistent as to which events get replays posted and which do not, meaning I'm SOL if I'm trying to save an event that just doesn't have a replay for some reason. If I could record live, this wouldn't be an issue, but again, the commercials are throwing things off.\r\n\r\nOne of the output lines from youtube-dl is `[hlsnative] Total fragments: 1815 (not including 504 ad)`.\r\n\r\nSo my question is: how does youtube-dl detect which segments are ads in the .m3u8 file? If I can figure that out, perhaps I can rig streamlink to ignore those segments when recording, saving me a lot of trouble.\r\n\r\nThanks!\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "427cc215310804127b55744fcc3664ede38a4a0d", "files": [{"path": "youtube_dl/downloader/hls.py", "Loc": {"('HlsFD', 'is_ad_fragment_start', 78)": {"mod": []}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["youtube_dl/downloader/hls.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ytdl-org", "repo_name": "youtube-dl", "base_commit": "8b7340a45eb0e3aeaa996896ff8690b6c3a32af6", "iss_html_url": "https://github.com/ytdl-org/youtube-dl/issues/15955", "iss_label": "", "title": "use youtube-dl with cookies file in code not from command line ", "body": "## Please follow the guide below\r\n\r\n- You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly\r\n- Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`)\r\n- Use the *Preview* tab to see what your issue will actually look like\r\n\r\n---\r\n\r\n### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.03.20*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. 
Issues with outdated version will be rejected.\r\n- [ ] I've **verified** and **I assure** that I'm running youtube-dl **2018.03.20**\r\n\r\n### Before submitting an *issue* make sure you have:\r\n- [ ] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections\r\n- [ ] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones\r\n- [ ] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser\r\n\r\n### What is the purpose of your *issue*?\r\n- [ ] Bug report (encountered problems with youtube-dl)\r\n- [ ] Site support request (request for adding support for a new site)\r\n- [ ] Feature request (request for a new functionality)\r\n- [X ] Question\r\n- [ ] Other\r\n\r\n---\r\n\r\n### The following sections concretize particular purposed issues, you can erase any section (the contents between triple ---) not applicable to your *issue*\r\n\r\n---\r\n\r\n### If the purpose of this *issue* is a *bug report*, *site support request* or you are not completely sure provide the full verbose output as follows:\r\n\r\nAdd the `-v` flag to **your command line** you run youtube-dl with (`youtube-dl -v <your command line>`), copy the **whole** output and insert it here. It should look similar to one below (replace it with **your** log inserted between triple ```):\r\n\r\n```\r\n[debug] System config: []\r\n[debug] User config: []\r\n[debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKcj']\r\n[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251\r\n[debug] youtube-dl version 2018.03.20\r\n[debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2\r\n[debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4\r\n[debug] Proxy map: {}\r\n...\r\n<end of log>\r\n```\r\n\r\n---\r\n\r\n### If the purpose of this *issue* is a *site support request* please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**):\r\n- Single video: https://www.youtube.com/watch?v=BaW_jenozKc\r\n- Single video: https://youtu.be/BaW_jenozKc\r\n- Playlist: https://www.youtube.com/playlist?list=PL4lCao7KL_QFVb7Iudeipvc2BCavECqzc\r\n\r\nNote that **youtube-dl does not support sites dedicated to [copyright infringement](https://github.com/rg3/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free)**. In order for site support request to be accepted all provided example URLs should not violate any copyrights.\r\n\r\n---\r\n\r\n### Description of your *issue*, suggested solution and other information\r\n\r\nExplanation of your *issue* in arbitrary form goes here. Please make sure the [description is worded well enough to be understood](https://github.com/rg3/youtube-dl#is-the-description-of-the-issue-itself-sufficient). 
Provide as much context and examples as possible.\r\nIf work on your *issue* requires account credentials please provide them or explain how one can obtain them.\r\n\r\n\r\n\r\n\r\n\r\n```\r\nfrom __future__ import unicode_literals\r\nimport youtube_dl\r\n\r\nydl_opts = {}\r\nwith youtube_dl.YoutubeDL(ydl_opts) as ydl:\r\n ydl.download(['https://www.youtube.com/watch?v=BaW_jenozKc'])\r\n```\r\nThis is for downloading a simple YouTube video. I need to know how to add the cookies file so that I can download from my account on linda. I'm trying to create a small downloader to help speed up the process; any idea how to add a cookies file?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "8b7340a45eb0e3aeaa996896ff8690b6c3a32af6", "files": [{"path": "youtube_dl/YoutubeDL.py", "Loc": {"('YoutubeDL', None, 113)": {"mod": [208]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["youtube_dl/YoutubeDL.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ytdl-org", "repo_name": "youtube-dl", "base_commit": "267d81962a0709f15f82f96b7aadbb5473a06992", "iss_html_url": "https://github.com/ytdl-org/youtube-dl/issues/16870", "iss_label": "", "title": "[bilibili]how can i download video on page2?", "body": "### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.06.25*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.\r\n- [x] I've **verified** and **I assure** that I'm running youtube-dl **2018.06.25**\r\n\r\n### What is the purpose of your *issue*?\r\n- [ ] Bug report (encountered problems with youtube-dl)\r\n- [ ] Site support request (request for adding support for a new site)\r\n- [ ] Feature request (request for a new functionality)\r\n- [x] Question\r\n- [ ] Other\r\n\r\nI am trying to use youtube-dl to download a video on bilibili, like https://www.bilibili.com/video/av18178195\r\n\r\nThe video has 2 pages, but when I type **youtube-dl -f 1 https://www.bilibili.com/video/av18178195**\r\nI just get the video on page 1; how can I get the video on page 2?\r\nI have seen this page https://github.com/rg3/youtube-dl/pull/16354\r\nbut when I use \r\n**youtube-dl -f 1 https://www.bilibili.com/video/av18178195/index_2.html** or \r\n**youtube-dl -f 1 https://www.bilibili.com/video/av18178195/?p=2**\r\n\r\nit still gets the same video from page 1.\r\nHow can I solve this problem? Thank you.\r\nIs this problem fixed? 
I use the standalone exe version.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "267d81962a0709f15f82f96b7aadbb5473a06992", "files": [{"path": "youtube_dl/extractor/bilibili.py", "Loc": {"('BiliBiliIE', None, 25)": {"mod": []}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["youtube_dl/extractor/bilibili.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ytdl-org", "repo_name": "youtube-dl", "base_commit": "eca1f0d115e6a2712ff0d5f6b25e3ded5e52db71", "iss_html_url": "https://github.com/ytdl-org/youtube-dl/issues/16883", "iss_label": "", "title": "[Feature request] Network retry, with configurability", "body": "I just ran some large youtube-dl scripts, and noticed afterwards that a few videos were missing.\r\n\r\nThis was probably due to intermittent network downtimes, and apparently youtube-dl doesn't do any network retry at all (I may be wrong).\r\n\r\nThus, I suggest adding an option named for example `--network-retry`, related to `--socket-timeout`. The default would be 0 to keep the current youtube-dl behavior, and I could configure it to something like 5.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "eca1f0d115e6a2712ff0d5f6b25e3ded5e52db71", "files": [{"path": "youtube_dl/options.py", "Loc": {"(None, 'parseOpts', 41)": {"mod": [458, 462]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["youtube_dl/options.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ytdl-org", "repo_name": "youtube-dl", "base_commit": "5014bd67c22b421207b2650d4dc874b95b36dda1", "iss_html_url": "https://github.com/ytdl-org/youtube-dl/issues/30539", "iss_label": "question", "title": "limited download speed", "body": "<!--\r\n\r\n######################################################################\r\n WARNING!\r\n IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE\r\n######################################################################\r\n\r\n-->\r\n\r\n\r\n## Checklist\r\n\r\n<!--\r\nCarefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:\r\n- Look through the README (http://yt-dl.org/readme) and FAQ (http://yt-dl.org/faq) for similar questions\r\n- Search the bugtracker for similar questions: http://yt-dl.org/search-issues\r\n- Finally, put x into all relevant boxes (like this [x])\r\n-->\r\n\r\n- [x] I'm asking a question\r\n- [x] I've looked through the README and FAQ for similar questions\r\n- [x] I've searched the bugtracker for similar questions including closed ones\r\n\r\n\r\n## Question\r\n\r\n<!--\r\nAsk your question in an arbitrary form. Please make sure it's worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient.\r\n-->\r\n\r\nWRITE QUESTION HERE\r\n\r\nhi .. for a few days now I have been experiencing a drop in download speed from the YouTube site when using youtube-dl .. can you fix it? I tried downloading videos from other websites and they download at full speed .. it only happens to me with the YouTube site .. 
I think they made some change to their platform ", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "5014bd67c22b421207b2650d4dc874b95b36dda1", "files": [{"path": "youtube_dl/extractor/youtube.py", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["youtube_dl/extractor/youtube.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "ytdl-org", "repo_name": "youtube-dl", "base_commit": "e90d175436e61e207e0b0cae7f699494dcf15922", "iss_html_url": "https://github.com/ytdl-org/youtube-dl/issues/9104", "iss_label": "", "title": "Chinese title was missing !", "body": "```\nroot@kangland:/var/www/ydy# youtube-dl -v w0dMz8RBG7g\n[debug] System config: []\n[debug] User config: []\n[debug] Command-line args: [u'-v', u'w0dMz8RBG7g']\n[debug] Encodings: locale ANSI_X3.4-1968, fs ANSI_X3.4-1968, out ANSI_X3.4-1968, pref ANSI_X3.4-1968\n[debug] youtube-dl version 2016.04.01\n[debug] Python version 2.7.6 - Linux-2.6.32-042stab113.11-i686-with-Ubuntu-14.04-trusty\n[debug] exe versions: none\n[debug] Proxy map: {}\n[youtube] w0dMz8RBG7g: Downloading webpage\n[youtube] w0dMz8RBG7g: Downloading video info webpage\n[youtube] w0dMz8RBG7g: Extracting video information\n[youtube] {22} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] w0dMz8RBG7g: Downloading player https://s.ytimg.com/yts/jsbin/player-en_US-vfli5QvRo/base.js\n[youtube] {43} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {18} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {5} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {36} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {17} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {136} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {247} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {135} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {244} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {134} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {243} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {133} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {242} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {160} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {278} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {140} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {171} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {249} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {250} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {251} signature length 41.43, html5 player en_US-vfli5QvRo\n[debug] Invoking downloader on 
u'https://r2---sn-a8au-vgqe.googlevideo.com/videoplayback?ms=au&mt=1460039622&pl=40&mv=m&key=yt6&pte=yes&mm=31&mn=sn-a8au-vgqe&sver=3&fexp=9407059%2C9416126%2C9416891%2C9420452%2C9422596%2C9423662%2C9426926%2C9427902%2C9428398%2C9432364&ratebypass=yes&ipbits=0&initcwndbps=26957500&expire=1460061513&upn=NhCteH8M5OA&mime=video%2Fmp4&axtags=tx%3D9417362&id=o-AEE-ylzEiNeRWF2HIs5_rsDGUftXqgxkV7V0eUSq7oZ4&dur=214.111&source=youtube&ip=2602%3Aff62%3A104%3Ae6%3A%3A&sparams=axtags%2Cdur%2Cid%2Cinitcwndbps%2Cip%2Cipbits%2Citag%2Clmt%2Cmime%2Cmm%2Cmn%2Cms%2Cmv%2Cpl%2Cpte%2Cratebypass%2Crequiressl%2Csource%2Cupn%2Cexpire&requiressl=yes&lmt=1458219184364643&itag=22&signature=B1E1AF27412C916392FF49F1D60F0771145BE274.DA5587721204D947940DB57A584188E732C36433'\n[download] Destination: Wanting - (You Exist In My Song) [Trad. Chinese] [Official Music Video]-w0dMz8RBG7g.mp4\n[download] 100% of 32.20MiB in 00:00\n\n```\n\n```\nroot@kangland:/var/www/ydy# locale\nLANG=\nLANGUAGE=\nLC_CTYPE=\"POSIX\"\nLC_NUMERIC=\"POSIX\"\nLC_TIME=\"POSIX\"\nLC_COLLATE=\"POSIX\"\nLC_MONETARY=\"POSIX\"\nLC_MESSAGES=\"POSIX\"\nLC_PAPER=\"POSIX\"\nLC_NAME=\"POSIX\"\nLC_ADDRESS=\"POSIX\"\nLC_TELEPHONE=\"POSIX\"\nLC_MEASUREMENT=\"POSIX\"\nLC_IDENTIFICATION=\"POSIX\"\nLC_ALL=\n```\n\n```\nroot@kangland:/var/www/ydy# locale -a\nC\nC.UTF-8\nPOSIX\nzh_CN.utf8\nzh_HK.utf8\nzh_TW.utf8\n```\n\n**Run :** `youtube-dl -f 'best[height=360]' --restrict-filenames -i -o '%(playlist)s/%(playlist_index)s - %(title)s.%(ext)s' PL1OKxDwI_y_AO1Lb-zO57wYdpWqhk7MUs`\n\n**Result :** [download] _/01 - _.mp4\n\nHow to fix chinese title ? \n\nThank you so much !\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "e90d175436e61e207e0b0cae7f699494dcf15922", "files": [{"path": "youtube_dl/options.py", "Loc": {"(None, 'parseOpts', 22)": {"mod": [447]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["youtube_dl/options.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "localstack", "repo_name": "localstack", "base_commit": "3794f1e20a56f3b7bcd23f82a006e266f2a57a05", "iss_html_url": "https://github.com/localstack/localstack/issues/2511", "iss_label": "type: usage", "title": "Cannot connect to DynamoDB from lambda", "body": "<!-- Love localstack? Please consider supporting our collective:\r\n\ud83d\udc49 https://opencollective.com/localstack/donate -->\r\n\r\n# Type of request: This is a ...\r\n\r\n- [x] bug report\r\n- [ ] feature request\r\n\r\n# Detailed description\r\nI'm using localstack for local development. 
I have a DynamoDB table named `readings` and I'd like \r\nto insert items from a lambda function.\r\nI have a simple function in the Python runtime:\r\n\r\n```python\r\nimport os\r\nimport boto3\r\n\r\ndef lambda_handler(events, context):\r\n DYNAMODB_ENDPOINT_URL = os.environ.get(\"DYNAMODB_ENDPOINT_URL\")\r\n DYNAMODB_READINGS_TABLE_NAME = os.environ.get(\"DYNAMODB_READINGS_TABLE_NAME\", \"readings\")  # the table is named \"readings\"\r\n dynamodb = boto3.resource(\"dynamodb\", endpoint_url=DYNAMODB_ENDPOINT_URL)\r\n readings_table = dynamodb.Table(DYNAMODB_READINGS_TABLE_NAME)\r\n\r\n readings_table.put_item(Item={\"reading_id\": \"10\", \"other\": \"test\"})\r\n```\r\n\r\nI cannot figure out what the proper endpoint URL is for my local DynamoDB.\r\nI have tried different combinations of `localhost`, `localstack` and ports `4566`, `4569`, and each time I get the error `EndpointConnectionError`.\r\n\r\n## Expected behavior\r\nItems are inserted in the table.\r\n\r\n## Actual behavior\r\nLambda cannot connect to DynamoDB and the error `[ERROR] EndpointConnectionError: Could not connect to the endpoint URL: \"http://localstack:4569/\"` is raised.\r\n\r\n# Steps to reproduce\r\n\r\nRun the localstack image with docker-compose, set `LOCALSTACK_HOSTNAME=localstack` and try to access the dynamodb resource from the lambda.\r\n\r\n## Command used to start LocalStack\r\ndocker-compose service I'm using:\r\n```yml\r\n localstack:\r\n image: localstack/localstack:0.11.2\r\n ports:\r\n - 4566:4566\r\n - 8080:8080\r\n environment:\r\n SERVICES: \"dynamodb,sqs,lambda,iam\"\r\n DATA_DIR: \"/tmp/localstack/data\"\r\n PORT_WEB_UI: \"8080\"\r\n LOCALSTACK_HOSTNAME: localstack\r\n LAMBDA_EXECUTOR: docker\r\n AWS_ACCESS_KEY_ID: \"test\"\r\n AWS_SECRET_ACCESS_KEY: \"test\"\r\n AWS_DEFAULT_REGION: \"us-east-1\"\r\n volumes:\r\n - localstack_volume:/tmp/localstack/data\r\n - /var/run/docker.sock:/var/run/docker.sock\r\n # When a container is started for the first time, it will execute files with extensions .sh that are found in /docker-entrypoint-initaws.d. \r\n # Files will be executed in alphabetical order. You can easily create aws resources on localstack using `awslocal` (or `aws`) cli tool in the initialization scripts.\r\n # Here I run creating dynamodb tables, roles, etc.\r\n - ./localstack-startup-scripts/:/docker-entrypoint-initaws.d/\r\n```\r\n\r\n## Client code (AWS SDK code snippet, or sequence of \"awslocal\" commands)\r\n\r\n...\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [{"Loc": [19], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "localstack", "repo_name": "localstack", "base_commit": "1c5f2e9650155a839cc842a9cd07faf3e76ed5d2", "iss_html_url": "https://github.com/localstack/localstack/issues/1078", "iss_label": "", "title": "Connect to localhost:4568 [localhost/127.0.0.1] failed: Connection refused (Connection refused)", "body": "Hi there, I am having trouble connecting to Kinesis on localstack. 
Everything runs fine when I run it locally, the error happens inside of our Jenkins pipeline.\r\n\r\nHere is the Dockerfile I am using:\r\n```\r\nFROM hseeberger/scala-sbt:8u181_2.12.7_1.2.6\r\n\r\nUSER root\r\nRUN apt-get update\r\nRUN apt-get -y install curl\r\nRUN curl -sL https://deb.nodesource.com/setup_8.x | bash -\r\nRUN apt-get -y install nodejs\r\nRUN apt-get install npm\r\nRUN curl -L \"https://github.com/docker/compose/releases/download/1.23.2/docker-compose-$(uname -s)-$(uname -m)\" -o /usr/local/bin/docker-compose\r\nRUN chmod +x /usr/local/bin/docker-compose\r\n```\r\n\r\nAnd here is my docker-compose.yml:\r\n```\r\nversion: '3.6'\r\n\r\nservices:\r\n # AWS services in docker env\r\n localstack:\r\n image: localstack/localstack:latest\r\n environment:\r\n - SERVICES=kinesis,dynamodb,s3,cloudwatch\r\n - HOSTNAME_EXTERNAL=localstack\r\n - DATA_DIR=/tmp/localstack/data\r\n volumes:\r\n - \"/tmp:/tmp\"\r\n ports:\r\n - \"4568:4568\"\r\n - \"4569:4569\"\r\n - \"4572:4572\"\r\n - \"4582:4582\"\r\n\r\n postgres:\r\n image: \"postgres:9.6\"\r\n restart: always\r\n ports:\r\n - \"5432:5432\"\r\n environment:\r\n POSTGRES_USER: dev\r\n POSTGRES_PASSWORD: *******\r\n POSTGRES_DB: table\r\n PGPASSWORD: *******\r\n volumes:\r\n - ./docker/postgres-init:/docker-entrypoint-initdb.d\r\n\r\n mocks:\r\n image: \"jordimartin/mmock\"\r\n volumes:\r\n - \"./docker/mocks:/config\"\r\n ports:\r\n - \"8082:8082\"\r\n - \"8083:8083\"\r\n - \"8084:8084\"\r\n\r\n aws-create-stream:\r\n image: \"ivonet/aws-cli:1.0.0\"\r\n links:\r\n - localstack\r\n volumes:\r\n - ${HOME}/.aws:/root/.aws:ro\r\n command: --endpoint-url=http://localstack:4568 kinesis create-stream --stream-name RawScanPipe --shard-count 1\r\n environment:\r\n - AWS_DEFAULT_REGION=us-east-1\r\n\r\n #PGAdmin gives a nice gui on the PostgreSQL DB\r\n pgadmin:\r\n image: dpage/pgadmin4\r\n links:\r\n - postgres\r\n depends_on:\r\n - postgres\r\n ports:\r\n - \"8888:80\"\r\n volumes:\r\n - ./docker/pgadmin:/var/lib/pgadmin\r\n environment:\r\n PGADMIN_DEFAULT_EMAIL: *********\r\n PGADMIN_DEFAULT_PASSWORD: *********\r\n```\r\n\r\nIn case it matters, here is the segment in our Jenkins file where this gets called:\r\n```\r\ndef sbtInside() {\r\n return \"-u root -v /usr/bin/docker:/usr/bin/docker \" +\r\n \"-v /usr/local/bin/aws:/usr/local/bin/aws \" +\r\n \"-v /var/run/docker.sock:/var/run/docker.sock \" +\r\n \"-v /usr/lib/x86_64-linux-gnu/libltdl.so.7:/usr/lib/libltdl.so.7 \" +\r\n \"-v $HOME/.ivy2:/root/.ivy2 \" +\r\n \"-v $HOME/.sbt:/root/.sbt\"\r\n}\r\n\r\n stage(\"Unit/Functional Tests & Create Dockerfile\") {\r\n app.inside(sbtInside()) {\r\n try {\r\n echo \"Starting unit tests...\"\r\n sh \"TARGET=LOCAL sbt clean test\"\r\n\r\n echo \"Starting up test stack...\"\r\n sh \"docker-compose -f docker-compose.yml up -d\"\r\n\r\n echo \"Starting functional tests...\"\r\n sh \"TARGET=LOCAL \" +\r\n \"PRODUCT_ENABLED=true \" +\r\n \"sbt clean functional/test\"\r\n } finally {\r\n echo \"Tests complete!\"\r\n sh \"docker-compose -f docker-compose.yml down -v\"\r\n sh \"sbt docker\"\r\n }\r\n }\r\n }\r\n```\r\n\r\nI am sure I am missing something simple, I just can't figure out what it is!", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "1c5f2e9650155a839cc842a9cd07faf3e76ed5d2", "files": [{"path": "docker-compose.yml", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": 
"Config\nCode"}, "loctype": {"code": [], "doc": ["docker-compose.yml"], "test": [], "config": [], "asset": []}}, {"organization": "localstack", "repo_name": "localstack", "base_commit": "1c5f2e9650155a839cc842a9cd07faf3e76ed5d2", "iss_html_url": "https://github.com/localstack/localstack/issues/1095", "iss_label": "", "title": "Healthcheck when running in docker", "body": "I'm running localstack with docker-compose as a dependency for a service that I'm developing. The problem is that my service calls localstack before it's fully initialized. The only solution I could find so far is a hard `sleep <seconds>` at start-up, but that only works on my specific system and produces unexpected results for other developers. Can localstack expose a healthcheck, so I can have docker-compose start my service after localstack is \"healthy\"?\r\n\r\nA trimmed down version of my docker-compose.yml looks something like this:\r\n```yaml\r\nmyservice:\r\n command: \"sh -c 'sleep 10 && npm run start'\" #grrrrr\r\n depends_on:\r\n - aws\r\n # aws:\r\n # condition: service_healthy\r\naws:\r\n image: localstack/localstack\r\n environment:\r\n SERVICES: s3:81,sqs:82,ses:83\r\n HOSTNAME_EXTERNAL: aws\r\n```", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "1c5f2e9650155a839cc842a9cd07faf3e76ed5d2", "files": [{"path": "docker-compose.yml", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Config"}, "loctype": {"code": [], "doc": ["docker-compose.yml"], "test": [], "config": [], "asset": []}}, {"organization": "localstack", "repo_name": "localstack", "base_commit": "5d11af78ae1d19560f696a9e1abb707bd115c390", "iss_html_url": "https://github.com/localstack/localstack/issues/4970", "iss_label": "type: bug\nstatus: triage needed\narea: configuration\naws:cloudformation\narea: networking", "title": "Lambda invocation exception", "body": "### Is there an existing issue for this?\r\n\r\n- [X] I have searched the existing issues\r\n\r\n### Current Behavior\r\n\r\nCreating and/or updating Lambda functions in docker does not work after updating LocalStack image to the latest version with the following error in LocalStack logs:\r\n```\r\n2021-11-20T03:33:32.357:DEBUG:localstack.services.awslambda.lambda_executors: Lambda arn:aws:lambda:us-east-2:000000000000:function:lambda-socket-local-custom-resource-apigw-cw-role result / log output:\r\n\r\n> standard_init_linux.go:228: exec user process caused: exec format error\r\n2021-11-20T03:33:32.814:INFO:localstack.services.awslambda.lambda_api: Error executing Lambda function arn:aws:lambda:us-east-2:000000000000:function:lambda-socket-local-custom-resource-apigw-cw-role: Lambda process returned with error. Result: . 
Output:\r\nstandard_init_linux.go:228: exec user process caused: exec format error Traceback (most recent call last):\r\n File \"/opt/code/localstack/localstack/services/awslambda/lambda_executors.py\", line 608, in run_lambda_executor\r\n result, log_output = self.execute_in_container(\r\n File \"/opt/code/localstack/.venv/lib/python3.8/site-packages/localstack_ext/services/awslambda/lambda_launcher.py.enc\", line 272, in docker_separate_execute_in_container\r\n File \"/opt/code/localstack/localstack/utils/docker_utils.py\", line 1335, in start_container\r\n raise ContainerException(\r\nlocalstack.utils.docker_utils.ContainerException: Docker container returned with exit code 1\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/opt/code/localstack/localstack/services/awslambda/lambda_api.py\", line 809, in run_lambda\r\n result = LAMBDA_EXECUTOR.execute(\r\n File \"/opt/code/localstack/localstack/services/awslambda/lambda_executors.py\", line 441, in execute\r\n return do_execute()\r\n File \"/opt/code/localstack/localstack/services/awslambda/lambda_executors.py\", line 431, in do_execute\r\n return _run(func_arn=func_arn)\r\n File \"/opt/code/localstack/localstack/utils/cloudwatch/cloudwatch_util.py\", line 158, in wrapped\r\n raise e\r\n File \"/opt/code/localstack/localstack/utils/cloudwatch/cloudwatch_util.py\", line 154, in wrapped\r\n result = func(*args, **kwargs)\r\n File \"/opt/code/localstack/localstack/services/awslambda/lambda_executors.py\", line 418, in _run\r\n raise e\r\n File \"/opt/code/localstack/localstack/services/awslambda/lambda_executors.py\", line 414, in _run\r\n result = self._execute(lambda_function, inv_context)\r\n File \"/opt/code/localstack/localstack/services/awslambda/lambda_executors.py\", line 726, in _execute\r\n result = self.run_lambda_executor(lambda_function=lambda_function, inv_context=inv_context)\r\n File \"/opt/code/localstack/.venv/lib/python3.8/site-packages/localstack_ext/services/awslambda/lambda_extended.py.enc\", line 548, in run_lambda_executor\r\n File \"/opt/code/localstack/localstack/services/awslambda/lambda_executors.py\", line 649, in run_lambda_executor\r\n raise InvocationException(\r\nlocalstack.services.awslambda.lambda_executors.InvocationException: Lambda process returned with error. Result: . Output:\r\nstandard_init_linux.go:228: exec user process caused: exec format error\r\n\r\n2021-11-20T03:33:55.187:INFO:localstack_ext.services.cloudformation.service_models: Unable to fetch CF custom resource result from s3://localstack-cf-custom-resources-results/62c433d4 . Existing keys: []\r\n2021-11-20T03:33:55.189:DEBUG:localstack.utils.cloudformation.template_deployer: Error applying changes for CloudFormation stack \"lambda-socket-local\": An error occurred (NoSuchKey) when calling the GetObject operation: The specified key does not exist. 
Traceback (most recent call last):\r\n File \"/opt/code/localstack/localstack/utils/cloudformation/template_deployer.py\", line 1482, in _run\r\n self.do_apply_changes_in_loop(changes, stack, stack_name)\r\n File \"/opt/code/localstack/localstack/utils/cloudformation/template_deployer.py\", line 1554, in do_apply_changes_in_loop\r\n self.apply_change(change, stack, new_resources, stack_name=stack_name)\r\n File \"/opt/code/localstack/localstack/utils/cloudformation/template_deployer.py\", line 1619, in apply_change\r\n result = deploy_resource(resource_id, new_resources, stack_name)\r\n File \"/opt/code/localstack/localstack/utils/cloudformation/template_deployer.py\", line 778, in deploy_resource\r\n result = execute_resource_action(resource_id, resources, stack_name, ACTION_CREATE)\r\n File \"/opt/code/localstack/localstack/utils/cloudformation/template_deployer.py\", line 829, in execute_resource_action\r\n result = func[\"function\"](resource_id, resources, resource_type, func, stack_name)\r\n File \"/opt/code/localstack/.venv/lib/python3.8/site-packages/localstack_ext/services/cloudformation/models/custom.py\", line 61, in create_custom_resource\r\n result=retry(fetch_result,retries=KIGak(CUSTOM_RESOURCES_RESULT_POLL_TIMEOUT/2),sleep=2)\r\n File \"/opt/code/localstack/localstack/utils/common.py\", line 812, in retry\r\n raise raise_error\r\n File \"/opt/code/localstack/localstack/utils/common.py\", line 808, in retry\r\n return function(**kwargs)\r\n File \"/opt/code/localstack/.venv/lib/python3.8/site-packages/localstack_ext/services/cloudformation/models/custom.py\", line 58, in fetch_result\r\n return aws_utils.download_s3_object(CUSTOM_RESOURCES_RESULTS_BUCKET,result_key)\r\n File \"/opt/code/localstack/.venv/lib/python3.8/site-packages/localstack_ext/utils/aws/aws_utils.py.enc\", line 31, in download_s3_object\r\n File \"/opt/code/localstack/.venv/lib/python3.8/site-packages/botocore/client.py\", line 391, in _api_call\r\n return self._make_api_call(operation_name, kwargs)\r\n File \"/opt/code/localstack/.venv/lib/python3.8/site-packages/botocore/client.py\", line 719, in _make_api_call\r\n raise error_class(parsed_response, operation_name)\r\nbotocore.errorfactory.NoSuchKey: An error occurred (NoSuchKey) when calling the GetObject operation: The specified key does not exist.\r\n```\r\n\r\n### Expected Behavior\r\n\r\nLambda create and/or update operations should pass successfully all the way to the end without any errors.\r\n\r\n### How are you starting LocalStack?\r\n\r\nWith a docker-compose file\r\n\r\n### Steps To Reproduce\r\n\r\n#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)\r\n\r\n```yml\r\nservices:\r\n localstack:\r\n container_name: localstack\r\n image: localstack/localstack\r\n ports:\r\n - 443:443\r\n - 4510-4530:4510-4530\r\n - 4566:4566\r\n - 4571:4571\r\n environment:\r\n - LOCALSTACK_API_KEY=${LOCALSTACK_LICENSE}\r\n - USE_LIGHT_IMAGE=1\r\n - IMAGE_NAME=localstack/localstack\r\n - MAIN_CONTAINER_NAME=localstack\r\n - SERVICES=cloudformation,cloudfront,apigateway,apigatewayv2,iam,secretsmanager,lambda,s3,sqs,sts,ec2,kafka,elb,elbv2\r\n - DEFAULT_REGION=us-east-1\r\n - AWS_ACCESS_KEY_ID=test\r\n - AWS_SECRET_ACCESS_KEY=test\r\n - EAGER_SERVICE_LOADING=1\r\n - S3_SKIP_SIGNATURE_VALIDATION=1\r\n - DEBUG=1\r\n volumes:\r\n - /var/run/docker.sock:/var/run/docker.sock\r\n network_mode: bridge\r\n```\r\n\r\n#### Client commands (e.g., AWS SDK code snippet, or sequence of \"awslocal\" commands)\r\n\r\nA test case 
available at [GitHub](https://github.com/abbaseya/localstack-msk-lambda-test) - test command `./socket.sh`\r\n\r\n\r\n### Environment\r\n\r\n```markdown\r\n- OS: macOS 12.0.1\r\n- LocalStack: latest\r\n- AWS CLI: 2.2.35\r\n```\r\n\r\n\r\n### Anything else?\r\n\r\n#4932 ", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [{"Loc": [96], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "localstack", "repo_name": "localstack", "base_commit": "c07094dbf52c947e77d952825eb4daabf409655d", "iss_html_url": "https://github.com/localstack/localstack/issues/5516", "iss_label": "type: bug\nstatus: triage needed\nstatus: response required\naws:cognito", "title": "bug: JWT ID Token issued by cognito-idp can not be verified in v0.14.0 but can in 0.11.5", "body": "### Is there an existing issue for this?\r\n\r\n- [X] I have searched the existing issues\r\n\r\n### Current Behavior\r\n\r\nJWT tokens issued by cognito can not be verified.\r\n\r\n### Expected Behavior\r\n\r\nJWT tokens issues by cognito should be verifiable.\r\n\r\n### How are you starting LocalStack?\r\n\r\nWith the `localstack` script\r\n\r\n### Steps To Reproduce\r\n\r\n#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)\r\n\r\n`LOCALSTACK_API_KEY={MY_KEY} SERVICES=cognito-idp,iam,lambda,cloudformation,s3,s3api,sts DISABLE_CORS_CHECKS=1 localstack start`\r\n\r\n`LocalStack CLI 0.14.0.1`\r\n`LocalStack version: 0.14.0`\r\n\r\n#### Client commands (e.g., AWS SDK code snippet, or sequence of \"awslocal\" commands)\r\nCreate the following files in some directory:\r\n`package.json` file:\r\n```json\r\n{\r\n \"name\": \"localstack-jwt\",\r\n \"version\": \"1.0.0\",\r\n \"description\": \"\",\r\n \"main\": \"index.js\",\r\n \"scripts\": {\r\n \"test\": \"echo \\\"Error: no test specified\\\" && exit 1\"\r\n },\r\n \"keywords\": [],\r\n \"author\": \"\",\r\n \"license\": \"ISC\",\r\n \"dependencies\": {\r\n \"jsonwebtoken\": \"^8.5.1\",\r\n \"jwk-to-pem\": \"^2.0.5\",\r\n \"node-fetch\": \"^2.6.7\"\r\n }\r\n}\r\n\r\n```\r\n`create-user-pool.json` file:\r\n\r\n```json\r\n{\r\n \"PoolName\": \"test\",\r\n \"Policies\": {\r\n \"PasswordPolicy\": {\r\n \"MinimumLength\": 6,\r\n \"RequireUppercase\": false,\r\n \"RequireLowercase\": false,\r\n \"RequireNumbers\": false,\r\n \"RequireSymbols\": false,\r\n \"TemporaryPasswordValidityDays\": 5\r\n }\r\n },\r\n \"AdminCreateUserConfig\": {\r\n \"AllowAdminCreateUserOnly\": false,\r\n \"UnusedAccountValidityDays\": 5\r\n }\r\n}\r\n\r\n```\r\n\r\n`localstack.js` file:\r\n```javascript\r\nconst jwkToPem = require('jwk-to-pem');\r\nconst jwt = require('jsonwebtoken');\r\nconst ps = require('process');\r\nconst fetch = require('node-fetch');\r\n(async () => {\r\n const token = ps.argv[2];\r\n console.log('<== TOKEN:', token);\r\n console.log('==> http://localhost:4566/userpool/.well-known/jwks.json')\r\n const jwksResponse = await fetch('http://localhost:4566/userpool/.well-known/jwks.json');\r\n const jwks = await jwksResponse.json();\r\n console.log('<==', jwks);\r\n\r\n let decodedToken = jwt.decode(token, { complete: true });\r\n console.log('DECODED TOKEN:', decodedToken);\r\n const publicKey = jwkToPem(jwks.keys[0]);\r\n console.log('PUBLIC KEY:', publicKey);\r\n try {\r\n const decoded = 
jwt.verify(token, publicKey);\r\n console.log('!!! JWT is valid');\r\n } catch (err) {\r\n console.error('!!! ERROR:', err.message);\r\n }\r\n\r\n})();\r\n```\r\n\r\n`setup.sh` file:\r\n```bash\r\n#!/bin/bash\r\necho \"Creating User Pool\"\r\nUSERNAME=user1\r\nPASSWORD=password1\r\nUSER_POOL_ID=$( aws --endpoint-url=http://localhost:4566 cognito-idp create-user-pool \\\r\n --pool-name test \\\r\n --cli-input-json file://create-user-pool.json | jq -r '.UserPool.Id' )\r\necho \"User Pool Created: ${USER_POOL_ID}\"\r\n\r\necho \"Creating User Pool Client\"\r\nCLIENT_ID=$( aws --endpoint-url=http://localhost:4566 cognito-idp create-user-pool-client \\\r\n--user-pool-id \"$USER_POOL_ID\" \\\r\n--client-name client \\\r\n--explicit-auth-flows ALLOW_USER_PASSWORD_AUTH | jq -r '.UserPoolClient.ClientId')\r\necho \"User Pool Created: ${CLIENT_ID}\"\r\n\r\necho \"Sign Up User: user1/password1\"\r\naws --endpoint-url=http://localhost:4566 cognito-idp sign-up \\\r\n --client-id \"$CLIENT_ID\" \\\r\n --username \"$USERNAME\" \\\r\n --password \"$PASSWORD\" && echo \"Sign Up Success\" || echo \"Failed to Sign Up\"\r\n\r\necho \"Please enter confirmation code printed in terminal with 'localstack start' and hit Enter:\"\r\nread CONFIRMATION_CODE\r\n\r\naws --endpoint-url=http://localhost:4566 cognito-idp confirm-sign-up \\\r\n --client-id \"$CLIENT_ID\" \\\r\n --username \"$USERNAME\" \\\r\n --confirmation-code \"$CONFIRMATION_CODE\" && echo \"User Confirmed\" || echo \"Unable to confirm\"\r\n\r\necho \"Authenticating User\"\r\nID_TOKEN=$( aws --endpoint-url=http://localhost:4566 cognito-idp initiate-auth \\\r\n --auth-flow USER_PASSWORD_AUTH \\\r\n --client-id \"$CLIENT_ID\" \\\r\n --auth-parameters USERNAME=\"$USERNAME\",PASSWORD=\"$PASSWORD\" | jq -r '.AuthenticationResult.IdToken' )\r\n\r\necho \"Validating ID TOKEN\"\r\nnode localstack.js \"$ID_TOKEN\"\r\n\r\n```\r\n\r\n## Run\r\n* `npm install`\r\n* start localstack `LOCALSTACK_API_KEY={MY_KEY} SERVICES=cognito-idp,iam,lambda,cloudformation,s3,s3api,sts DISABLE_CORS_CHECKS=1 localstack start`\r\n* run `./setup.sh`\r\n* script will ask for confirmation code printed in localstack console\r\n* finally script will output `!!! ERROR: invalid signature`\r\n\r\n## Try the same with 0.11.5\r\n* `./setup.sh` will print `!!! JWT is valid`\r\n\r\n\r\n### Environment\r\n\r\n```markdown\r\n- OS: MacOS Monterey 12.2.1\r\n- LocalStack: 0.14.0\r\n```\r\n\r\n\r\n### Anything else?\r\n\r\nRepository with scripts you can use to reproduce issue: https://github.com/poul-kg/localstack-jwt", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [{"Loc": [82], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "OpenInterpreter", "repo_name": "open-interpreter", "base_commit": "dee41b6932a0d9b5569b1abf9144b7ffd8c3c7ad", "iss_html_url": "https://github.com/OpenInterpreter/open-interpreter/issues/499", "iss_label": "Bug", "title": "raise Exception(\"`interpreter.chat()` requires a display. 
Set `display=True` or pass a message into `interpreter.chat(message)`.\")", "body": "### Describe the bug\n\nFresh install on ubuntu 22,\r\nI'm using interpreter in terminal.\r\n\r\nAfter sending a prompt, at some point on the answer the program crashes\r\n```\r\n\r\n> Traceback (most recent call last):\r\n File \"/home/fauxprophet/Documents/Ops/openai/bin/interpreter\", line 8, in <module>\r\n sys.exit(cli())\r\n File \"/home/fauxprophet/Documents/Ops/openai/lib/python3.10/site-packages/interpreter/core/core.py\", line 21, in cli\r\n cli(self)\r\n File \"/home/fauxprophet/Documents/Ops/openai/lib/python3.10/site-packages/interpreter/cli/cli.py\", line 145, in cli\r\n interpreter.chat()\r\n File \"/home/fauxprophet/Documents/Ops/openai/lib/python3.10/site-packages/interpreter/core/core.py\", line 65, in chat\r\n for _ in self._streaming_chat(message=message, display=display):\r\n File \"/home/fauxprophet/Documents/Ops/openai/lib/python3.10/site-packages/interpreter/core/core.py\", line 86, in _streaming_chat\r\n yield from terminal_interface(self, message)\r\n File \"/home/fauxprophet/Documents/Ops/openai/lib/python3.10/site-packages/interpreter/terminal_interface/terminal_interface.py\", line 50, in terminal_interface\r\n for chunk in interpreter.chat(message, display=False, stream=True):\r\n File \"/home/fauxprophet/Documents/Ops/openai/lib/python3.10/site-packages/interpreter/core/core.py\", line 106, in _streaming_chat\r\n raise Exception(\"`interpreter.chat()` requires a display. Set `display=True` or pass a message into `interpreter.chat(message)`.\")\r\nException: `interpreter.chat()` requires a display. Set `display=True` or pass a message into `interpreter.chat(message)`.\r\n\r\n```\r\n\n\n### Reproduce\n\n1. open terminal\r\n2. run cmd : \"interpreter\"\r\n3. ask something like \"can you change the color of my termninal? provide me with a few different options, and let me choose using a keystroke (1,2,3)?\"\r\n4. Wait for answers\r\n5. 
While answering the program crashes\n\n### Expected behavior\n\nNot crash\n\n### Screenshots\n\n_No response_\n\n### Open Interpreter version\n\n0.1.5\n\n### Python version\n\n3.10.12\n\n### Operating System name and version\n\nUbuntu 22\n\n### Additional context\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "dee41b6932a0d9b5569b1abf9144b7ffd8c3c7ad", "files": [{"path": "interpreter/core/core.py", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["interpreter/core/core.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "OpenInterpreter", "repo_name": "open-interpreter", "base_commit": "1bb7b19eeb4264f0d7b6410409af6f1cdbf31f3d", "iss_html_url": "https://github.com/OpenInterpreter/open-interpreter/issues/15", "iss_label": "", "title": "Error: cannot import name 'cli' from 'interpreter'", "body": "```console\r\n\r\n\u2570\u2500$ uname -a\r\nLinux lab 6.2.0-26-generic #26~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Jul 13 16:27:29 UTC 2 x86_64 x86_64 x86_64 GNU/Linux\r\n\r\n\u2570\u2500$ pip --version 1 \u21b5\r\npip 22.0.2 from /usr/lib/python3/dist-packages/pip (python 3.10)\r\n\r\n\u2570\u2500$ interpreter \r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/interpreter\", line 5, in <module>\r\n from interpreter import cli\r\nImportError: cannot import name 'cli' from 'interpreter' (unknown location)\r\n\r\n```", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "1bb7b19eeb4264f0d7b6410409af6f1cdbf31f3d", "files": [{"path": "interpreter/interpreter.py", "Loc": {"(None, None, None)": {"mod": [1]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["interpreter/interpreter.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "OpenInterpreter", "repo_name": "open-interpreter", "base_commit": "36ec07125efec86594c91e990f68e0ab214e7edf", "iss_html_url": "https://github.com/OpenInterpreter/open-interpreter/issues/1548", "iss_label": "", "title": "run interpreter --model ollama/qwen2.5:3b error", "body": "### Bug Description\r\n\r\nWhen executing the command `interpreter --model ollama/qwen2.5:3b`, an error occurs with the specific error message:\r\n\r\n```\r\njson.decoder.JSONDecodeError: Unterminated string starting at: line 1 column 2 (char 1)\r\n```\r\n\r\nThis error indicates that there is an unterminated string while trying to parse a JSON string, which usually happens when the response data is incomplete or improperly formatted.\r\n\r\n### Error Log\r\n\r\n```plaintext\r\n\r\nC:\\Users\\unsia>interpreter --model ollama/qwen2.5:3b\r\n\r\n\u258c Model set to ollama/qwen2.5:3b\r\n\r\nLoading qwen2.5:3b...\r\n\r\nTraceback (most recent call last):\r\n File \"<frozen runpy>\", line 198, in _run_module_as_main\r\n File \"<frozen runpy>\", line 88, in _run_code\r\n File \"C:\\Users\\unsia\\AppData\\Local\\Programs\\Python\\Python311\\Scripts\\interpreter.exe\\__main__.py\", line 7, in <module>\r\n File \"C:\\Users\\unsia\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\interpreter\\terminal_interface\\start_terminal_interface.py\", line 612, in main\r\n start_terminal_interface(interpreter)\r\n File 
\"C:\\Users\\unsia\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\interpreter\\terminal_interface\\start_terminal_interface.py\", line 560, in start_terminal_interface\r\n validate_llm_settings(\r\n File \"C:\\Users\\unsia\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\interpreter\\terminal_interface\\validate_llm_settings.py\", line 109, in validate_llm_settings\r\n interpreter.llm.load()\r\n File \"C:\\Users\\unsia\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\interpreter\\core\\llm\\llm.py\", line 397, in load\r\n self.interpreter.computer.ai.chat(\"ping\")\r\n File \"C:\\Users\\unsia\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\interpreter\\core\\computer\\ai\\ai.py\", line 134, in chat\r\n for chunk in self.computer.interpreter.llm.run(messages):\r\n File \"C:\\Users\\unsia\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\interpreter\\core\\llm\\llm.py\", line 322, in run\r\n yield from run_tool_calling_llm(self, params)\r\n File \"C:\\Users\\unsia\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\interpreter\\core\\llm\\run_tool_calling_llm.py\", line 178, in run_tool_calling_llm\r\n for chunk in llm.completions(**request_params):\r\n File \"C:\\Users\\unsia\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\interpreter\\core\\llm\\llm.py\", line 466, in fixed_litellm_completions\r\n raise first_error # If all attempts fail, raise the first error\r\n ^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\unsia\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\interpreter\\core\\llm\\llm.py\", line 443, in fixed_litellm_completions\r\n yield from litellm.completion(**params)\r\n File \"C:\\Users\\unsia\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\litellm\\llms\\ollama.py\", line 455, in ollama_completion_stream\r\n raise e\r\n File \"C:\\Users\\unsia\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\litellm\\llms\\ollama.py\", line 433, in ollama_completion_stream\r\n function_call = json.loads(response_content)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\unsia\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\json\\__init__.py\", line 346, in loads\r\n return _default_decoder.decode(s)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\unsia\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\json\\decoder.py\", line 337, in decode\r\n obj, end = self.raw_decode(s, idx=_w(s, 0).end())\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\unsia\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\json\\decoder.py\", line 353, in raw_decode\r\n obj, end = self.scan_once(s, idx)\r\n ^^^^^^^^^^^^^^^^^^^^^^\r\njson.decoder.JSONDecodeError: Unterminated string starting at: line 1 column 2 (char 1)\r\n\r\n\r\n```\r\n\r\n### Analysis Process\r\n\r\n- **Call Stack**: The error occurs in the file `litellm/llms/ollama.py` when attempting to parse the model's response using `json.loads(response_content)`.\r\n- **Potential Causes**:\r\n - The format of the data returned by the model may not meet expectations.\r\n - It might be due to network issues, server-side problems, or the model's response format being non-compliant, leading to empty or partial responses from the model.\r\n\r\n### Suggested Solutions\r\n\r\n1. **Check the Model's Response**: Ensure that the API response from the model is complete and properly formatted as JSON. Debugging can be facilitated by printing out `response_content`.\r\n2. 
**Catch Errors and Print More Information**: Before calling `json.loads()`, add checks to ensure that `response_content` is indeed a valid JSON string.\r\n\r\nExample Code:\r\n\r\n```python\r\nif response_content:\r\n    try:\r\n        parsed_data = json.loads(response_content)\r\n    except json.JSONDecodeError as e:\r\n        print(f\"JSON Decode Error: {e}\")\r\n        print(f\"Response content: {response_content}\")\r\nelse:\r\n    print(\"Empty response content\")\r\n```\r\n\r\n### Steps to Reproduce\r\n\r\nTo be filled with specific steps to reproduce this issue.\r\n\r\n### Expected Behavior\r\n\r\nTo be filled with the expected behavior from the user's perspective.\r\n\r\n### Environment Information\r\n\r\n- **Open Interpreter Version**: Open Interpreter 0.4.3 Developer Preview\r\n- **Python Version**: Python 3.11.0\r\n- **Operating System**: Windows 11\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "36ec07125efec86594c91e990f68e0ab214e7edf", "files": [{"path": "docs/usage/terminal/arguments.mdx", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc\n"}, "loctype": {"code": [], "doc": ["docs/usage/terminal/arguments.mdx"], "test": [], "config": [], "asset": []}}, {"organization": "OpenInterpreter", "repo_name": "open-interpreter", "base_commit": "8fb4668dc7451ac58ac57ba587ed77194469f739", "iss_html_url": "https://github.com/OpenInterpreter/open-interpreter/issues/1175", "iss_label": "", "title": "Error when importing interpreter", "body": "### Describe the bug\n\nI have the following error when I try to import interpreter:\r\n```\r\nTraceback (most recent call last):\r\n  File \"/home/seba/workspace/AutoProgrammer/interpreter.py\", line 1, in <module>\r\n    from interpreter import interpreter\r\n  File \"/home/seba/workspace/AutoProgrammer/interpreter.py\", line 1, in <module>\r\n    from interpreter import interpreter\r\nImportError: cannot import name 'interpreter' from partially initialized module 'interpreter' (most likely due to a circular import)\r\n```\r\nI'm not a Python expert, but I can't figure out what I did wrong. I installed open-interpreter with pip, with pip in a venv, and with conda, but nothing helps. Other libs like crewai have no problem with imports.\n\n### Reproduce\n\n1. install open-interpreter\r\n2. import `from interpreter import interpreter` in a .py file\r\n3.
run the file\n\n### Expected behavior\n\nImport works\n\n### Screenshots\n\n_No response_\n\n### Open Interpreter version\n\n0.2.4\n\n### Python version\n\n3.11.8\n\n### Operating System name and version\n\nFedora\n\n### Additional context\n\nTested with open-interpreter `0.2.0` and `0.2.4`, python `3.10` and `3.11`", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [{"path": "/home/seba/workspace/AutoProgrammer/interpreter.py"}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": ["/home/seba/workspace/AutoProgrammer/interpreter.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "abi", "repo_name": "screenshot-to-code", "base_commit": "3bc25680529cdb6b5d407c8332e820aeb2e0b948", "iss_html_url": "https://github.com/abi/screenshot-to-code/issues/66", "iss_label": "", "title": "WebSocket error code", "body": "\r\n\"Your demonstration website has the same error, please take a look.\"", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "3bc25680529cdb6b5d407c8332e820aeb2e0b948", "files": [{"path": "docker-compose.yml", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Config"}, "loctype": {"code": [], "doc": ["docker-compose.yml"], "test": [], "config": [], "asset": []}}, {"organization": "abi", "repo_name": "screenshot-to-code", "base_commit": "2f88cf9b2568163954ecc7c20ef9879263bfc9ba", "iss_html_url": "https://github.com/abi/screenshot-to-code/issues/476", "iss_label": "", "title": "Error generating code. Please contact support.", "body": "I have already started the project, both frontend and backend, but when placing the image I get the following error: \"Error generating code. Please contact support.\" Could you help me with this problem?\r\n![image](https://github.com/user-attachments/assets/a71c97fe-c3c2-419e-b036-0a74ee577279)\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [], "ass_file_loc": [".env"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "1", "info_type": "Other\nenvironment variable\na misreading of one loc in the doc"}, "loctype": {"code": [], "doc": [], "test": [], "config": [".env"], "asset": []}}, {"organization": "abi", "repo_name": "screenshot-to-code", "base_commit": "4e30b207c1ee9ddad05a37c31a11ac5a182490b7", "iss_html_url": "https://github.com/abi/screenshot-to-code/issues/270", "iss_label": "", "title": "Error configuring ANTHROPIC API KEY in .env file", "body": "I added \"ANTHROPIC_API_KEY=s****\" to the .env file\r\n\r\n\"No Anthropic API key found.
Please add the environment variable ANTHROPIC_API_KEY to backend/.env\"\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "4e30b207c1ee9ddad05a37c31a11ac5a182490b7", "files": [{"path": "backend/config.py", "Loc": {"(None, None, None)": {"mod": [6]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "comment", "loc_scope": "", "info_type": "Code"}, "loctype": {"code": ["backend/config.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "abi", "repo_name": "screenshot-to-code", "base_commit": "226af5bf4183539c97c7bab825cb9324b8c570c0", "iss_html_url": "https://github.com/abi/screenshot-to-code/issues/136", "iss_label": "", "title": "error generating code", "body": "Error generating code. Check the Developer Console AND the backend logs for details. Feel free to open a GitHub issue.\r\n\r\nWhile hitting the URL and pasting the screenshot, it shows the below error. Am I doing it correctly?\r\n<img width=\"940\" alt=\"Screenshot 2023-11-30 212304\" src=\"https://github.com/abi/screenshot-to-code/assets/152517537/38d9b1af-125b-45d4-9c4a-cbb600f5ec7d\">\r\n<img width=\"940\" alt=\"Screenshot 2023-11-30 212304\" src=\"https://github.com/abi/screenshot-to-code/assets/152517537/9c5bf85b-8109-44f7-842d-ec69dd2c49d0\">\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "226af5bf4183539c97c7bab825cb9324b8c570c0", "files": [{"path": "Troubleshooting.md", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [], "doc": ["Troubleshooting.md"], "test": [], "config": [], "asset": []}}, {"organization": "abi", "repo_name": "screenshot-to-code", "base_commit": "b9076120dc7f610ae4c9d0fdb2b3fbea39f371f1", "iss_html_url": "https://github.com/abi/screenshot-to-code/issues/452", "iss_label": "", "title": "build failed", "body": "**Describe the bug**\r\nDocker container Exited for `screenshot-to-code-main-frontend-1`\r\n\r\n**To Reproduce**\r\nOS: Ubuntu 22.04.4 LTS\r\nDocker Compose version v2.28.1\r\nBuild version: (commit id) b9076120dc7f610ae4c9d0fdb2b3fbea39f371f1\r\n\r\n\r\n**Screenshots of backend AND frontend terminal logs**\r\nNginx conf\r\n```\r\n    location /screenshot {\r\n        proxy_set_header Host $host;\r\n        proxy_set_header X-Real-IP $remote_addr;\r\n        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\r\n        proxy_set_header X-Forwarded-Proto $scheme;\r\n        proxy_send_timeout 1000;\r\n        proxy_read_timeout 1000;\r\n        send_timeout 1000;\r\n        client_max_body_size 5M;\r\n        proxy_pass http://127.0.0.1:5173;\r\n    }\r\n```\r\n```\r\n~ docker logs --tail 444 screenshot-to-code-main-frontend-1\r\nyarn run v1.22.22\r\n$ vite --host 0.0.0.0\r\n\r\n  VITE v4.5.0  ready in 1390 ms\r\n\r\n  ➜  Local:   http://localhost:5173/\r\n  ➜  Network: http://172.20.0.3:5173/\r\n\r\n ERROR \r\n[TypeScript] Found 0 errors. Watching for file changes.\r\n\r\n\r\n WARN  Browserslist: caniuse-lite is outdated.
Please run:\r\n npx update-browserslist-db@latest\r\n Why you should do it regularly: https://github.com/browserslist/update-db#readme\r\n\r\nfile:///app/tailwind.config.js:2\r\nmodule.exports = {\r\n^\r\n\r\nReferenceError: module is not defined\r\n at file:///app/tailwind.config.js:2:1\r\n at ModuleJobSync.runSync (node:internal/modules/esm/module_job:395:35)\r\n at ModuleLoader.importSyncForRequire (node:internal/modules/esm/loader:329:47)\r\n at loadESMFromCJS (node:internal/modules/cjs/loader:1414:24)\r\n at Module._compile (node:internal/modules/cjs/loader:1547:5)\r\n at Object..js (node:internal/modules/cjs/loader:1677:16)\r\n at Module.load (node:internal/modules/cjs/loader:1318:32)\r\n at Function._load (node:internal/modules/cjs/loader:1128:12)\r\n at TracingChannel.traceSync (node:diagnostics_channel:322:14)\r\n at wrapModuleLoad (node:internal/modules/cjs/loader:219:24)\r\n at Module.require (node:internal/modules/cjs/loader:1340:12)\r\n at require (node:internal/modules/helpers:138:16)\r\n at /app/node_modules/tailwindcss/lib/lib/load-config.js:35:27\r\n at loadConfig (/app/node_modules/tailwindcss/lib/lib/load-config.js:39:6)\r\n at getTailwindConfig (/app/node_modules/tailwindcss/lib/lib/setupTrackingContext.js:71:116)\r\n at /app/node_modules/tailwindcss/lib/lib/setupTrackingContext.js:100:92\r\n at /app/node_modules/tailwindcss/lib/processTailwindFeatures.js:48:11\r\n at plugins (/app/node_modules/tailwindcss/lib/plugin.js:38:69)\r\n at LazyResult.runOnRoot (/app/node_modules/postcss/lib/lazy-result.js:329:16)\r\n at LazyResult.runAsync (/app/node_modules/postcss/lib/lazy-result.js:258:26)\r\n at LazyResult.async (/app/node_modules/postcss/lib/lazy-result.js:160:30)\r\n at LazyResult.then (/app/node_modules/postcss/lib/lazy-result.js:404:17)\r\n\r\nNode.js v22.12.0\r\nerror Command failed with exit code 1.\r\ninfo Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.\r\n```\r\n![image](https://github.com/user-attachments/assets/498ddae4-247e-4641-811b-28b197c7aeef)\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "b9076120dc7f610ae4c9d0fdb2b3fbea39f371f1", "files": [{"path": "frontend/tailwind.config.js", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["frontend/tailwind.config.js"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "abi", "repo_name": "screenshot-to-code", "base_commit": "214163b0e02176333b5543740cf6262e5da99602", "iss_html_url": "https://github.com/abi/screenshot-to-code/issues/268", "iss_label": "", "title": "model evaluation method", "body": "How to evaluate the performance of the model on generalized data, such as comparing the original screenshots with the generated results? 
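One illustrative indicator, offered as an assumption rather than anything the repo is known to ship, is a structural-similarity (SSIM) score between the original screenshot and a screenshot of the rendered result; a minimal sketch using `scikit-image` and Pillow, with placeholder file names:

```python
# Sketch only: grayscale SSIM between the original screenshot and a
# screenshot of the generated page; 1.0 means structurally identical,
# lower means more divergence. scikit-image/Pillow as dependencies and
# the file names are assumptions for illustration.
import numpy as np
from PIL import Image
from skimage.metrics import structural_similarity as ssim


def screenshot_similarity(original_path: str, generated_path: str) -> float:
    original = np.asarray(Image.open(original_path).convert("L"))
    resized = Image.open(generated_path).convert("L").resize(
        (original.shape[1], original.shape[0])  # PIL expects (width, height)
    )
    return ssim(original, np.asarray(resized), data_range=255)


print(screenshot_similarity("original.png", "generated.png"))
```

SSIM is only a proxy for visual fidelity; it says nothing about the semantic correctness of the generated markup, so it is best paired with human review.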
Are there any indicators?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "214163b0e02176333b5543740cf6262e5da99602", "files": [{"path": "blog/evaluating-claude.md", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["blog/evaluating-claude.md"], "test": [], "config": [], "asset": []}}, {"organization": "abi", "repo_name": "screenshot-to-code", "base_commit": "b9076120dc7f610ae4c9d0fdb2b3fbea39f371f1", "iss_html_url": "https://github.com/abi/screenshot-to-code/issues/443", "iss_label": "", "title": "ReferenceError: module is not defined", "body": "When running the frontend yarn dev command, I get the error below.\r\n\r\n\r\nSteps to reproduce the behavior:\r\n1. Go to frontend folder\r\n2. execute: `yarn`\r\n3. execute: `yarn dev`\r\n\r\n\r\nImmediately after executing the yarn dev command, I get a message that says:\r\n```\r\n ERROR 16:31:02\r\n[TypeScript] Found 0 errors. Watching for file changes.\r\n```\r\n\r\nThen when I navigate to http://localhost:5173/, it crashes with the following output:\r\n\r\n```\r\n(base) user@192 frontend % yarn dev \r\nyarn run v1.22.22\r\nwarning ../../../package.json: No license field\r\n$ vite\r\n 16:31:00\r\n VITE v4.5.0 ready in 544 ms\r\n\r\n \u279c Local: http://localhost:5173/ 16:31:00\r\n \u279c Network: use --host to expose 16:31:00\r\n \u279c press h to show help 16:31:00\r\n\r\n ERROR 16:31:02\r\n[TypeScript] Found 0 errors. Watching for file changes.\r\n\r\n\r\n WARN Browserslist: caniuse-lite is outdated. Please run: 16:31:37\r\n npx update-browserslist-db@latest\r\n Why you should do it regularly: https://github.com/browserslist/update-db#readme\r\n\r\n\r\n ERROR (node:91140) ExperimentalWarning: CommonJS module /Users/user/Desktop/screenshot-to-code/frontend/node_modules/tailwindcss/lib/lib/load-config.js is loading ES Module /Users/user/Desktop/screenshot-to-code/frontend/tailwind.config.js using require().\r\nSupport for loading ES Module in require() is an experimental feature and might change at any time\r\n(Use `node --trace-warnings ...` to show where the warning was created)\r\n\r\nfile:///Users/user/Desktop/screenshot-to-code/frontend/tailwind.config.js:2\r\nmodule.exports = {\r\n^\r\n\r\nReferenceError: module is not defined\r\n at file:///Users/user/Desktop/screenshot-to-code/frontend/tailwind.config.js:2:1\r\n at ModuleJobSync.runSync (node:internal/modules/esm/module_job:395:35)\r\n at ModuleLoader.importSyncForRequire (node:internal/modules/esm/loader:329:47)\r\n at loadESMFromCJS (node:internal/modules/cjs/loader:1376:24)\r\n at Module._compile (node:internal/modules/cjs/loader:1528:5)\r\n at Object..js (node:internal/modules/cjs/loader:1698:10)\r\n at Module.load (node:internal/modules/cjs/loader:1303:32)\r\n at Function._load (node:internal/modules/cjs/loader:1117:12)\r\n at TracingChannel.traceSync (node:diagnostics_channel:322:14)\r\n at wrapModuleLoad (node:internal/modules/cjs/loader:218:24)\r\n at Module.require (node:internal/modules/cjs/loader:1325:12)\r\n at require (node:internal/modules/helpers:136:16)\r\n at /Users/user/Desktop/screenshot-to-code/frontend/node_modules/tailwindcss/lib/lib/load-config.js:35:27\r\n at loadConfig (/Users/user/Desktop/screenshot-to-code/frontend/node_modules/tailwindcss/lib/lib/load-config.js:39:6)\r\n at getTailwindConfig 
(/Users/user/Desktop/screenshot-to-code/frontend/node_modules/tailwindcss/lib/lib/setupTrackingContext.js:71:116)\r\n    at /Users/user/Desktop/screenshot-to-code/frontend/node_modules/tailwindcss/lib/lib/setupTrackingContext.js:100:92\r\n    at /Users/user/Desktop/screenshot-to-code/frontend/node_modules/tailwindcss/lib/processTailwindFeatures.js:48:11\r\n    at plugins (/Users/user/Desktop/screenshot-to-code/frontend/node_modules/tailwindcss/lib/plugin.js:38:69)\r\n    at LazyResult.runOnRoot (/Users/user/Desktop/screenshot-to-code/frontend/node_modules/postcss/lib/lazy-result.js:329:16)\r\n    at LazyResult.runAsync (/Users/user/Desktop/screenshot-to-code/frontend/node_modules/postcss/lib/lazy-result.js:258:26)\r\n    at LazyResult.async (/Users/user/Desktop/screenshot-to-code/frontend/node_modules/postcss/lib/lazy-result.js:160:30)\r\n    at LazyResult.then (/Users/user/Desktop/screenshot-to-code/frontend/node_modules/postcss/lib/lazy-result.js:404:17)\r\n\r\nNode.js v23.3.0\r\nerror Command failed with exit code 1.\r\ninfo Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.\r\n\r\n```\r\n\r\nEdit: I am running macOS 15.1 on an M2 chip.\r\nEdit 2: I only set the OpenAI key; I do not intend to use both APIs.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "b9076120dc7f610ae4c9d0fdb2b3fbea39f371f1", "files": [{"path": "frontend/tailwind.config.js", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["frontend/tailwind.config.js"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "abi", "repo_name": "screenshot-to-code", "base_commit": "1f08d71d4dbc614b6b2eaaddb6f8d5858ca6aa5b", "iss_html_url": "https://github.com/abi/screenshot-to-code/issues/132", "iss_label": "", "title": "Why Connection closed 1006", "body": "![image](https://github.com/abi/screenshot-to-code/assets/19514719/e8d6aa4c-e133-475d-bce6-7309082c0cc2)\r\n\r\n![image](https://github.com/abi/screenshot-to-code/assets/19514719/9e00d1ef-67e2-4e13-9276-4ea4119c12cc)\r\n\r\n![image](https://github.com/abi/screenshot-to-code/assets/19514719/a15e37ce-d0aa-4dfe-896d-3eb0a96a7e63)\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "1f08d71d4dbc614b6b2eaaddb6f8d5858ca6aa5b", "files": [{"path": "backend/main.py", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["backend/main.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "abi", "repo_name": "screenshot-to-code", "base_commit": "689783eabd552151fa511e44cba90c14f3ee4dcd", "iss_html_url": "https://github.com/abi/screenshot-to-code/issues/83", "iss_label": "", "title": "code error", "body": "Hi, I tried the [online version](https://picoapps.xyz/free-tools/screenshot-to-code) of your tool with my API key, but I got the error shown in the following screenshot\r\n\r\n![Web capture_22-11-2023_22822_www maras-it com](https://github.com/abi/screenshot-to-code/assets/482210/3c331d2e-cd22-4d65-8d4d-003468cd0c2e)\r\n\r\nwhich returns this in the console:\r\n\r\n```JS\r\nWebSocket error code CloseEvent {\r\n  isTrusted: true,\r\n  wasClean: false,\r\n  code: 1006,\r\n  reason: '',\r\n  type: 'close',\r\n  composed: false,\r\n  bubbles: false,\r\n  cancelBubble: false,\r\n  cancelable: false,\r\n  defaultPrevented: false,\r\n  eventPhase: 0,\r\n  returnValue: true,\r\n  timeStamp: 70399.80000001192,\r\n  currentTarget: WebSocket {url: 'wss://backend-screenshot-to-code.onrender.com/generate-code', readyState: 3, bufferedAmount: 0, onopen: null, onerror: null, …},\r\n  srcElement: WebSocket {url: 'wss://backend-screenshot-to-code.onrender.com/generate-code', readyState: 3, bufferedAmount: 0, onopen: null, onerror: null, …},\r\n  target: WebSocket {url: 'wss://backend-screenshot-to-code.onrender.com/generate-code', readyState: 3, bufferedAmount: 0, onopen: null, onerror: null, …},\r\n  [[Prototype]]: CloseEvent\r\n}\r\n(anonymous) @ index-9af3e78e.js:225\r\n```\r\n\r\n<img width=\"946\" alt=\"image\" src=\"https://github.com/abi/screenshot-to-code/assets/482210/b8403fbe-fc6b-479d-92ea-5f70610b3d6c\">\r\n\r\nany idea on that topic?\r\n\r\ndavid\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "689783eabd552151fa511e44cba90c14f3ee4dcd", "files": [{"path": "README.md", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []}}, {"organization": "abi", "repo_name": "screenshot-to-code", "base_commit": "7d6fde2deafa014dc1a90c3b1dcb2ed88680a2ff", "iss_html_url": "https://github.com/abi/screenshot-to-code/issues/1", "iss_label": "", "title": "Error: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte", "body": "Hello, thank you for your contribution, I am having the above problem, can you help me?\r\n\r\n` File \"<frozen codecs>\", line 322, in decode\r\nUnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte`", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [], "ass_file_loc": [".env"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "1", "info_type": "Other\nenvironment variable"}, "loctype": {"code": [], "doc": [], "test": [], "config": [".env"], "asset": []}}, {"organization": "abi", "repo_name": "screenshot-to-code", "base_commit": "fcd305d0d26e7ef7b93dd605cbd5ed0e1a5a5e9c", "iss_html_url": "https://github.com/abi/screenshot-to-code/issues/150", "iss_label": "", "title": "Error generating code. Check the Developer Console AND the backend logs for details", "body": "My ChatGPT has access to GPT-VISION, and the web app loads well, but when I upload an image it returns this error: 'Error generating code.
Check the Developer Console AND the backend logs for details'\r\n<img width=\"466\" alt=\"error\" src=\"https://github.com/abi/screenshot-to-code/assets/100529823/97c337b7-de54-45f9-8def-f984ade50a6d\">\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "fcd305d0d26e7ef7b93dd605cbd5ed0e1a5a5e9c", "files": [{"path": "docker-compose.yml", "Loc": {"(None, None, 20)": {"mod": [20]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": ["docker-compose.yml"], "test": [], "config": [], "asset": []}}, {"organization": "pytorch", "repo_name": "pytorch", "base_commit": "4622b3395276b37e10141fab43ffea33941ca0c2", "iss_html_url": "https://github.com/pytorch/pytorch/issues/2384", "iss_label": "", "title": "How the grad is transferred between layer", "body": "consider a simple example here:\r\n```python\r\nimport torch\r\nfrom torch.autograd import Variable\r\n\r\ninput = Variable(torch.randn(20, 3, 28, 28), requires_grad=True)\r\nm = torch.nn.Conv2d(3, 16, 5)\r\noutput = m(input)\r\n\r\nloss = torch.sum(output)# define loss to perform backprop\r\nm.zero_grad()\r\nloss.backward()\r\n\r\nprint(type(input))\r\nprint(input.grad.size())\r\nprint(type(output))\r\nprint(output.grad)\r\n```\r\nthe output is:\r\n```\r\n<class 'torch.autograd.variable.Variable'>\r\ntorch.Size([20, 3, 28, 28])\r\n<class 'torch.autograd.variable.Variable'>\r\nNone\r\n```\r\nI find the `output.grad` is `None`. I don't know how the `input.grad` is calculated without `output.grad`.\r\nand want to know how to get the values of `output.grad`.\r\n\r\nthanks!", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "4622b3395276b37e10141fab43ffea33941ca0c2", "files": [{"path": "torch/autograd/variable.py", "Loc": {"('Variable', 'retain_grad', 236)": {"mod": []}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["torch/autograd/variable.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "pytorch", "repo_name": "pytorch", "base_commit": "2abcafcfd8beb4f6a22e08532d58f9f09c490f0f", "iss_html_url": "https://github.com/pytorch/pytorch/issues/96983", "iss_label": "module: binaries\ntriaged\nmodule: arm", "title": "PyTorch 2.0 aarch64 wheels are missing the mkldnn+acl backend support", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nPyTorch 2.0 aarch64 wheels are missing the mkldnn+acl backend support, where as PyTorch 1.13.0 had support.\r\n\r\nSolution:\r\nthe wheels need to be built with the `--enable-mkldnn` option while building them from the pytorch/builder repo.\r\n\r\nexample command for pytorch wheel builder script:\r\n`./build_aarch64_wheel.py --python-version 3.8 --use-docker --keep-running --os ubuntu20_04 --enable-mkldnn --branch release/2.0`\r\n\r\nTo reproduce the issue, create c6g or c7g instance from AWS EC2, and in the below output, look for `USE_MKLDNN=`, this was ON for PyTorch 1.13.0 but OFF for PyTorch2.0.0.\r\n\r\nnon-working scenario\r\n```\r\npip install torch==2.0.0\r\n\r\ntime python3 -c \"import torch; torch.set_num_threads(8); print(torch.__version__, torch.__config__.show(), torch.get_num_threads());a=torch.rand(100, 100, 100); b=torch.rand(100,100, 100); [torch.bmm(a,b).sum() 
for i in range(1000)]\"\r\n2.0.0 PyTorch built with:\r\n - GCC 10.2\r\n - C++ Version: 201703\r\n - OpenMP 201511 (a.k.a. OpenMP 4.5)\r\n - LAPACK is enabled (usually provided by MKL)\r\n - NNPACK is enabled\r\n - CPU capability usage: NO AVX\r\n - Build settings: BLAS_INFO=open, BUILD_TYPE=Release, CXX_COMPILER=/opt/rh/devtoolset-10/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DLIBKINETO_NOROCTRACER -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=open, TORCH_DISABLE_GPU_ASSERTS=ON, TORCH_VERSION=2.0.0, USE_CUDA=OFF, USE_CUDNN=OFF, USE_EIGEN_FOR_BLAS=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=OFF, USE_MKLDNN=OFF, USE_MPI=OFF, USE_NCCL=OFF, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, \r\n\r\n```\r\n\r\nworking scenario:\r\n\r\n```\r\npip3 install torch==1.13.0\r\n\r\ntime python3 -c \"import torch; torch.set_num_threads(8); print(torch.__version__, torch.__config__.show(), torch.get_num_threads());a=torch.rand(100, 100, 100); b=torch.rand(100,100, 100); [torch.bmm(a,b).sum() for i in range(1000)]\"\r\n\r\n1.13.0 PyTorch built with:\r\n - GCC 10.2\r\n - C++ Version: 201402\r\n - Intel(R) MKL-DNN v2.6.0 (Git Hash 52b5f107dd9cf10910aaa19cb47f3abf9b349815)\r\n - OpenMP 201511 (a.k.a. 
OpenMP 4.5)\r\n - LAPACK is enabled (usually provided by MKL)\r\n - NNPACK is enabled\r\n - CPU capability usage: NO AVX\r\n - Build settings: BLAS_INFO=open, BUILD_TYPE=Release, CXX_COMPILER=/opt/rh/devtoolset-10/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=open, TORCH_VERSION=1.13.0, USE_CUDA=OFF, USE_CUDNN=OFF, USE_EIGEN_FOR_BLAS=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=OFF, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=OFF, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, \r\n \r\n\r\n\r\n```\r\n\r\n### Versions\r\n```\r\nCollecting environment information...\r\nPyTorch version: 2.0.0\r\nIs debug build: False\r\nCUDA used to build PyTorch: None\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Ubuntu 20.04.5 LTS (aarch64)\r\nGCC version: (Ubuntu 10.3.0-1ubuntu1~20.04) 10.3.0\r\nClang version: Could not collect\r\nCMake version: version 3.25.2\r\nLibc version: glibc-2.31\r\n\r\nPython version: 3.8.10 (default, Nov 14 2022, 12:59:47) [GCC 9.4.0] (64-bit runtime)\r\nPython platform: Linux-5.15.0-1028-aws-aarch64-with-glibc2.29\r\nIs CUDA available: False\r\nCUDA runtime version: No CUDA\r\nCUDA_MODULE_LOADING set to: N/A\r\nGPU models and configuration: No CUDA\r\nNvidia driver version: No CUDA\r\ncuDNN version: No CUDA\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nCPU:\r\nArchitecture: aarch64\r\nCPU op-mode(s): 32-bit, 64-bit\r\nByte Order: Little Endian\r\nCPU(s): 16\r\nOn-line CPU(s) list: 0-15\r\nThread(s) per core: 1\r\nCore(s) per socket: 16\r\nSocket(s): 1\r\nNUMA node(s): 1\r\nVendor ID: ARM\r\nModel: 1\r\nStepping: r1p1\r\nBogoMIPS: 2100.00\r\nL1d cache: 1 MiB\r\nL1i cache: 1 MiB\r\nL2 cache: 16 MiB\r\nL3 cache: 32 MiB\r\nNUMA node0 CPU(s): 0-15\r\nVulnerability Itlb multihit: Not affected\r\nVulnerability L1tf: Not affected\r\nVulnerability Mds: Not affected\r\nVulnerability Meltdown: Not affected\r\nVulnerability Mmio stale data: Not affected\r\nVulnerability Retbleed: Not affected\r\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl\r\nVulnerability Spectre v1: Mitigation; __user pointer sanitization\r\nVulnerability Spectre v2: Mitigation; CSV2, BHB\r\nVulnerability Srbds: Not affected\r\nVulnerability Tsx async abort: Not affected\r\nFlags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm ssbs paca pacg dcpodp svei8mm svebf16 i8mm bf16 dgh rng\r\n\r\nVersions of relevant libraries:\r\n[pip3] numpy==1.24.2\r\n[pip3] torch==2.0.0\r\n[pip3] torchvision==0.14.1\r\n[conda] Could not collect\r\n```\r\n\r\ncc @ezyang @seemethere 
@malfet", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "2abcafcfd8beb4f6a22e08532d58f9f09c490f0f", "files": [{"path": ".ci/aarch64_linux/build_aarch64_wheel.py", "Loc": {"(None, None, None)": {"mod": [8]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [".ci/aarch64_linux/build_aarch64_wheel.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "pytorch", "repo_name": "pytorch", "base_commit": "2dff0b3e918530719f7667cb31541f036a25e3f2", "iss_html_url": "https://github.com/pytorch/pytorch/issues/48435", "iss_label": "", "title": "AttributeError: module 'torch.cuda' has no attribute 'comm'", "body": "## \u2753 Questions and Help\r\n\r\nI'm using torch 1.7.0, and get this kind of error\r\n\r\nmy torch is installed via \r\n\r\npip install torch==1.7.0+cu101 torchvision==0.8.1+cu101 torchaudio===0.7.0 -f https://download.pytorch.org/whl/torch_stable.html\r\n\r\nmy os is win10", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/facebookresearch/InterHand2.6M/commit/874eb9f740ef54c275433d1bd27f8fb8f6a8f17d", "file_loc": {}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "facebookresearch", "pro": "InterHand2.6M", "path": ["{'base_commit': '874eb9f740ef54c275433d1bd27f8fb8f6a8f17d', 'files': [{'path': 'common/nets/module.py', 'status': 'modified', 'Loc': {('PoseNet', 'soft_argmax_1d', 41): {'mod': [43]}}}]}"]}], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "2", "info_type": "Code"}, "loctype": {"code": ["common/nets/module.py"], "doc": [], "test": [], "config": [], "asset": ["InterHand2.6M"]}}, {"organization": "xtekky", "repo_name": "gpt4free", "base_commit": "e8f6013d0349229fd8f7d298952cfe56fc4b8761", "iss_html_url": "https://github.com/xtekky/gpt4free/issues/2070", "iss_label": "bug\nstale", "title": "Liaobots and You don't work", "body": "Liaobots and You do not work, they give the following errors:\r\n\r\n```\r\nLiaobots: ResponseStatusError: Response 500: Error\r\n``` \r\n\r\n```\r\nYou: ResponseStatusError: Response 401: {\"status_code\":401,\"request_id\":\"request-id-live-183191e7-adc1-4838-8e29-6e0c5c3ca048\",\"error_type\":\"endpoint_not_authorized_for_sdk\",\"error_message\":\"The project owner has not authorized the SDK to call this endpoint. 
Please enable it in the dashboard to continue: https://stytch.com/dashboard/sdk-configuration.\",\"error_url\":\"https://stytch.com/docs/api/errors/401#endpoint_not_authorized_for_sdk\"}\r\n``` \r\n@xtekky @hlohaus ", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "e8f6013d0349229fd8f7d298952cfe56fc4b8761", "files": [{"path": "g4f/Provider/Liaobots.py", "Loc": {"('Liaobots', 'create_async_generator', 111)": {"mod": [149]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["g4f/Provider/Liaobots.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "xtekky", "repo_name": "gpt4free", "base_commit": "fa2d608822540c9b73350bfa036e8822ade4e23f", "iss_html_url": "https://github.com/xtekky/gpt4free/issues/2305", "iss_label": "stale", "title": "ValueError: Unknown model: dall-e-3", "body": "```\r\nC:\\Users\\MAX\\Desktop>pip install -U g4f[all]\r\nDefaulting to user installation because normal site-packages is not writeable\r\nRequirement already satisfied: g4f[all] in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (0.3.3.2)\r\nRequirement already satisfied: requests in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (2.32.3)\r\nRequirement already satisfied: aiohttp in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (3.9.3)\r\nRequirement already satisfied: brotli in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (1.1.0)\r\nRequirement already satisfied: pycryptodome in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (3.20.0)\r\nRequirement already satisfied: curl-cffi>=0.6.2 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (0.7.3)\r\nRequirement already satisfied: cloudscraper in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (1.2.71)\r\nRequirement already satisfied: certifi in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (2024.8.30)\r\nRequirement already satisfied: browser-cookie3 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (0.19.1)\r\nRequirement already satisfied: PyExecJS in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (1.5.1)\r\nRequirement already satisfied: duckduckgo-search>=5.0 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages 
(from g4f[all]) (6.3.2)\r\nRequirement already satisfied: beautifulsoup4 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (4.12.3)\r\nRequirement already satisfied: pywebview in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (5.2)\r\nRequirement already satisfied: platformdirs in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (4.2.2)\r\nRequirement already satisfied: plyer in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (2.1.0)\r\nRequirement already satisfied: cryptography in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (43.0.0)\r\nRequirement already satisfied: aiohttp-socks in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (0.8.4)\r\nRequirement already satisfied: pillow in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (10.2.0)\r\nRequirement already satisfied: cairosvg in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (2.7.1)\r\nRequirement already satisfied: werkzeug in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (3.0.1)\r\nRequirement already satisfied: flask in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (3.0.2)\r\nRequirement already satisfied: loguru in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (0.7.2)\r\nRequirement already satisfied: fastapi in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (0.109.2)\r\nRequirement already satisfied: uvicorn in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (0.27.0.post1)\r\nRequirement already satisfied: nest-asyncio in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (1.6.0)\r\nRequirement already satisfied: cffi>=1.12.0 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from curl-cffi>=0.6.2->g4f[all]) (1.17.0)\r\nRequirement already satisfied: typing-extensions in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from 
curl-cffi>=0.6.2->g4f[all]) (4.12.2)\r\nRequirement already satisfied: click>=8.1.7 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from duckduckgo-search>=5.0->g4f[all]) (8.1.7)\r\nRequirement already satisfied: primp>=0.6.4 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from duckduckgo-search>=5.0->g4f[all]) (0.6.4)\r\nRequirement already satisfied: aiosignal>=1.1.2 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from aiohttp->g4f[all]) (1.3.1)\r\nRequirement already satisfied: attrs>=17.3.0 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from aiohttp->g4f[all]) (23.2.0)\r\nRequirement already satisfied: frozenlist>=1.1.1 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from aiohttp->g4f[all]) (1.4.1)\r\nRequirement already satisfied: multidict<7.0,>=4.5 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from aiohttp->g4f[all]) (6.0.5)\r\nRequirement already satisfied: yarl<2.0,>=1.0 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from aiohttp->g4f[all]) (1.9.4)\r\nRequirement already satisfied: python-socks<3.0.0,>=2.4.3 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from python-socks[asyncio]<3.0.0,>=2.4.3->aiohttp-socks->g4f[all]) (2.4.4)\r\nRequirement already satisfied: soupsieve>1.2 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from beautifulsoup4->g4f[all]) (2.5)\r\nRequirement already satisfied: lz4 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from browser-cookie3->g4f[all]) (4.3.3)\r\nRequirement already satisfied: pycryptodomex in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from browser-cookie3->g4f[all]) (3.20.0)\r\nRequirement already satisfied: cairocffi in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from cairosvg->g4f[all]) (1.6.1)\r\nRequirement already satisfied: cssselect2 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from cairosvg->g4f[all]) (0.7.0)\r\nRequirement already satisfied: defusedxml in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from cairosvg->g4f[all]) (0.7.1)\r\nRequirement already satisfied: tinycss2 in 
c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from cairosvg->g4f[all]) (1.2.1)\r\nRequirement already satisfied: pyparsing>=2.4.7 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from cloudscraper->g4f[all]) (3.1.2)\r\nRequirement already satisfied: requests-toolbelt>=0.9.1 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from cloudscraper->g4f[all]) (1.0.0)\r\nRequirement already satisfied: charset-normalizer<4,>=2 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from requests->g4f[all]) (3.3.2)\r\nRequirement already satisfied: idna<4,>=2.5 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from requests->g4f[all]) (3.6)\r\nRequirement already satisfied: urllib3<3,>=1.21.1 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from requests->g4f[all]) (2.1.0)\r\nRequirement already satisfied: pydantic!=1.8,!=1.8.1,!=2.0.0,!=2.0.1,!=2.1.0,<3.0.0,>=1.7.4 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from fastapi->g4f[all]) (2.6.1)\r\nRequirement already satisfied: starlette<0.37.0,>=0.36.3 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from fastapi->g4f[all]) (0.36.3)\r\nRequirement already satisfied: Jinja2>=3.1.2 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from flask->g4f[all]) (3.1.3)\r\nRequirement already satisfied: itsdangerous>=2.1.2 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from flask->g4f[all]) (2.1.2)\r\nRequirement already satisfied: blinker>=1.6.2 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from flask->g4f[all]) (1.7.0)\r\nRequirement already satisfied: MarkupSafe>=2.1.1 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from werkzeug->g4f[all]) (2.1.5)\r\nRequirement already satisfied: colorama>=0.3.4 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from loguru->g4f[all]) (0.4.6)\r\nRequirement already satisfied: win32-setctime>=1.0.0 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from loguru->g4f[all]) (1.1.0)\r\nRequirement already satisfied: six>=1.10.0 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from PyExecJS->g4f[all]) (1.16.0)\r\nRequirement already 
satisfied: proxy-tools in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from pywebview->g4f[all]) (0.1.0)\r\nRequirement already satisfied: bottle in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from pywebview->g4f[all]) (0.13.1)\r\nRequirement already satisfied: pythonnet in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from pywebview->g4f[all]) (3.0.3)\r\nRequirement already satisfied: h11>=0.8 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from uvicorn->g4f[all]) (0.14.0)\r\nRequirement already satisfied: pycparser in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from cffi>=1.12.0->curl-cffi>=0.6.2->g4f[all]) (2.22)\r\nRequirement already satisfied: annotated-types>=0.4.0 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from pydantic!=1.8,!=1.8.1,!=2.0.0,!=2.0.1,!=2.1.0,<3.0.0,>=1.7.4->fastapi->g4f[all]) (0.6.0)\r\nRequirement already satisfied: pydantic-core==2.16.2 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from pydantic!=1.8,!=1.8.1,!=2.0.0,!=2.0.1,!=2.1.0,<3.0.0,>=1.7.4->fastapi->g4f[all]) (2.16.2)\r\nRequirement already satisfied: async-timeout>=3.0.1 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from python-socks[asyncio]<3.0.0,>=2.4.3->aiohttp-socks->g4f[all]) (4.0.3)\r\nRequirement already satisfied: anyio<5,>=3.4.0 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from starlette<0.37.0,>=0.36.3->fastapi->g4f[all]) (4.2.0)\r\nRequirement already satisfied: webencodings in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from cssselect2->cairosvg->g4f[all]) (0.5.1)\r\nRequirement already satisfied: clr-loader<0.3.0,>=0.2.6 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from pythonnet->pywebview->g4f[all]) (0.2.6)\r\nRequirement already satisfied: sniffio>=1.1 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from anyio<5,>=3.4.0->starlette<0.37.0,>=0.36.3->fastapi->g4f[all]) (1.3.0)\r\n\r\nC:\\Users\\MAX\\Desktop>\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\MAX\\Desktop\\gptimg.py\", line 4, in <module>\r\n response = client.images.generate(\r\n ^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\MAX\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python312\\site-packages\\g4f\\client\\client.py\", line 421, in generate\r\n return asyncio.run(self.async_generate(prompt, model, 
response_format=response_format, **kwargs))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.12_3.12.2032.0_x64__qbz5n2kfra8p0\\Lib\\asyncio\\runners.py\", line 194, in run\r\n return runner.run(main)\r\n ^^^^^^^^^^^^^^^^\r\n File \"C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.12_3.12.2032.0_x64__qbz5n2kfra8p0\\Lib\\asyncio\\runners.py\", line 118, in run\r\n return self._loop.run_until_complete(task)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.12_3.12.2032.0_x64__qbz5n2kfra8p0\\Lib\\asyncio\\base_events.py\", line 687, in run_until_complete\r\n return future.result()\r\n ^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\MAX\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python312\\site-packages\\g4f\\client\\client.py\", line 426, in async_generate\r\n raise ValueError(f\"Unknown model: {model}\")\r\nValueError: Unknown model: dall-e-3\r\n```", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "fa2d608822540c9b73350bfa036e8822ade4e23f", "files": [{"path": "g4f/models.py", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["g4f/models.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "xtekky", "repo_name": "gpt4free", "base_commit": "1ade1d959cbc9aea7cf653bbe5b6c414ba486c97", "iss_html_url": "https://github.com/xtekky/gpt4free/issues/1292", "iss_label": "bug\nstale", "title": "RecursionError: maximum recursion depth exceeded while calling a Python object", "body": "Ubuntu 22, g4f-0.1.9.0, pip installation method, python3.10\r\n\r\n**Bug description**\r\nG4F API has these errors after 5-10 requests. I have to restart constantly. It is very inconvenient. 
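(A stopgap sketch, not a fix: the crash is a recursion overflow from nested retry and exception handling, so raising the interpreter's recursion limit before starting the API only postpones it.)\r\n\r\n```python\r\nimport sys\r\n\r\n# Assumed workaround, not a real fix: enlarge the default recursion limit\r\n# (1000) so deeply nested retry and exception chains take longer to hit it.\r\nsys.setrecursionlimit(10000)\r\n```\r\n\r\n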
This problem did not exist in the previous version.\r\n\r\n**Errors**\r\n```\r\nRecursionError: maximum recursion depth exceeded in comparison\r\nRecursionError: maximum recursion depth exceeded while calling a Python object\r\nRuntimeError: RetryProvider failed:\r\nYou: RecursionError: maximum recursion depth exceeded\r\nChatgpt4Online: RecursionError: maximum recursion depth exceeded in comparison\r\nChatAnywhere: RecursionError: maximum recursion depth exceeded while encoding a JSON object\r\nChatgptX: RecursionError: maximum recursion depth exceeded in comparison\r\nGptForLove: RuntimeUnavailableError: Could not find an available JavaScript runtime.\r\nChatBase: RecursionError: maximum recursion depth exceeded while encoding a JSON object\r\nGptGo: RecursionError: maximum recursion depth exceeded while calling a Python object\r\n```\r\n\r\n**Traceback**\r\n```\r\nERROR: Exception in ASGI application\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.10/dist-packages/g4f/api/__init__.py\", line 85, in chat_completions\r\n response = g4f.ChatCompletion.create(\r\n File \"/usr/local/lib/python3.10/dist-packages/g4f/__init__.py\", line 76, in create\r\n return result if stream else ''.join(result)\r\n File \"/usr/local/lib/python3.10/dist-packages/g4f/Provider/retry_provider.py\", line 59, in create_completion\r\n self.raise_exceptions()\r\n File \"/usr/local/lib/python3.10/dist-packages/g4f/Provider/retry_provider.py\", line 87, in raise_exceptions\r\n raise RuntimeError(\"\\n\".join([\"RetryProvider failed:\"] + [\r\nRuntimeError: RetryProvider failed:\r\nChatAnywhere: RecursionError: maximum recursion depth exceeded\r\nChatBase: RecursionError: maximum recursion depth exceeded\r\nChatgptX: RecursionError: maximum recursion depth exceeded\r\nYou: RecursionError: maximum recursion depth exceeded while calling a Python object\r\nGptGo: RecursionError: maximum recursion depth exceeded\r\nChatgpt4Online: RecursionError: maximum recursion depth exceeded\r\nGptForLove: RecursionError: maximum recursion depth exceeded\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.10/dist-packages/uvicorn/protocols/http/h11_impl.py\", line 408, in run_asgi\r\n result = await app( # type: ignore[func-returns-value]\r\n File \"/usr/local/lib/python3.10/dist-packages/uvicorn/middleware/proxy_headers.py\", line 84, in __call__\r\n return await self.app(scope, receive, send)\r\n File \"/usr/local/lib/python3.10/dist-packages/fastapi/applications.py\", line 1106, in __call__\r\n await super().__call__(scope, receive, send)\r\n File \"/usr/local/lib/python3.10/dist-packages/starlette/applications.py\", line 122, in __call__\r\n await self.middleware_stack(scope, receive, send)\r\n File \"/usr/local/lib/python3.10/dist-packages/starlette/middleware/errors.py\", line 184, in __call__\r\n raise exc\r\n File \"/usr/local/lib/python3.10/dist-packages/starlette/middleware/errors.py\", line 162, in __call__\r\n await self.app(scope, receive, _send)\r\n File \"/usr/local/lib/python3.10/dist-packages/starlette/middleware/exceptions.py\", line 79, in __call__\r\n raise exc\r\n File \"/usr/local/lib/python3.10/dist-packages/starlette/middleware/exceptions.py\", line 68, in __call__\r\n await self.app(scope, receive, sender)\r\n File \"/usr/local/lib/python3.10/dist-packages/fastapi/middleware/asyncexitstack.py\", line 20, in __call__\r\n raise e\r\n File 
\"/usr/local/lib/python3.10/dist-packages/fastapi/middleware/asyncexitstack.py\", line 17, in __call__\r\n await self.app(scope, receive, send)\r\n File \"/usr/local/lib/python3.10/dist-packages/starlette/routing.py\", line 718, in __call__\r\n await route.handle(scope, receive, send)\r\n File \"/usr/local/lib/python3.10/dist-packages/starlette/routing.py\", line 276, in handle\r\n await self.app(scope, receive, send)\r\n File \"/usr/local/lib/python3.10/dist-packages/starlette/routing.py\", line 66, in app\r\n response = await func(request)\r\n File \"/usr/local/lib/python3.10/dist-packages/fastapi/routing.py\", line 274, in app\r\n raw_response = await run_endpoint_function(\r\n File \"/usr/local/lib/python3.10/dist-packages/fastapi/routing.py\", line 191, in run_endpoint_function\r\n return await dependant.call(**values)\r\n File \"/usr/local/lib/python3.10/dist-packages/g4f/api/__init__.py\", line 91, in chat_completions\r\n logging.exception(e)\r\n File \"/usr/lib/python3.10/logging/__init__.py\", line 2113, in exception\r\n error(msg, *args, exc_info=exc_info, **kwargs)\r\n File \"/usr/lib/python3.10/logging/__init__.py\", line 2105, in error\r\n root.error(msg, *args, **kwargs)\r\n File \"/usr/lib/python3.10/logging/__init__.py\", line 1506, in error\r\n self._log(ERROR, msg, args, **kwargs)\r\n File \"/usr/lib/python3.10/logging/__init__.py\", line 1624, in _log\r\n self.handle(record)\r\n File \"/usr/lib/python3.10/logging/__init__.py\", line 1634, in handle\r\n self.callHandlers(record)\r\n File \"/usr/lib/python3.10/logging/__init__.py\", line 1696, in callHandlers\r\n hdlr.handle(record)\r\n File \"/usr/lib/python3.10/logging/__init__.py\", line 968, in handle\r\n self.emit(record)\r\n File \"/usr/lib/python3.10/logging/__init__.py\", line 1100, in emit\r\n msg = self.format(record)\r\n File \"/usr/lib/python3.10/logging/__init__.py\", line 943, in format\r\n return fmt.format(record)\r\n File \"/usr/lib/python3.10/logging/__init__.py\", line 686, in format\r\n record.exc_text = self.formatException(record.exc_info)\r\n File \"/usr/lib/python3.10/logging/__init__.py\", line 636, in formatException\r\n traceback.print_exception(ei[0], ei[1], tb, None, sio)\r\n File \"/usr/lib/python3.10/traceback.py\", line 120, in print_exception\r\n for line in te.format(chain=chain):\r\n File \"/usr/local/lib/python3.10/dist-packages/exceptiongroup/_formatting.py\", line 248, in format\r\n yield from _ctx.emit(exc.format_exception_only())\r\n File \"/usr/local/lib/python3.10/dist-packages/exceptiongroup/_formatting.py\", line 64, in emit\r\n for text in text_gen:\r\n File \"/usr/local/lib/python3.10/dist-packages/exceptiongroup/_formatting.py\", line 335, in format_exception_only\r\n if isinstance(self.__notes__, collections.abc.Sequence):\r\n File \"/usr/lib/python3.10/abc.py\", line 119, in __instancecheck__\r\n return _abc_instancecheck(cls, instance)\r\nRecursionError: maximum recursion depth exceeded in comparison\r\n```\r\n\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "1ade1d959cbc9aea7cf653bbe5b6c414ba486c97", "files": [{"path": "g4f/cli.py", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["g4f/cli.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "xtekky", "repo_name": "gpt4free", "base_commit": "c159eebd494b1aef06340429b7b62cdfb84f783d", 
"iss_html_url": "https://github.com/xtekky/gpt4free/issues/2556", "iss_label": "bug", "title": "Errors when generating images in the following models:", "body": "Hi!\r\nerrors when generating images in the following models:\r\nResponse 404: The page could not be found\r\nsdxl, playground-v2.5, sd-3\r\n\r\n dall-e-3: Missing \"_U\" cookie\r\n \r\n midjourney: Cannot connect to host image.pollinations.ai:443 ssl:True [SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:997)')]", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "c159eebd494b1aef06340429b7b62cdfb84f783d", "files": [{"path": "projects/windows/main.py", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["projects/windows/main.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "xtekky", "repo_name": "gpt4free", "base_commit": "b7eee50930dbd782d7c068d1d29cd270b97bc741", "iss_html_url": "https://github.com/xtekky/gpt4free/issues/1710", "iss_label": "bug\nstale", "title": "AttributeError: module 'g4f' has no attribute 'client'", "body": "**Bug description** \r\nWhen trying to run script from Quickstart, i get this error.\r\n\r\nTraceback (most recent call last):\r\n File \"C:/Users/samso/AppData/Local/Programs/Python/Python311/test.py\", line 3, in <module>\r\n engine = g4f.client.Client()\r\nAttributeError: module 'g4f' has no attribute 'client'\r\n\r\n**Environment**\r\nPython version: 3.11.7", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "b7eee50930dbd782d7c068d1d29cd270b97bc741", "files": [{"path": "g4f/client/__init__.py", "Loc": {}}, {"path": "C:/Users/samso/AppData/Local/Programs/Python/Python311/test.py"}]}, "own_code_loc": [{"path": "C:/Users/samso/AppData/Local/Programs/Python/Python311/test.py"}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": ["g4f/client/__init__.py"], "doc": [], "test": ["C:/Users/samso/AppData/Local/Programs/Python/Python311/test.py"], "config": [], "asset": []}}, {"organization": "xtekky", "repo_name": "gpt4free", "base_commit": "2a54c36043b9d87b96c4b7699ce194f8523479b8", "iss_html_url": "https://github.com/xtekky/gpt4free/issues/552", "iss_label": "bug", "title": "Unable to fetch the response, Please try again.", "body": "![IMG_20230514_171809.jpg](https://github.com/xtekky/gpt4free/assets/29172927/6263b9db-3362-4c5b-b043-80b62213a61b)\n\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "2a54c36043b9d87b96c4b7699ce194f8523479b8", "files": [{"path": "gpt4free/you/__init__.py", "Loc": {"('Completion', 'create', 22)": {"mod": [41]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["gpt4free/you/__init__.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "xtekky", "repo_name": "gpt4free", "base_commit": "c29487cdb522a2655ccff45bdfc33895ed4daf84", "iss_html_url": "https://github.com/xtekky/gpt4free/issues/2078", "iss_label": "bug", "title": "HuggingChat provider is not working - ResponseStatusError: Response 
500", "body": "### Bug description\r\n\r\nWhen I try to use the HuggingChat provider, having added a cookies/har file, I always get the same error: `An error occurred: HuggingChat: ResponseStatusError: Response 500:`\r\n\r\n```\r\nUsing HuggingChat provider and CohereForAI/c4ai-command-r-plus model\r\nINFO:werkzeug:192.168.80.1 - - [22/Jun/2024 16:31:48] \"POST /backend-api/v2/conversation HTTP/1.1\" 200 -\r\nERROR:root:Response 500: \r\nTraceback (most recent call last):\r\n File \"/app/g4f/gui/server/api.py\", line 177, in _create_response_stream\r\n for chunk in ChatCompletion.create(**kwargs):\r\n File \"/app/g4f/providers/base_provider.py\", line 223, in create_completion\r\n yield loop.run_until_complete(await_callback(gen.__anext__))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/lib/python3.11/asyncio/base_events.py\", line 654, in run_until_complete\r\n return future.result()\r\n ^^^^^^^^^^^^^^^\r\n File \"/app/g4f/providers/base_provider.py\", line 52, in await_callback\r\n return await callback()\r\n ^^^^^^^^^^^^^^^^\r\n File \"/app/g4f/Provider/HuggingChat.py\", line 99, in create_async_generator\r\n await raise_for_status(response)\r\n File \"/app/g4f/requests/raise_for_status.py\", line 28, in raise_for_status_async\r\n raise ResponseStatusError(f\"Response {response.status}: {message}\")\r\ng4f.errors.ResponseStatusError: Response 500:\r\n```\r\n\r\n### Steps to reproduce\r\n\r\n1. Put your cookies json file / har file for `huggingface.co` in the `har_and_cookies` directory\r\n2. Run gpt4free in Docker using docker compose\r\n3. Open g4f web ui (using OpenAI compatible API (port `1337`) gives the same error, though)\r\n4. Select this provider: `HuggingChat (Auth)`\r\n5. Select any model, for example `CohereForAI/c4ai-command-r-plus`\r\n6. Send any message to the LLM\r\n7. 
See the error\r\n\r\n### Screenshot\r\n\r\n![image](https://github.com/xtekky/gpt4free/assets/35491968/7afaf19b-4af2-4703-8bf3-c4c02eb511fc)\r\n\r\n### Environment\r\n\r\n- gpt4free version 0.3.2.0 (this git repository, commit `e8f6013d`)\r\n- docker compose\r\n- Ubuntu 22.04.4 LTS x86_64\r\n\r\n-----\r\n\r\nDuplicates https://github.com/xtekky/gpt4free/issues/2053, which is closed", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "c29487cdb522a2655ccff45bdfc33895ed4daf84", "files": [{"path": "g4f/Provider/HuggingChat.py", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["g4f/Provider/HuggingChat.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "Z4nzu", "repo_name": "hackingtool", "base_commit": "c81c08c1e9b847b9d1dcdc5b0a90d5de92d7b75e", "iss_html_url": "https://github.com/Z4nzu/hackingtool/issues/68", "iss_label": "question", "title": "default username and password of social fish", "body": "Hey man, the tool works fine, but what is the default username and password of SocialFish?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "c81c08c1e9b847b9d1dcdc5b0a90d5de92d7b75e", "files": [{"path": "README.md", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "f7026b04f5e5909aa15848b25de2becd675871a9", "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/2475", "iss_label": "", "title": "Multinomial Naive Bayes: Scikit and Weka have different results", "body": "Hi All,\nI used the sklearn.naive_bayes.MultinomialNB on a toy example.\nComparing the results with WEKA, I've noticed quite a different AUC:\nScikit (0.579) - Weka (0.664)\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "f7026b04f5e5909aa15848b25de2becd675871a9", "files": [{"path": "sklearn/cross_validation.py", "Loc": {"(None, 'cross_val_score', 1075)": {"mod": []}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["sklearn/cross_validation.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "0ab5c678bba02888b62b777b4c757e367b3458d5", "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/8470", "iss_label": "", "title": "How to let gbdt = GradientBoostingRegressor(), gbdt.fit(X_feature, X_label) know whether the feature of input X is categorical or numerical?", "body": "", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "0ab5c678bba02888b62b777b4c757e367b3458d5", "files": [{"path": "sklearn/preprocessing/_encoders.py", "Loc": {"('OneHotEncoder', None, 151)": {"mod": []}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["sklearn/preprocessing/_encoders.py"], 
"doc": [], "test": [], "config": [], "asset": []}}, {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "184f2dba255f279697cb1d7567428b3e6403c2d0", "iss_html_url": "https://github.com/pandas-dev/pandas/issues/3209", "iss_label": "", "title": "BUG: read_csv: dtype={'id' : np.str}: Datatype not understood", "body": "I have a CSV with several columns. The first of which is a field called `id` with entries of the type `0001`, `0002`, etc. \n\nWhen loading this file, the following works:\n\n``` python\npd.read_csv(my_path, dtype={'id' : np.int})\n```\n\nbut the following doesn't:\n\n``` python\npd.read_csv(my_path, dtype={'id' : np.str})\n```\n\nnor does this either:\n\n``` python\npd.read_csv(my_path, dtype={'id' : str})\n```\n\nI get: `Datatype not understood`\n\nThis is with `pandas-0.10.1`\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [{"Loc": [12, 18], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3\nand\n2", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "meta-llama", "repo_name": "llama", "base_commit": "53011c3d7946dadb8274a4c5c7586ab54edf792d", "iss_html_url": "https://github.com/meta-llama/llama/issues/48", "iss_label": "", "title": "How to run 13B model on 4*16G V100\uff1f", "body": "RuntimeError: CUDA out of memory. Tried to allocate 160.00 MiB (GPU 0; 15.78 GiB total capacity; 14.26 GiB already allocated; 121.19 MiB free; 14.69 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF\r\nERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 143) of binary: /opt/conda/envs/torch1.12/bin/python", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "fabawi", "pro": "wrapyfi"}, {"org": "modular-ml", "pro": "wrapyfi-examples_llama"}], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "2", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["wrapyfi", "wrapyfi-examples_llama"]}}, {"organization": "meta-llama", "repo_name": "llama", "base_commit": "7e1b864d574fe6f5ff75fa1d028feb269f7152d2", "iss_html_url": "https://github.com/meta-llama/llama/issues/836", "iss_label": "model-usage", "title": "Failed to run llama2-13B but it worked with llama2-7B", "body": "It worked with llama2-7b. 
But when I tried to run the **llama2-13b** model using this `torchrun --nproc_per_node 2 example_chat_completion.py --ckpt_dir /path/to/llama-2-13b-chat/ --tokenizer_path /path/to/tokenizer.model --max_seq_len 128 --max_batch_size 4`, it didn't work.\r\n\r\nError log in brief: `RuntimeError: CUDA error: invalid device ordinal`\r\n\r\n#### Full error log\r\n```log\r\nWARNING:torch.distributed.run:\r\n*****************************************\r\nSetting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.\r\n*****************************************\r\n> initializing model parallel with size 2\r\n> initializing ddp with size 1\r\n> initializing pipeline with size 1\r\nTraceback (most recent call last):\r\n File \"/home/alex/joy/ml/llama_playground/llama/example_chat_completion.py\", line 104, in <module>\r\n fire.Fire(main)\r\n File \"/home/alex/miniconda3/envs/llama/lib/python3.11/site-packages/fire/core.py\", line 141, in Fire\r\n component_trace = _Fire(component, args, parsed_flag_args, context, name)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/alex/miniconda3/envs/llama/lib/python3.11/site-packages/fire/core.py\", line 475, in _Fire\r\n component, remaining_args = _CallAndUpdateTrace(\r\n ^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/alex/miniconda3/envs/llama/lib/python3.11/site-packages/fire/core.py\", line 691, in _CallAndUpdateTrace\r\n component = fn(*varargs, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/alex/joy/ml/llama_playground/llama/example_chat_completion.py\", line 35, in main\r\n generator = Llama.build(\r\n ^^^^^^^^^^^^\r\n File \"/home/alex/joy/ml/llama_playground/llama/llama/generation.py\", line 92, in build\r\n torch.cuda.set_device(local_rank)\r\n File \"/home/alex/miniconda3/envs/llama/lib/python3.11/site-packages/torch/cuda/__init__.py\", line 350, in set_device\r\n torch._C._cuda_setDevice(device)\r\nRuntimeError: CUDA error: invalid device ordinal\r\nCUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.\r\nFor debugging consider passing CUDA_LAUNCH_BLOCKING=1.\r\nCompile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.\r\n\r\nWARNING:torch.distributed.elastic.multiprocessing.api:Sending process 41031 closing signal SIGTERM\r\nERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 1 (pid: 41032) of binary: /home/alex/miniconda3/envs/llama/bin/python\r\nTraceback (most recent call last):\r\n File \"/home/alex/miniconda3/envs/llama/bin/torchrun\", line 33, in <module>\r\n sys.exit(load_entry_point('torch==2.0.1', 'console_scripts', 'torchrun')())\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/alex/miniconda3/envs/llama/lib/python3.11/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py\", line 346, in wrapper\r\n return f(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^\r\n File \"/home/alex/miniconda3/envs/llama/lib/python3.11/site-packages/torch/distributed/run.py\", line 794, in main\r\n run(args)\r\n File \"/home/alex/miniconda3/envs/llama/lib/python3.11/site-packages/torch/distributed/run.py\", line 785, in run\r\n elastic_launch(\r\n File \"/home/alex/miniconda3/envs/llama/lib/python3.11/site-packages/torch/distributed/launcher/api.py\", line 134, in __call__\r\n return launch_agent(self._config, self._entrypoint, 
list(args))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/alex/miniconda3/envs/llama/lib/python3.11/site-packages/torch/distributed/launcher/api.py\", line 250, in launch_agent\r\n raise ChildFailedError(\r\ntorch.distributed.elastic.multiprocessing.errors.ChildFailedError:\r\n============================================================\r\nexample_chat_completion.py FAILED\r\n------------------------------------------------------------\r\nFailures:\r\n <NO_OTHER_FAILURES>\r\n------------------------------------------------------------\r\nRoot Cause (first observed failure):\r\n[0]:\r\n time : 2023-10-02_12:32:27\r\n host : alex-workstation\r\n rank : 1 (local_rank: 1)\r\n exitcode : 1 (pid: 41032)\r\n error_file: <N/A>\r\n traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html\r\n============================================================\r\n```\r\n\r\n\r\n#### System Specs\r\ni9 9900K + 16G DDR4 (with 16GB swap) + 2080ti (modded version with 22GB VRAM, the card runs smoothly on Windows and Linux)\r\nOS:\r\nUbuntu 22.04 x86_64\r\nEnvironments:\r\nFrom miniconda\r\n```conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia```\r\n\r\n#### My attempt NO.1 \r\nI selected my only GPU, the RTX 2080ti, in the terminal. `export CUDA_VISIBLE_DEVICES=0` **0** is the ID of my RTX 2080ti.\r\nI looked up the GPU id by ```sudo lshw -C display```\r\n\r\nResult:\r\n```log\r\n *-display \r\n description: VGA compatible controller\r\n product: TU102 [GeForce RTX 2080 Ti Rev. A]\r\n vendor: NVIDIA Corporation\r\n physical id: 0\r\n bus info: pci@0000:01:00.0\r\n version: a1\r\n width: 64 bits\r\n clock: 33MHz\r\n capabilities: pm msi pciexpress vga_controller bus_master cap_list rom\r\n configuration: driver=nvidia latency=0\r\n resources: iomemory:2f0-2ef iomemory:2f0-2ef irq:186 memory:de000000-deffffff memory:2fe0000000-2fefffffff memory:2ff0000000-2ff1ffffff ioport:e000(size=128) memory:c0000-dffff\r\n *-display\r\n description: Display controller\r\n product: CoffeeLake-S GT2 [UHD Graphics 630]\r\n vendor: Intel Corporation\r\n physical id: 2\r\n bus info: pci@0000:00:02.0\r\n version: 02\r\n width: 64 bits\r\n clock: 33MHz\r\n capabilities: pciexpress msi pm bus_master cap_list\r\n configuration: driver=i915 latency=0\r\n resources: iomemory:2f0-2ef iomemory:2f0-2ef irq:185 memory:2ffe000000-2ffeffffff memory:2fd0000000-2fdfffffff ioport:f000(size=64)\r\n *-graphics\r\n product: EFI VGA\r\n physical id: 2\r\n logical name: /dev/fb0\r\n capabilities: fb\r\n configuration: depth=32 resolution=2560,1080\r\n```\r\nBut it's still the same error. FYI, when starting to run llama2-13B, the ram usage hadn't even reached 16GB yet.\r\n\r\nWith some test code using pytorch:\r\n```python\r\nimport torch\r\ndevice_count = torch.cuda.device_count()\r\nprint(f\"Number of available devices: {device_count}\")\r\n\r\nfor i in range(device_count):\r\n print(f\"Device {i}: {torch.cuda.get_device_name(i)}\")\r\n```\r\noutput: \r\n**Number of available devices: 1\r\nDevice 0: NVIDIA GeForce RTX 2080 Ti**\r\n\r\nNvidia SMI info\r\n```log\r\n+---------------------------------------------------------------------------------------+\r\n| NVIDIA-SMI 535.113.01 Driver Version: 535.113.01 CUDA Version: 12.2 |\r\n|-----------------------------------------+----------------------+----------------------+\r\n| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |\r\n| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. 
|\r\n| | | MIG M. |\r\n|=========================================+======================+======================|\r\n| 0 NVIDIA GeForce RTX 2080 Ti Off | 00000000:01:00.0 On | N/A |\r\n| 41% 34C P8 30W / 260W | 288MiB / 22528MiB | 12% Default |\r\n| | | N/A |\r\n+-----------------------------------------+----------------------+----------------------+\r\n \r\n+---------------------------------------------------------------------------------------+\r\n| Processes: |\r\n| GPU GI CI PID Type Process name GPU Memory |\r\n| ID ID Usage |\r\n|=======================================================================================|\r\n| 0 N/A N/A 2216 G /usr/lib/xorg/Xorg 165MiB |\r\n| 0 N/A N/A 2338 G /usr/bin/gnome-shell 34MiB |\r\n| 0 N/A N/A 34805 G ...26077060,3793940789578302769,262144 82MiB |\r\n| 0 N/A N/A 44004 G ...sktop/5088/usr/bin/telegram-desktop 3MiB |\r\n+---------------------------------------------------------------------------------------+\r\n```\r\n\r\n#### My attempt NO.2\r\n\r\nChanged to Pytorch nightly and cuda 12.1 support. `conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch-nightly -c nvidia` My Linux is using Nvidia driver version 535.113.01 with cuda 12.2 support.\r\n\r\nPytorch version: 2.2.0.dev20231001\r\nSame error.\r\n\r\n#### My attempt NO.3\r\nDowngrade the Linux driver? (Not tested yet)\r\n\r\n#### My attempt NO.4\r\nUse the Docker versions of Pytorch and CUDA inside a docker instance. https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch \r\n\r\nAfter downloading the docker image, I started a docker instance by running `docker run --gpus all -it --rm nvcr.io/nvidia/pytorch:23.09-py3`\r\n\r\nError\r\n`docker: Error response from daemon: could not select device driver \"\" with capabilities: [[gpu]]`\r\n\r\n\r\n\r\nHow can I run llama2-13B-chat or 70B with an RTX graphics card with 22GB of VRAM? Thanks in advance!\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "meta-llama", "pro": "llama-cookbook", "path": ["examples/README.md", "examples/inference.py"]}], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["examples/inference.py"], "doc": ["examples/README.md"], "test": [], "config": [], "asset": []}}, {"organization": "meta-llama", "repo_name": "llama", "base_commit": "57b0eb62de0636e75af471e49e2f1862d908d9d8", "iss_html_url": "https://github.com/meta-llama/llama/issues/201", "iss_label": "", "title": "Torchrun distributed running does not work", "body": "Running in a distributed manner either returns an error or, with the simplest example, produces obviously incorrect output.\r\n\r\nThe following is the result of running the 13B model across two nodes. 
Node A:\r\n\r\n`python -m torch.distributed.run --nproc_per_node 1 --nnodes=2 --node_rank=0 --master_addr=\"gpu3.lan\" --master_port=1234 example.py --ckpt_dir $MODELS/65B --tokenizer_path $MODELS/tokenizer.model`\r\n\r\nNode B:\r\n\r\n`python -m torch.distributed.run --nproc_per_node 1 --nnodes=2 --node_rank=1 --master_addr=\"gpu3.lan\" --master_port=1234 example.py --ckpt_dir $MODELS/65B --tokenizer_path $MODELS/tokenizer.model`\r\n\r\nIt does complete without error, but the results are messed up:\r\n\r\n![image](https://user-images.githubusercontent.com/252193/225178366-2c929cd0-3e87-42d4-8bb5-5cc737189959.png)\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "57b0eb62de0636e75af471e49e2f1862d908d9d8", "files": [{"path": "example.py", "Loc": {"(None, 'setup_model_parallel', 19)": {"mod": []}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["example.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "meta-llama", "repo_name": "llama", "base_commit": "ea9f33d6d3ea8ed7d560d270986407fd6c2e52b7", "iss_html_url": "https://github.com/meta-llama/llama/issues/670", "iss_label": "", "title": "Counting tokens for Chat models", "body": "Does anyone know how to calculate prompt and completion tokens for Llama Chat models for monitoring purposes?\r\nCan we add this in responses, as we often don't have libraries to achieve this in languages like Java, Kotlin, etc.?\r\n\r\nSimilar to tiktoken by openai - https://github.com/openai/tiktoken", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "ea9f33d6d3ea8ed7d560d270986407fd6c2e52b7", "files": [{"path": "llama/tokenizer.py", "Loc": {"('Tokenizer', 'encode', 31)": {"mod": []}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["llama/tokenizer.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "meta-llama", "repo_name": "llama", "base_commit": "8cd608cc019b306ab6d8b7abd61014b436968086", "iss_html_url": "https://github.com/meta-llama/llama/issues/732", "iss_label": "download-install", "title": "download.sh problem, llama model 70B, results in several 0kb .pth files after download; two separate network locations for testing; reported by several users on different networks; MacOS Apple Silicon M2 Ventura 13.4.1 (c) (22F770820d)", "body": "After verifying that all libraries from the requirements.txt were installed in my python3 environment, in a bash terminal I run llama-main/download.sh -- however, upon completing the download (and overall execution) I am finding that one or more consolidated.0x.pth files are zero kilobytes, containing no data. \r\n\r\nI have tried to download all .pth files on both WiFi & Ethernet from two separate networks. One at home, on my Verizon 5g ISP and the other on-campus at MIT. The same result occurs. I have verified disk storage space on both machines on which I attempted to acquire these files. It seems \"consolidated.05.pth\" fails most often, with the successfully acquired .pth files being 17.25 GB in size. 
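A quick check for truncated shards (a sketch; the model directory name is a placeholder for wherever download.sh put the files):\r\n\r\n```python\r\nimport glob\r\nimport os\r\n\r\n# Print every consolidated checkpoint shard that came down as zero bytes.\r\nfor path in sorted(glob.glob(\"llama-2-70b/consolidated.*.pth\")):\r\n    if os.path.getsize(path) == 0:\r\n        print(\"empty shard:\", path)\r\n```\r\n\r\n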
However this morning I am seeing that consolidated.**05**.pth, consolidated.**04**.pth, and consolidated.**00**.pth have failed.\r\n\r\nI am discouraged, as I have attempted to acquire these several times and requested a Meta access key twice. \r\n\r\nAre there any recommendations you can provide me with? Other resources, endpoints, or potential port forwards/triggers that might resolve the problem in some way? \r\n\r\nOr is this a **bug**?\r\n\r\nThank you for your time!!\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "8cd608cc019b306ab6d8b7abd61014b436968086", "files": [{"path": "download.sh", "Loc": {"(None, None, 23)": {"mod": [23]}}, "status": "modified"}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "3", "loc_way": "comment", "loc_scope": "", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["download.sh"]}}, {"organization": "meta-llama", "repo_name": "llama", "base_commit": "99e19d4f83b7fe77e8b3b692e01019640d7b457a", "iss_html_url": "https://github.com/meta-llama/llama/issues/493", "iss_label": "download-install", "title": "download.sh: line 2: $'\\r': command not found", "body": "Running download.sh with Cygwin on Windows gives back \"download.sh: line 2: $'\\r': command not found\".\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "99e19d4f83b7fe77e8b3b692e01019640d7b457a", "files": [{"path": "download.sh", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["download.sh"]}}, {"organization": "meta-llama", "repo_name": "llama", "base_commit": "1076b9c51c77ad06e9d7ba8a4c6df775741732bd", "iss_html_url": "https://github.com/meta-llama/llama/issues/21", "iss_label": "", "title": "Add to huggingface", "body": null, "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "docs", "pro": "transformers", "path": ["model_doc/llama"]}], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "comment", "loc_scope": "", "info_type": "Code"}, "loctype": {"code": [], "doc": ["model_doc/llama"], "test": [], "config": [], "asset": []}}, {"organization": "meta-llama", "repo_name": "llama", "base_commit": "7565eb6fee2175b2d4fe2cfb45067a61b35d7f5e", "iss_html_url": "https://github.com/meta-llama/llama/issues/751", "iss_label": "documentation", "title": "Run llama2 on specified GPU", "body": "Suppose I have 8 A6000 GPUs and I would like to run separate experiments on separate GPUs; how can I do it? For example, I want to run chat_completion.py on CUDA:0 and run text_completion.py on CUDA:1 simultaneously. Are there any ways to do it? 
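For reference, the usual approach (a sketch, assuming the stock example scripts, `torchrun` on PATH, and placeholder checkpoint/tokenizer paths) is to pin each launch to one GPU via `CUDA_VISIBLE_DEVICES`, e.g. from Python:\r\n\r\n```python\r\nimport os\r\nimport subprocess\r\n\r\n# Each launch sees only the GPU named in CUDA_VISIBLE_DEVICES,\r\n# so inside each run the selected card shows up as cuda:0.\r\nfor gpu, script in [(\"0\", \"example_chat_completion.py\"),\r\n                    (\"1\", \"example_text_completion.py\")]:\r\n    subprocess.Popen(\r\n        [\"torchrun\", \"--nproc_per_node\", \"1\", script,\r\n         \"--ckpt_dir\", \"llama-2-7b/\", \"--tokenizer_path\", \"tokenizer.model\"],\r\n        env={**os.environ, \"CUDA_VISIBLE_DEVICES\": gpu},\r\n    )\r\n```\r\n\r\n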
Thank you.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "7565eb6fee2175b2d4fe2cfb45067a61b35d7f5e", "files": [{"path": "example_text_completion.py", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3\nhow can I do it", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["example_text_completion.py"], "doc": [], "test": [], "config": [], "asset": []}}, {"organization": "meta-llama", "repo_name": "llama", "base_commit": "a102a597d1eb5d437f98dc0b55668ff61bc493b8", "iss_html_url": "https://github.com/meta-llama/llama/issues/740", "iss_label": "download-install", "title": "download.sh: Enter for all models fails", "body": "- Procedure\r\n`source download.sh; <enter url>; <Enter for all models>`\r\n- Result\r\nFolders etc. set up, models not downloaded. 403 Forbidden Error\r\n- TS\r\nWas able to download all models by explicitly passing names as a list", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [], "ass_file_loc": ["wget"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["wget"]}}, {"organization": "meta-llama", "repo_name": "llama", "base_commit": "d7e2e37e163981fd674ea2a633fac2014550898d", "iss_html_url": "https://github.com/meta-llama/llama/issues/795", "iss_label": "", "title": "[Question] Is the Use of Llama2 Forbidden in Languages Other Than English?", "body": "Hello,\r\n\r\nI recently came across a claim from [Baichuan-inc](https://github.com/baichuan-inc) during their live stream event and in the press release for the Baichuan2 model. They stated that Meta prohibits the use of Llama2 in languages other than English.\r\n\r\nHowever, after reviewing the [use policy](https://ai.meta.com/llama/use-policy/) and the [license agreement](https://ai.meta.com/llama/license/) provided by Meta, I couldn't find any specific restriction regarding the model's application language. Additionally, in the `Responsible-Use-Guide.pdf`, there are even mentions of considerations for markets in other languages.\r\n\r\nCould you please clarify if the statement by [Baichuan-inc](https://github.com/baichuan-inc) that \"Meta prohibits the use of Llama2 in languages other than English,\" is accurate? \r\n\r\nThank you!\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {"base_commit": "d7e2e37e163981fd674ea2a633fac2014550898d", "files": [{"path": "MODEL_CARD.md", "Loc": {}}]}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3\n\u8be2\u95ee\u5e93\u8bed\u8a00\u652f\u6301\u4fe1\u606f", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["MODEL_CARD.md"], "test": [], "config": [], "asset": []}}, {"organization": "meta-llama", "repo_name": "llama", "base_commit": "57b0eb62de0636e75af471e49e2f1862d908d9d8", "iss_html_url": "https://github.com/meta-llama/llama/issues/227", "iss_label": "documentation\nresearch-paper", "title": "where is the train file?", "body": "where is the train file? 
I want to learn how to train.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": {}, "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "meta-llama", "pro": "llama-cookbook", "path": ["llama_finetuning.py"]}], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "2", "info_type": "Code\nDoc"}, "loctype": {"code": ["llama_finetuning.py"], "doc": [], "test": [], "config": [], "asset": []}}]